Grounded in experience. Focused on clarity.

Virginia Levy Abulafia


AI Governance & Decision Systems | Auditability, AI Readiness, BI



For 25 years, I worked in environments where decisions were not a theoretical exercise: when you make a mistake, the best-case scenario is that everything stops.
If it goes worse, it goes much worse.

I operated in high-pressure, highly regulated environments (airport operations), dealing with complex systems, operational coordination, and distributed responsibility. That’s where I learned a simple but uncomfortable truth: complexity is not “managed”; it is engineered.

Today, I apply this approach to Business Intelligence, functional analysis, and requirements engineering, working on data-driven decision systems and on introducing AI into real-world operational contexts. My focus is not the model, but the system: data, constraints, decision flows, accountability, failure modes.

I don’t work on AI as experimentation.
I work on what happens when data and AI are put into production and start influencing operational decisions.

I intervene on:

– definition and validation of functional requirements

– data quality, consistency, and limitations

– AI readiness and data governance

– compliance-by-design processes and documentation

– decision guardrails under uncertainty

I design robust, traceable, and auditable decision systems, built to hold over time, not just to work under ideal conditions.

My position is clear, and it goes beyond AI: accuracy is a form of ethics.

Poorly structured data leads to fragile decisions.
Fragile decisions eventually turn into operational problems.

My Tools

LANGUAGES: 

Python · R · SQL

FAVORITE LIBRARIES:

Pandas · Matplotlib · Seaborn · Folium · Scikit-learn

BI & WORKFLOW: 

Excel · Tableau · Looker Studio · BigQuery · Google Sheets

AI:

Prompt Engineering · APIs · Predictive Modelling · Compliance

AI Policy

I follow a transparent and responsible approach to AI, aligned with ISO/IEC 42001 principles.
My work integrates risk assessment, human-in-the-loop design, and traceability by default.
Responsible AI is not a constraint. It is a cultural choice that enables trust and long-term value.
