Case Study: Leak of sensitive airport data due to email configuration error (2007)
Maciej Lesiak
- 14 minutes read - 2785 words
This article is also available in Polish:
Case Study: Wyciek wrażliwych danych lotniskowych przez błąd konfiguracji e-mail (2007)
DISCLAIMER: All personal data has been anonymized. The article is educational in nature and concerns events from 18 years ago. I removed the key airport plan and the list of personnel from security services long ago.
In 2007, during preparations for the launch of the new Terminal 2 at Chopin Airport in Warsaw, there was an accidental leak of sensitive operational documents. As a person unconnected to the airport, I received by mistake detailed materials concerning comprehensive security tests of the facility codenamed “Test Prawdy” (Truth Test). After many years, I decided to describe this case because this incident constitutes a perfect example of how errors in email system configuration and lack of procedures can lead to serious information security breaches. I note that I am not a cybersecurity specialist and do not position myself as such. This is merely a case study of an old matter.
Historical context: Terminal 2 as a strategic investment
In 2007, Chopin Airport was undergoing the largest expansion in its history. The new Terminal 2 was a key element in preparations for the European Football Championship, which Poland was to co-organize with Ukraine in 2012 (the hosts were announced on April 18, 2007 in Cardiff). The facility was designed to handle 6.5 million passengers annually with 70 check-in counters. For Poland, 2007 was an era of building modern infrastructure and positioning itself as an attractive tourist destination in Europe.
The strategic importance of this project was enormous. Terminal 2 was to become the country’s showcase for the growing number of travelers visiting Poland. Every element of infrastructure, from security control systems to passenger flows, had to be thoroughly tested before official opening. This is precisely why comprehensive exercises codenamed “Test Prawdy” were planned.
“Test Prawdy” - major operational exercises
Scheduled for October 16, 2007, the exercises were a comprehensive undertaking testing all aspects of the new terminal's operation under conditions as close as possible to real service. Organizing something on this scale required coordination among many services and institutions.
The exercises involved one hundred extras playing different types of passengers, from able-bodied persons to people in wheelchairs, with children, or even transporting unusual luggage like kayaks or bicycles. The scenario included six test phases lasting from 7:00 AM to 3:30 PM. Each phase was to test different aspects of terminal operation, from basic passenger flows to complicated customs and border guard control procedures.
Most impressive was that a real code C aircraft (ICAO reference category) was used for the tests to check the jetbridge docking systems. This shows how seriously the entire certification process was treated and how high the stakes were. An error at this stage could have delayed the terminal opening by months, with all the political and economic consequences.
Course of the incident: how the leak occurred
On October 11, 2007, five days before the planned exercises, an email was sent with attachments containing detailed materials concerning "Test Prawdy." It was sent by Tadeusz D×××××××, a Terminal Technology Specialist, to a group of recipients spanning various airport services; one of those recipients, forwarding the email to their private mailbox, made a mistake and sent it to me instead. The email itself contained a request for comments and correction proposals for the scenario, indicating that the documents were still being finalized.
The cause of the leak turned out to be prosaic, though disturbing: someone forwarded the message to the wrong address through a mistaken email alias.
Particularly disturbing was that the documents contained no confidentiality markings or instructions regarding their handling. For an outsider, it was not obvious that they had received sensitive materials, which could lead to their further distribution or inadvertent sharing. Personally, I suspected a provocation, because I was unable to believe that someone had made such a huge error!

Content of the sent documents
The materials I received were extremely detailed and contained information that, in the wrong hands, could be used for sabotage or terrorist activities. The first document was a 24-slide operational presentation containing a detailed schedule of all six test phases with specific times and locations.
The second document, significantly more sensitive, was a 26-page technical scenario containing detailed terminal plans with markings of all levels from -1 (basements with loading ramp) to +9.70 meters (arrivals level). The document also contained detailed lists of personnel responsible for individual areas, security procedures for various services, and technical information about FIDS, CUTE, and SATE control systems.

The most sensitive information concerned:
- Exact placement of PSG (Border Guard) control stations
- “RED LINE” procedures used by the Customs Office
- Transport paths for persons detained by border services
- Security systems against unauthorized access
- Procedures for handling disabled persons by special services
However, the most interesting was the third document, which I deleted immediately after receiving it. It contained detailed layouts of the terminal and the airport aprons, with marked corridors, security measures, and procedures. It looked like part of a construction plan, with diagrams. The problem was that, in addition to information about procedures and security measures, it also included a list of supervising service personnel with mobile (GSM) contact numbers. In short, along with the email I received a list of the responsible persons from every party involved, from the terminal contractor through all the services, plus a comprehensive Terminal 2 design with its security measures and procedures… phew, I broke into a sweat.

Schedule and Phases of “Test Prawdy” Operation (16.10.2007)
| Phase | Time | Description of Activities |
|---|---|---|
| Phase 0 | 07:00 - 09:00 | Preparation. Separation of roadway in front of T-2. Staffing of work stations by all services (Agents, PSG, UC, PPL). System and communication checks. |
| Phase 1 | 09:00 - 10:30 | Passenger Check-in. Arrival of 100 extras (PAX) and entry to departures hall. Check-in at check-in counters, including hand baggage and oversized baggage. |
| Phase 2 | 10:45 - 11:00 | Security Control. Passengers undergo security control and document control conducted by PSG. |
| Phase 3 | 11:00 - 11:40 | Boarding - Bridge No. 4. Aircraft (SP) docking simulation. Passenger boarding check-in at GATE A37 and transition to jetbridge. |
| Phase 4 | 12:00 - 12:25 | Boarding - Bridge No. 8. Passenger check-in at GATE A29/A30. Passenger entry to jetbridge and aircraft boarding or bus boarding. |
| Phase 5 | 12:50 - 13:45 | Passenger Arrival. PAX arrival simulation to T-2. Transition from jetbridge to arrivals level (+9.70). Document control by PSG and transition to baggage claim hall. |
| Phase 6 | 14:30 - 15:30 | Goods Delivery. Test of goods delivery and distribution procedure from basement ramp (-4.20) to commercial zones in the pier. |
Level of Detail of Disclosed Data
| Data Category | Example Information | Potential Risk |
|---|---|---|
| Operational Plans | Detailed test schedule down to the minute. Division into six phases with precise activity descriptions. | Possibility of planning precise test disruption or attack at specific location and time. |
| Security Procedures | "RED LINE" and "GREEN LINE" customs control paths. Security and document control procedures by PSG. Transport path for detained persons from arrivals level (+9.70) to detention rooms (0.0). | Knowledge of bypassing or exploiting gaps in control procedures. Possibility of planning escape of detained person. |
| Technical Data | Tested systems: FIDS (visual information), CUTE (check-in), SATE (baggage sorting). Locations: GATE A37, A29, A30, bridges No. 4 and 8. | Enabling attacks on airport IT systems. Knowledge of physical location of key infrastructure elements. |
| Facility Plans | Terminal layouts with markings of all levels from basements (-4.20) to arrivals (+9.70). Location of cash registers, Executive lounges, restrooms and childcare rooms. | Complete knowledge of facility topography, facilitating planning of unauthorized access, sabotage or attack. |
| Personal Data and Resources | Lists of personnel responsible for individual areas. Number of involved extras (100) and their profiles (disabled, disoriented). | Risk of social engineering attacks, corruption attempts or personnel blackmail. |

Security risk analysis
You don't need to be a genius in cybersecurity, or in information security more broadly, to realize that a leak of documents of this kind, concerning a strategic facility such as a new airport terminal, carried multilevel risk. The most obvious threat was that persons with bad intentions could use the plans to prepare attacks or sabotage. Knowledge of the exact procedures of the security services could allow them to be bypassed or disrupted, especially at a time of heightened threat from terrorism and radicalization. Poland was repeatedly a transit country for terrorists. I was so scared that I didn't even tell friends about this matter.
From an operational security perspective, there was a risk that unauthorized persons could disrupt the tests themselves. Knowing the exact schedule and locations, someone could deliberately interfere with the exercises, which in turn could delay certification and the terminal opening. In the context of the upcoming European Championship, such a delay would have had painful reputational and financial consequences. Let the repeated delays in opening Berlin's new airport serve as a warning.
One cannot forget about the risk to information security and personal data. The documents contained detailed information about employees of various services, their roles and responsibilities. This data could be used for later social engineering attacks or personnel corruption attempts, e.g., to penetrate these organizations. Although GDPR didn’t exist then, supposedly in Poland there were services dealing with protection of sensitive data… supposedly…
From what I understand from today’s perspective, this incident can be classified as “Accidental Disclosure” - accidental disclosure of data at various sensitivity levels, from RESTRICTED (strategic facility plans) through CONFIDENTIAL (service operational procedures) to INTERNAL (organizational and personal data). I think there are appropriate specialists who have competencies to properly assess these issues.
System gaps of 2007
Let's look at this matter from the perspective of the internet as a rapidly developing technology. Analysis of this incident reveals a series of systemic problems characteristic of that era. Above all, distribution list management was, to put it mildly, primitive compared to today's standards: the entire list of addresses sat in the CC field. There were no verification mechanisms before sending, no automatic encryption, and no data leak prevention (DLP) systems. Even placing a remotely loaded image (a tracking pixel) in the email's HTML would have been enough to log every opening of such a message. But let's not be so harsh - the Signal and Waltz case I described in the context of the White House leak is probably a similar incident… so it's chaos.
Information classification was informal and imprecise. Documents had no confidentiality markings, there was no precise definition of who should have access to them, and procedures for handling sensitive documents were unclear or didn’t exist at all. This shows how much the level of cybersecurity awareness in 2007 differed from today’s standards. Although I claim this is a case from the field of general information security.
Equally problematic were employee training programs. Lack of awareness of the consequences of errors in handling sensitive data, routine treatment of security procedures, and the absence of verification mechanisms before sending documents are problems we unfortunately still encounter in some organizations. In my opinion, this very often comes down to routinely rubber-stamping procedures, and I hope that in the public sector these issues no longer end with someone going on sick leave when problems arise.
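The missing pre-send verification mentioned above can be as simple as checking recipients against an allowlist of approved domains. A minimal sketch; the domain names are purely illustrative:

```python
# Illustrative allowlist -- a real deployment would pull this from
# directory services or mail-gateway policy, not a hard-coded set.
ALLOWED_DOMAINS = {"airport.example", "border-guard.example"}

def external_recipients(recipients: list[str]) -> list[str]:
    """Return the addresses that fall outside the approved domains.

    A mail gateway running this check before delivery would have
    flagged a mis-send like the 2007 one to a private mailbox.
    """
    flagged = []
    for addr in recipients:
        domain = addr.rsplit("@", 1)[-1].strip().lower()
        if domain not in ALLOWED_DOMAINS:
            flagged.append(addr)
    return flagged
```

Even this crude check, wired into the outbound mail path, turns a silent mistake into a warning the sender has to consciously override.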
Technological perspective - 2007 vs 2025
The year 2007 was an era of technological breakthrough, but also of limitations that today seem archaic. Gmail was then a relatively new service (it only left beta in 2009, and I was one of the first people using it in Poland; early on Gmail was even used as free disk space - there were many projects that stored data chunks and an index in a mailbox, turning gigabytes of free storage into a makeshift "cloud" drive), and advanced data leak prevention systems were available only to the largest corporations. Email encryption existed but was complicated to use and rarely implemented in standard business communication.
Today’s security standards include automatic document classification, real-time DLP systems, end-to-end encryption as standard, multi-factor authentication, zero-trust architecture, and continuous monitoring of all data operations. This is a qualitative difference, not just quantitative.
Particularly important is today’s possibility of implementing automatic document tagging based on their content, role-based access control, and full audit trails for all operations. These systems could prevent a similar incident already at the stage of attempting to send an email with sensitive attachments. I myself have detected unpleasant things in my clients’ companies several times, analyzing only anomalies in their mail.
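Content-based tagging of the kind described above can be sketched as a keyword classifier. Real DLP products use far richer detectors; the keywords below are illustrative stand-ins, and the labels mirror the sensitivity levels mentioned earlier:

```python
# Ordered from most to least sensitive; first match wins. The keyword
# lists are illustrative, not a real pattern library.
SENSITIVITY_RULES = [
    ("RESTRICTED",   ["facility plan", "terminal layout", "security procedure"]),
    ("CONFIDENTIAL", ["control station", "detention", "red line"]),
    ("INTERNAL",     ["personnel list", "gsm number", "test schedule"]),
]

def classify(text: str) -> str:
    """Assign the highest-matching sensitivity label to a document."""
    lowered = text.lower()
    for label, keywords in SENSITIVITY_RULES:
        if any(keyword in lowered for keyword in keywords):
            return label
    return "PUBLIC"
```

A mail gateway combining a tagger like this with the recipient check could, for example, refuse to send anything tagged RESTRICTED outside the organization without explicit approval.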
Learning from mistakes or pretending nothing happened
The most important conclusion from this case is that the human factor will never disappear, regardless of technological progress. Human error remains the main cause of security incidents in most organizations. No technology will replace appropriate procedures, regular training, and security culture in the organization. It’s also important to make recipients aware of what to do with such mistakenly sent documents. Maybe in schools it would be worth making this an element of lessons, since we live in times when persuasive techniques and hostile activities are commonplace?
The second key conclusion concerns classification as the foundation of information security. Every organization should define clear data sensitivity levels, implement automatic tagging, and enforce procedures at all organizational levels. This is not an option but a necessity in today's world. Even in my basement world there are data sensitivity levels and appropriate procedures. At the airport, as we can see, the procedures existed only on paper.
The third area is monitoring and response. Even the best procedures can fail, so continuous communication monitoring systems, rapid anomaly detection, incident response procedures, and regular tests and exercises are crucial. I have friends who work in critical industries and have training practically non-stop. But let’s return to 2007…
What did I do? After receiving the documents, I took immediate action according to responsible disclosure principles - that’s what it’s nicely called, but I think any normal and honest person would do the same. First, I contacted the sender - Tadeusz - informing him about the erroneous sending. I explained that I should not be the recipient of these materials and that there was probably an error in the email system configuration. I also drew attention to the very sensitive nature of the attachments.
Of course, the person I wrote to was probably so scared that not only did they not report it to their superiors, they didn't even thank me! I don't know the official consequences of this specific incident; in my ideal version of events, immediate changes were introduced in procedures and the appropriate persons were informed. These probably included verification of distribution lists, additional personnel training, document classification procedures, and response protocols for similar cases. Surely some protocol was created afterward. That's what happened, right?
Security dilemmas and suspicions
The situation was so unusual that initially I suspected a provocation by the services (although I would probably never be an interesting target for such a provocation). Was I reading the wrong Usenet groups in those days, or, being paranoid, had someone decided that my interests were dangerous and I needed to be checked with an early-morning visit? I couldn't believe that documents of such a high sensitivity level could reach a random person through a simple system error. I was afraid it might be an attempt to test my reaction, or preparation for a search of my computer equipment under the pretext of possessing sensitive materials.
Paranoia led me to delete the most sensitive materials - the detailed airport plans with precise layouts, the personnel list with phone numbers and service ranks (mostly people from various services), and the detailed security procedures - as soon as I understood their content. I kept only two documents of a more general nature: the operational presentation and part of the technical scenario, which documented the fact of the incident without revealing the most sensitive details. I wanted to have some protection in case of a home visit…
Lessons from incident handling
This situation taught me an important lesson about how difficult decisions can be for a person who accidentally receives sensitive data. On one hand, there’s an obligation to immediately report the incident; on the other - understandable concerns about potential consequences. My decision to keep part of the documents was dictated by the desire to protect myself.
The ideal solution would be the existence of clear procedures for people who accidentally receive sensitive data. Such procedures should guarantee safety to the reporter and encourage responsible disclosure instead of quietly deleting materials. It would also be nice if these people weren’t later targets for blame-shifting for incompetence and sloppiness.
Conclusions for the future
Use Gmail less… but seriously, the case of the “Test Prawdy” document leak from 2007 remains relevant despite eighteen years passing. It shows how basic errors in processes - from data entry to distribution groups to handling email aliases - can lead to serious security breaches, regardless of available technology.
And generally, everything hangs on anomalies, but that’s a topic for another text…
Sources:
Photography: Departures hall in Terminal A at Chopin Airport in Warsaw. Date: November 8, 2011. Author: Adrian Grycuk.
Related
- #2520 Manipulating Recommendation Systems: GROK, White Genocide, and Musk's Racist Conspiracy Theories
- Bypassing Security Filters in ChatGPT's SVG Generation
- AI series: A scenario of how AI can take over recommendation systems, generating and reinforcing conspiracy theories and disinformation
- Paranoia as a Working Method: Anticipating Threats in IT, Part 1
- AI-Driven Marketing: The E-commerce Revolution and the Dawn of the Agentic Internet (!)
- Phatic Function in Practice: How ChatGPT's Conversation Maintenance Generates Millions in Losses
- GPTBot Is Scanning The Internet: How OpenAI Will Change Content Consumption and the Future of Search