Falco + Nginx Plugin Development: Falcoya Days 57-61
~ Large-Scale Attack Verification and E2E Test Debugging Chronicle ~

Looking Back
Days 51-56 saw the publication of the E2E test report (Phase 1), i18n support, a rerun of attack verification, tuning of over-detection, and the creation of integrated documentation covering 37 rules and 810+ attack patterns. The Falco plugin's detection "net" had grown more refined, and it was time to enter the large-scale attack verification phase.
Day 57 (09/07) — Attack Verification Expansion
On this day I fed in newly generated attack logs and verified detection against the existing rules. When a large number of scenarios were injected at once, some cases were detected as expected while others failed.
I added the new failure examples to PROBLEM_PATTERNS.md, once again feeling keenly how difficult large-scale verification is.
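As a rough illustration of what "injecting a large number of scenarios at once" can look like, here is a minimal sketch that appends attack-pattern requests to an nginx access log in combined format. The log path, scenario names, and payloads are hypothetical stand-ins for illustration, not the project's actual test assets.

```python
# Minimal sketch: append attack-pattern requests to the access log that
# the Falco nginx plugin is assumed to read. Paths and payloads are
# illustrative examples, not the project's actual scenario set.
from datetime import datetime, timezone

ACCESS_LOG = "/var/log/nginx/access.log"   # assumed log path

# A few representative attack payloads (hypothetical scenarios)
SCENARIOS = [
    ("sql_injection", "/products?id=1%27%20OR%20%271%27=%271"),
    ("path_traversal", "/download?file=../../etc/passwd"),
    ("xss_attempt",    "/search?q=<script>alert(1)</script>"),
]

def combined_log_line(path: str) -> str:
    """Format one request in nginx 'combined' log format."""
    ts = datetime.now(timezone.utc).strftime("%d/%b/%Y:%H:%M:%S +0000")
    return (f'192.0.2.10 - - [{ts}] "GET {path} HTTP/1.1" 200 512 '
            f'"-" "attack-scenario-runner"')

with open(ACCESS_LOG, "a") as log:
    for name, path in SCENARIOS:
        log.write(combined_log_line(path) + "\n")
        print(f"injected scenario: {name}")
```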
Learning
Unexpected failures always occur in large-scale verification. Recording failure examples reliably is the first step toward future improvements.
Day 58 (09/08) — Test Verification Work
Continuing from the previous day, I verified E2E test results one by one and identified failed cases.
To determine why each one failed, I investigated by comparing logs and outputs. The root causes couldn't be identified yet, but I recorded the reproduction conditions and traces in integration-test-requirements.md.
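The log-versus-output comparison itself can be largely mechanical. The sketch below assumes Falco was run with -o json_output=true, so each detection is one JSON line containing a "rule" field, and that the expected rule per scenario lives in a hypothetical expected_detections.json; both file names are placeholders, not the project's actual files.

```python
# Minimal sketch: list scenarios whose expected rule never fired.
# Assumes Falco ran with -o json_output=true, writing one JSON event
# per line with a "rule" field. File names are hypothetical.
import json

with open("falco_events.jsonl") as f:
    fired_rules = {json.loads(line)["rule"] for line in f if line.strip()}

with open("expected_detections.json") as f:
    # e.g. {"sql_injection": "SQLi in Query String", ...}
    expected = json.load(f)

failed = {name: rule for name, rule in expected.items() if rule not in fired_rules}

for name, rule in failed.items():
    # These are the cases to record in integration-test-requirements.md
    print(f"NOT DETECTED: scenario={name} expected_rule={rule}")
```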
Leaving these "failure footprints" one by one leads to future improvements.
Learning
Leaving failure footprints is the path to improvement. Recording reproduction conditions and traces becomes important clues for problem-solving.
Day 59 (09/09) — Fatal Mistake: Breaking the Output Without Consulting the Documentation
On this day I hit a critical problem: Falco's detection logs were not reflected in the report, and all output disappeared.
The cause was entirely my own fault. Although the output specifications were clearly documented in integration-test-requirements.md, I changed them arbitrarily without consulting the documentation.
As a result, Falco was still detecting internally, but from the user's perspective it looked as if "Falco had gone silent." For an OSS project there is no greater risk to trust, and it was a spine-chilling experience.
I recovered by reverting the implementation and re-reading the documentation, but this failure was devastating.
So I decided to strengthen the documentation further:
- Added "output specification compliance check items" to integration-test-requirements.md (a minimal sketch of such a check follows this list)
- Added a new pattern, "changed output specifications without referring to documentation," to PROBLEM_PATTERNS.md
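As one possible concrete form of such a compliance check, a small guard like the following would catch this class of failure automatically: every rule that fired in Falco's JSON event log must also appear in the generated report. The file names and the "search the report by rule name" format are assumptions for illustration.

```python
# Minimal sketch of an output-spec compliance check: fail loudly if a
# rule fired in Falco's JSON events but is missing from the report.
# File names and the report format are assumptions, not the project's spec.
import json
import sys

with open("falco_events.jsonl") as f:
    fired_rules = {json.loads(line)["rule"] for line in f if line.strip()}

with open("e2e_report.md") as f:
    report_text = f.read()

missing = sorted(rule for rule in fired_rules if rule not in report_text)

if missing:
    print("Report is missing detections (Falco was NOT silent):")
    for rule in missing:
        print(f"  - {rule}")
    sys.exit(1)

print(f"OK: all {len(fired_rules)} fired rules appear in the report")
```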
The learning is clear: read the documentation before the code, or we will inevitably repeat the same failure.
Learning
Read documentation before code. Changes ignoring specifications cause critical problems that lose user trust.
Day 60 (09/10) — CI Infrastructure Instability
On this day, problems occurred with E2E test execution on GitHub Actions: jobs would stop midway, and artifacts sometimes weren't saved correctly.
Because environment dependencies were a possibility, I couldn't immediately identify the root cause, so I recorded it as a "CI infrastructure issue" in PROBLEM_PATTERNS.md for comparison when it recurs.
Learning
Record CI infrastructure issues too. Environment-dependent problems have low reproducibility, making detailed records of occurrence conditions important.
Day 61 (09/11) — Security E2E Debugging
This day was focused on debugging "Security Verification E2E Tests." I executed test cases one by one, compared detection logs with report outputs, and identified inconsistencies.
During verification I found several bugs and rule adjustment points, which I added to integration-test-requirements.md.
It was tedious and time-consuming work, but I realized that this accumulation is what makes Falco practically usable.
Learning
Accumulation of tedious debugging work becomes practical power. Carefully resolving each inconsistency is the path to quality improvement.
Tasks Performed on Days 57-61
- Attack scenario expansion and verification (recording failure cases)
- E2E test result confirmation and cause investigation
- Fixing critical bugs from output specification changes
- Major documentation updates (adding output specification compliance check items)
- CI infrastructure issue investigation and recording
- Security E2E test debugging and rule adjustments
Created/Updated Documentation
- integration-test-requirements.md → Added output specification compliance check items and adjustment notes
- PROBLEM_PATTERNS.md → Added the pattern "Falco went silent after changing output specifications without referring to documentation"
- Others → Added CI infrastructure issue reproduction conditions
Summary
Days 57-61 were spent on large-scale attack verification and E2E debugging. The failure on 9/9 was particularly critical: changing the output specifications without referring to the documentation created a state in which Falco appeared to have gone silent. However, embedding this painful mistake into the documentation and turning it into a recurrence-prevention mechanism was a significant harvest.
The most important thing in OSS development is to "openly publish and continuously improve." Next, we aim to publish the Phase 2 test report, which runs the expanded rules and attack patterns comprehensively.