Building a Production Honeypot on Routable BGP Space
I have been running honeypot infrastructure for a while, but recently rebuilt it into a production telemetry platform. The current deployment listens across 1,024 routable IPv4 addresses and roughly 1,813 IPv6 /48 equivalents, with coverage spanning 60+ TCP services plus UDP 69 (TFTP) and UDP 161 (SNMP).
This write-up explains the architecture and the operational outcomes, but deliberately avoids implementation details that would make evasion easier.
Foundation: Routable Announced Space
The sensors run on announced, unused routable space under direct operational control in Wolverhampton, UK. Because this address space is not hosting customer workloads during telemetry windows, unsolicited inbound traffic can be treated as suspicious by default.
Compared with single-IP honeypots, larger routed coverage provides a better view of broad internet attack behavior. It captures scanners and opportunistic exploit traffic at enough scale to produce stable trends, not one-off noise.
Service Coverage and Emulation
This is not a simple connect-and-close collector. Services are emulated at the protocol level, so clients continue far enough into the exchange to expose their intent.
- Credential surfaces: SSH, FTP, Telnet, IMAP, POP3, SMTP, LDAP/LDAPS, WinRM, VNC, and RDP workflows designed to capture repeated auth abuse patterns.
- Web and control planes: HTTP/HTTPS, container and orchestration APIs, and related admin interfaces targeted by automated exploit kits.
- Data stores: common database and middleware services used for unauthenticated access attempts and post-compromise staging.
- IoT and edge protocols: camera, router, and embedded-service emulation to surface credential stuffing and exploit probing.
- Network services: SMB and related protocols for legacy exploit signatures and mass-probe behavior.
The emphasis is on signal quality: enough protocol fidelity to classify intent, while keeping capture safe and controlled.
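To make the "continue far enough to expose intent" idea concrete, here is a minimal sketch of protocol-level emulation. This is hypothetical illustration, not the deployed code: a tiny state machine for a Telnet-style login dialogue that keeps re-prompting so each credential attempt can be captured.

```python
class LoginEmulator:
    """Walks a client through login/password prompts and records attempts.

    Illustrative only: a real emulator would handle timeouts, encoding,
    and per-protocol quirks. The point is that the dialogue continues,
    so intent (credential abuse) becomes observable.
    """

    def __init__(self):
        self.state = "banner"
        self.pending_user = None
        self.attempts = []  # captured (username, password) pairs

    def next_response(self, client_line=None):
        if self.state == "banner":
            self.state = "user"
            return "login: "
        if self.state == "user":
            self.pending_user = client_line
            self.state = "password"
            return "password: "
        if self.state == "password":
            self.attempts.append((self.pending_user, client_line))
            self.state = "user"  # always "fail" and re-prompt
            return "Login incorrect\nlogin: "
```

A connect-and-close collector would only ever see a SYN and maybe a banner grab; this shape of emulator also sees which usernames and passwords the bot is cycling through.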
Concurrency and Reliability
The runtime is event-driven and optimized for high connection churn. In practice, reliability comes down to one rule: never let request handlers block critical I/O paths.
External enrichment, storage operations, and deferred maintenance are kept off the hot-path network handling. That separation significantly reduced dropped sessions and improved consistency during traffic spikes.
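The "never block the hot path" rule can be sketched with asyncio (the post does not name its runtime; the queue and worker names here are illustrative). The connection handler only records and enqueues the event; enrichment and storage happen in a separate worker task.

```python
import asyncio


async def handle_connection(queue, peer, payload):
    # Hot path: capture and enqueue only -- no lookups, no storage writes.
    await queue.put({"peer": peer, "payload": payload})


async def enrichment_worker(queue, sink):
    # Cold path: slow operations (GeoIP lookups, persistence) live here.
    while True:
        event = await queue.get()
        event["enriched"] = True  # stand-in for real enrichment
        sink.append(event)
        queue.task_done()


async def main():
    # Bounded queue so backpressure is explicit rather than unbounded memory.
    queue = asyncio.Queue(maxsize=10_000)
    sink = []
    worker = asyncio.create_task(enrichment_worker(queue, sink))
    await handle_connection(queue, "198.51.100.7", b"GET / HTTP/1.1")
    await queue.join()  # wait for the worker to drain the queue
    worker.cancel()
    return sink
```

The design choice is the separation itself: under a traffic spike, the accept/read loop keeps draining sockets while enrichment lags behind in the queue, instead of sessions being dropped because a handler is stuck on a slow lookup.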
Classification Pipeline
Each event is classified using payload traits, protocol context, and service exposure. The output maps to practical categories, including credential brute-force, web exploitation, database enumeration, infrastructure API abuse, IoT exploitation, legacy network exploitation, and callback/C2 style activity.
The objective is actionable labels rather than generic "scan" records. That context improves local triage and makes downstream abuse reports useful to operators consuming them.
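The real rule set is not published, but a classification step of this shape can be sketched as a small ordered rule table over payload traits, with service context as a fallback. The patterns below are illustrative stand-ins; the category names follow the ones listed above.

```python
import re

# Ordered rules: first payload-trait match wins. Patterns are examples only.
RULES = [
    (re.compile(rb"(?i)union\s+select|\.\./\.\./"), "web-exploitation"),
    (re.compile(rb"(?i)show\s+databases"), "database-enumeration"),
    (re.compile(rb"/containers/create"), "infrastructure-api-abuse"),
]


def classify(service, payload):
    """Map a raw payload plus service context to an abuse category."""
    for pattern, label in RULES:
        if pattern.search(payload):
            return label
    # Fall back on service exposure when the payload alone is ambiguous.
    return {"ssh": "credential-bruteforce", "telnet": "credential-bruteforce"}.get(
        service, "unclassified"
    )
```

The payoff described in the post is that downstream consumers get "web-exploitation against HTTP" rather than a generic "scan" record, which is far easier to triage.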
Noise Suppression and Reporting Hygiene
A large fraction of unsolicited traffic is internet-wide measurement rather than direct abuse. Before any external submission, events pass through a suppression process that separates measurement traffic from malicious behavior.
Operational specifics of suppression are intentionally withheld. The important point is policy: scanner and research traffic is retained for internal telemetry baselining but excluded from abuse reporting.
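Since the suppression specifics are withheld, only the *shape* of the stated policy can be sketched: every event is retained for internal baselining, and only events not attributed to measurement sources go forward as report candidates. The scanner set here is a placeholder, not the real mechanism.

```python
# Placeholder set; the post deliberately does not say how measurement
# traffic is actually identified.
KNOWN_MEASUREMENT_SOURCES = {"192.0.2.10", "192.0.2.11"}


def route_event(event, baseline_store, report_queue):
    """Retain everything internally; only non-measurement events are reportable."""
    baseline_store.append(event)  # internal telemetry keeps all traffic
    if event["src"] not in KNOWN_MEASUREMENT_SOURCES:
        report_queue.append(event)  # candidate for external abuse reporting
```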
AbuseIPDB Contribution Model
Confirmed abuse events are reported with structured context: attacked service, observed attack method, source network attribution, and concise proof text. Reports are rate-limited to avoid duplicate noise and preserve quality.
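Report rate-limiting of the kind described can be sketched as a per-(source, category) cooldown, so repeat hits from the same bot do not become duplicate submissions. The window length and key shape here are assumptions for illustration.

```python
import time


class ReportLimiter:
    """Allow at most one report per (source, category) within a cooldown window."""

    def __init__(self, cooldown_seconds=900):
        self.cooldown = cooldown_seconds
        self.last_sent = {}  # (src, category) -> timestamp of last report

    def should_report(self, src, category, now=None):
        now = time.time() if now is None else now
        key = (src, category)
        if now - self.last_sent.get(key, float("-inf")) < self.cooldown:
            return False  # still inside the cooldown window: suppress duplicate
        self.last_sent[key] = now
        return True
```

Keying on category as well as source means a host that switches from SSH brute-force to web exploitation still generates a fresh, distinct report.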
Contributor profile: abuseipdb.com/user/121600
What the Data Shows
Some patterns are persistent: high-volume credential abuse, recurring legacy exploit traffic, and rapid automated probing against exposed infrastructure APIs. Other signals are more useful for operations, such as coordinated tool behavior across services and repeated payload families appearing from rotating source pools.
That mix is what makes the platform useful: long-run baseline trends plus high-confidence events that can be actioned quickly.
Live Dashboard and Next Steps
A live dashboard is available at hp-dash.aaran.cloud showing the current attack map, event stream, and top-source breakdowns.
Next iterations are focused on improving classification precision and campaign correlation so related activity is clustered as coherent abuse patterns rather than isolated hits.