The first job of port security is not glamorous. It is not threat hunting, behavioural anomaly detection, or AI-driven anything. It is knowing what is listening on every interface you own, and being able to explain why.
If you can answer that question for every server in your environment, you are already ahead of most organisations. If you cannot, every other security investment is built on sand.
Three questions worth knowing the answer to
For every host you operate, you should be able to answer three questions:
- What is listening? Which ports are reachable from the outside, which are open only on internal interfaces, and which are loopback-only.
- Should it be? For each open port, what process opened it, what user it runs as, and whether there is a documented reason for it to exist.
- What is behind it? A port banner tells you something is responding. The real question is whether the service behind it is patched, configured correctly, and authenticated.
If you can't answer all three for a host, that host is a soft target. Attackers don't need exotic exploits when they can find a forgotten Redis instance on port 6379 with no password.
How attackers find your open ports
Internet scanning has been industrialised. Tools like Shodan, Censys, and BinaryEdge continuously scan the entire IPv4 address space and index every responding service. By the time you spin up a server with an open port, you have minutes — sometimes seconds — before an automated scanner notices.
A typical scan profile looks for the same handful of high-value targets:
- 22, 23 — SSH and Telnet (Telnet should never be exposed to the public internet)
- 80, 443, 8080, 8443 — HTTP variants (these are usually fine if they're really web servers)
- 25, 465, 587, 993, 995 — SMTP and IMAP/POP
- 3306, 5432, 1433, 1521 — MySQL, PostgreSQL, MSSQL, Oracle (these should never be exposed)
- 6379, 11211, 27017 — Redis, Memcached, MongoDB (the "no auth by default" hall of fame)
- 3389, 5900 — RDP and VNC (these should be behind a VPN)
- 8086, 8888, 9200, 9300 — InfluxDB, Jupyter, Elasticsearch
If you have any of these open to the public internet without explicit reason, fix it today.
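A minimal sketch of what the automated scanners are doing, using only Python's standard library. The target address is a placeholder, and a real scanner sweeps far more ports, far faster; this is just the TCP connect check you can run against your own hosts.

```python
import socket

# Hypothetical target; replace with one of your own public IPs.
TARGET = "203.0.113.10"

# A subset of the high-value ports from the list above.
HIGH_VALUE_PORTS = {
    22: "SSH", 23: "Telnet", 3306: "MySQL", 5432: "PostgreSQL",
    1433: "MSSQL", 1521: "Oracle", 6379: "Redis", 11211: "Memcached",
    27017: "MongoDB", 3389: "RDP", 5900: "VNC", 9200: "Elasticsearch",
}

for port, service in sorted(HIGH_VALUE_PORTS.items()):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        # connect_ex returns 0 if the TCP handshake succeeded, i.e. the port is open.
        if s.connect_ex((TARGET, port)) == 0:
            print(f"OPEN  {port:<5} {service}")
```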
Use a port checker to see what the world sees
The single most useful audit you can run is to scan your own external IPs from outside your network. Your local ss -tlnp or netstat tells you what's listening on the host, but it doesn't tell you what gets through the cloud security group, the corporate firewall, and the NAT.
Run a port checker against your public endpoints quarterly. Specifically check:
- Every IP listed in your DNS records (A, AAAA)
- Every IP that has been allocated to you but isn't actively in use (unused addresses are favourite places for attackers to squat, because nobody is watching them)
- The IPs of any management bastions
For each open port the scanner reports, you should be able to immediately say: "Yes, that should be open, it's the [service]." If you can't, you have a finding.
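Building the target list for that scan can be automated. A minimal sketch, assuming you maintain a plain list of your published DNS names (the names below are placeholders); it uses the system resolver via Python's standard library to collect the A and AAAA records behind each name.

```python
import socket

# Hypothetical list of your published DNS names.
DNS_NAMES = ["www.example.com", "mail.example.com", "vpn.example.com"]

targets = set()
for name in DNS_NAMES:
    # getaddrinfo returns both A (IPv4) and AAAA (IPv6) results for the name.
    for family, _, _, _, sockaddr in socket.getaddrinfo(name, None):
        targets.add(sockaddr[0])

# Feed this list to your external port checker.
for ip in sorted(targets):
    print(ip)
```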
Three audits to run quarterly
These are cheap, they catch real problems, and they take an hour total.
Audit 1: External port scan
For every public IP you own, run an external port check. Compare against the previous quarter. Anything new is a finding. Anything that was open and is now closed needs to be verified (the service might have died unintentionally).
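The quarter-over-quarter comparison is easy to automate once the scan output is stored in a consistent form. A minimal sketch, assuming each quarter's results are saved as a JSON file mapping IP addresses to lists of open ports (the file names are placeholders):

```python
import json

# Hypothetical result files, each a JSON object like {"203.0.113.10": [22, 443], ...}.
with open("scan_previous.json") as f:
    previous = {ip: set(ports) for ip, ports in json.load(f).items()}
with open("scan_current.json") as f:
    current = {ip: set(ports) for ip, ports in json.load(f).items()}

for ip in sorted(set(previous) | set(current)):
    opened = current.get(ip, set()) - previous.get(ip, set())
    closed = previous.get(ip, set()) - current.get(ip, set())
    for port in sorted(opened):
        print(f"FINDING  {ip}:{port} is newly open")
    for port in sorted(closed):
        print(f"VERIFY   {ip}:{port} was open last quarter and is now closed")
```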
Audit 2: Internal port scan
From a normal user workstation, scan the internal network. You're looking for management interfaces, debug ports, and forgotten test services. Most "we left it open by accident" incidents are caught this way.
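A minimal sketch of that internal sweep, assuming an illustrative subnet and a handful of management and debug ports; a threaded connect check keeps it tolerable on a /24, though a dedicated scanner is the better tool for anything larger.

```python
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical internal subnet and a few management/debug ports worth looking for.
SUBNET = ipaddress.ip_network("10.20.30.0/24")
PORTS = [3389, 5900, 8080, 8888, 9001, 9200]

def check(host, port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        if s.connect_ex((str(host), port)) == 0:
            print(f"OPEN  {host}:{port}")

with ThreadPoolExecutor(max_workers=64) as pool:
    for host in SUBNET.hosts():
        for port in PORTS:
            pool.submit(check, host, port)
```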
Audit 3: Process-to-port mapping
On each server, run ss -tlnp (or netstat -tlnp) and confirm that every listening port maps to a process you expect, running as a user you expect. The classic incident pattern is a port that's been listening for six months running as a user nobody recognises — that's how persistence often hides.
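A minimal sketch of that check using the third-party psutil library; the expected allowlist is a placeholder and would normally come from your configuration management. Run it as root so process owners are visible.

```python
import psutil  # third-party: pip install psutil

# Hypothetical allowlist: port -> (expected process name, expected user).
EXPECTED = {
    22: ("sshd", "root"),
    443: ("nginx", "www-data"),
    5432: ("postgres", "postgres"),
}

for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_LISTEN or conn.pid is None:
        continue
    proc = psutil.Process(conn.pid)
    port = conn.laddr.port
    actual = (proc.name(), proc.username())
    if EXPECTED.get(port) != actual:
        # Either an undocumented listener or a documented port under the wrong process/user.
        print(f"FINDING  port {port} -> {actual[0]} running as {actual[1]}")
```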
Common surprises (and what to do)
In years of audits, the same patterns come up over and over.
A debug port left open in production
The classic. A developer enabled the Spring Boot Actuator on port 9001 to debug something in staging, the deploy script copied the config to production, and now /actuator/env exposes all environment variables on the public internet. Search "Spring Boot Actuator exposed" — there are thousands of these.
Fix: never bind debug or management interfaces to 0.0.0.0. Bind to loopback. If you need remote access, use SSH port forwarding or a VPN, not a public binding.
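The principle is framework-agnostic: whatever the stack, the bind address decides who can reach the port. A deliberately generic Python illustration (not the Spring Boot configuration itself):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bound to loopback: only processes on this host can connect.
server = HTTPServer(("127.0.0.1", 9001), SimpleHTTPRequestHandler)

# Bound to all interfaces: anyone the firewall lets through can connect.
# server = HTTPServer(("0.0.0.0", 9001), SimpleHTTPRequestHandler)

server.serve_forever()
```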
A management UI behind a default password
Grafana, Jenkins, Airflow, Jupyter, RabbitMQ — every one of these has a "default admin/admin" stage where it ships ready to be configured. If that stage gets exposed to the public internet for any window of time, you should assume it's compromised.
Fix: deploy with a randomised admin password from the very first start. Don't let a service exist in its default-password state on a public IP, ever.
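One way to enforce that, sketched with Python's standard secrets module; how the password is injected (environment variable, secret store, first-boot config) depends on the service and is left out here.

```python
import secrets

# Generate a strong random admin password before the service ever starts.
admin_password = secrets.token_urlsafe(24)

# Hand it to your provisioning step; never let the vendor default survive first boot.
print(admin_password)
```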
A reverse tunnel established months ago
Someone needed remote access for a one-off task, opened a reverse SSH tunnel, and forgot to close it. It's now a persistent backdoor that ignores your firewall rules. These are nearly invisible to external port scans because they originate as outbound connections.
Fix: audit ss -tn outputs for long-lived outbound connections to unusual addresses. Look for the same destination port (often 22, 443, 8443) connecting from many of your servers to one external IP.
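A minimal sketch of the per-host half of that audit, again using psutil: list established outbound TCP connections with the owning process so that long-lived tunnels to unfamiliar addresses stand out. Correlating the same destination across many servers is a job for your log pipeline, not this script.

```python
import psutil  # third-party: pip install psutil

# Ports commonly used for reverse tunnels; purely a review hint, not proof of anything.
TUNNEL_PORTS = {22, 443, 8443}

for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    proc = psutil.Process(conn.pid) if conn.pid else None
    flag = "CHECK" if conn.raddr.port in TUNNEL_PORTS else "     "
    print(f"{flag} {conn.laddr.ip} -> {conn.raddr.ip}:{conn.raddr.port} "
          f"({proc.name() if proc else 'unknown pid'})")
```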
A service nobody knows what it is
You scan a host, find port 38291 open, and nobody on the team recognises the port number. There's a process running called worker.sh owned by user tomcat. Nobody remembers deploying it. You've probably been compromised.
Fix: this is your "stop everything and investigate" finding. Capture the process, the open connections, the cron jobs, then take the host offline.
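A minimal sketch of the capture step, wrapping standard Linux commands via subprocess; the output directory is a placeholder, and in a real incident you would capture far more (memory, disk image) before touching the host.

```python
import subprocess
import time
from pathlib import Path

# Hypothetical location for the triage snapshot.
outdir = Path(f"/var/tmp/triage-{int(time.time())}")
outdir.mkdir(parents=True)

# Standard Linux commands: sockets with owning processes, full process tree,
# and the invoking user's crontab.
for name, cmd in {
    "sockets": ["ss", "-tnp"],
    "processes": ["ps", "auxf"],
    "crontab": ["crontab", "-l"],
}.items():
    result = subprocess.run(cmd, capture_output=True, text=True)
    (outdir / f"{name}.txt").write_text(result.stdout + result.stderr)

print(f"Triage snapshot written to {outdir}")
```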
The minimum baseline
If you do nothing else, do these:
- Default-deny inbound. Configure your cloud security groups to block all inbound by default. Open ports only as required, and document why each rule exists.
- Audit quarterly with a port checker. External view, from the internet, against your published IPs. Compare to the previous quarter.
- Never expose databases. No exceptions. Databases sit on private subnets with security groups that only allow connections from application servers.
- Use SSH key auth, not passwords. And rotate keys when team members leave.
- Monitor for new listeners. A new listening port on a server you didn't change is one of the highest-signal alerts you can configure.
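That last item is the easiest to automate. A minimal sketch using psutil: record a baseline of listening ports once, then run the comparison from cron or your monitoring agent and flag anything new (the baseline path and the alerting line are placeholders).

```python
import json
import psutil  # third-party: pip install psutil
from pathlib import Path

# Hypothetical location for the saved baseline of listening ports.
BASELINE = Path("/var/lib/port-baseline.json")

listening = sorted({
    c.laddr.port
    for c in psutil.net_connections(kind="tcp")
    if c.status == psutil.CONN_LISTEN
})

if not BASELINE.exists():
    BASELINE.write_text(json.dumps(listening))
else:
    baseline = set(json.loads(BASELINE.read_text()))
    new_ports = [p for p in listening if p not in baseline]
    if new_ports:
        # Replace with your real alerting; a new listener on an unchanged server
        # is one of the highest-signal events you can catch.
        print(f"ALERT: new listening ports since baseline: {new_ports}")
```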
Port security is one of those areas where the boring fundamentals matter more than the clever advanced techniques. A regular port check from outside your network, combined with discipline about which ports are open and why, will catch 80% of the issues that lead to real incidents.
The other 20% — that's why you have the rest of your security stack. But this is the part you can do today, in an hour, with no budget.
