Solving Oracle OID Connection Limits: Breaking Through the 1024 Barrier

Scaling from 1024 to ~5,000 Concurrent Connections

Summary: Resolving "Exceeding maximum number of connections … Num Conns = 1024, Max Conns = 1024" errors and scaling to ~5,000 concurrent connections through PID-based monitoring and strategic tuning.

The Problem

Production environments are encountering OID connection limit errors:

  • Error: "Exceeding maximum number of connections … Num Conns = 1024, Max Conns = 1024"
  • External Count: External monitoring tools observed only 855–893 connections at the time of the error
  • The Puzzle: Why did it fail before reaching 1024?

System Analysis Summary

  • Process count: ps … | wc -l = 2 → 1 control + 1 server ⇒ runtime orclserverprocs=1
  • FD limits/usage: nofile 65536/65536, current FD count = 277 → OS FD limits are not the bottleneck (see the verification sketch after this list)
  • pgrep -a unsupported: Use pgrep -fl oidldapd for full command lines
  • ldapsearch(3060) failed: Non-SSL access blocked → LDAPS(3131) or StartTLS enforcement likely
  • OID UI settings (captured in Excel):
    — Max LDAP connections per process = 1024
    — orclserverprocs = 1, Idle timeout = 60 min
    — Dispatcher threads/process = 5, DB connections/process = 4
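
The FD figures above can be re-checked at any time from /proc. A minimal sketch, assuming a single oidldapd server process (with orclserverprocs > 1, pick the right PID from pgrep):

Bash • Verify FD limit and current usage for an oidldapd PID
$ pid=$(pgrep -f oidldapd | head -1)
$ grep 'Max open files' /proc/$pid/limits    # soft/hard nofile limits
$ sudo ls /proc/$pid/fd | wc -l              # current open FD count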

Current System Configuration - orclserverprocs = 1

Key OID Settings:
• orclserverprocs = 1 (number of server processes)
• Max LDAP Connections per Server Process (orclmaxldapconns) = 1024
• Total Connection Capacity = 1 × 1024 = 1024 (see the read-back sketch below)
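
To confirm what the running instance actually enforces, both attributes can be read back from the instance-specific configuration entry. A minimal sketch; the DN assumes an 11g-style component named oid1, and host/credentials are placeholders — since Non-SSL 3060 appeared blocked in this environment, LDAPS on 3131 may be required instead:

Bash • Read orclserverprocs / orclmaxldapconns from the config entry
$ ldapsearch -h oidhost -p 3060 -D cn=orcladmin -w <password> \
    -b "cn=oid1,cn=osdldapd,cn=subconfigsubentry" -s base \
    "(objectclass=*)" orclserverprocs orclmaxldapconns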

Customer Questions & Analysis

Q1: Why did the error occur before reaching 1024?

Answer: OID enforces per-process limits (1024), not global limits. With orclserverprocs=1, the single server process hits 1024 first.

Common reasons for lower external counts (the counting sketch after this list helps rule them out):

  • Missing LDAPS(3131) connections
  • Excluding CLOSE-WAIT and TIME-WAIT socket states
  • IPv6 connections not counted
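
To rule out the gaps above, compare the external tool's number against a count that spans both ports, all relevant states, and both address families. A minimal sketch using ss, which reports IPv4 and IPv6 sockets together:

Bash • Per-state connection count across 3060 and 3131
$ ss -tan '( sport = :3060 or sport = :3131 )' \
| awk 'NR>1 && $1!="LISTEN" {state[$1]++; tot++} \
       END {for (s in state) printf "%-11s %d\n", s, state[s]; printf "TOTAL %d\n", tot}'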

Q2: How to scale to 5,000 and verify actual connection usage?

Answer: Total capacity = orclserverprocs × orclmaxldapconns. Keeping per-process limits at 1024, gradually increase orclserverprocs and monitor PID-based totals and TOTAL peaks during load to determine actual maximum concurrent usage.

Step 1: Verify Endpoints
Bash • Port listening check
# Check 3060(Non-SSL), 3131(LDAPS) listening
$ sudo ss -lntp | grep -E ":(3060|3131)\b"
Also verify through the UI:
  • ODSM → Connections: Check Host/Port and SSL/StartTLS settings
  • EM (Fusion Middleware Control) → Server Properties: Verify Non-SSL/SSL Port values
Bash • OID daemon command line
$ pgrep -fl oidldapd

Step 2: Real-time Monitoring (Per-PID totals + TOTAL)
Bash • Per-PID totals & TOTAL
$ sudo ss -tanp '( sport = :3060 or sport = :3131 )' 2>/dev/null \
| gawk '/ESTAB|SYN-RECV|CLOSE-WAIT|TIME-WAIT/ { \
        if (match($0,/pid=([0-9]+)/,m)) c[m[1]]++; else c["no-pid"]++ } \
       END{tot=0; for(p in c){printf "PID %s %d\n",p,c[p]; tot+=c[p]} printf "TOTAL %d\n",tot}'
# gawk is required for the 3-argument match(); TIME-WAIT sockets carry no pid= and show up as "no-pid"

Step 3: Achieving ~5,000 capacity
  • Conservative approach: Set orclserverprocs to 5 (5×1024=5120)
  • Verification: Run the Step 2 monitoring at 1–5 second intervals for several minutes and record the TOTAL peak (see the loop sketch below)
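
A small wrapper loop makes the peak easy to capture during a load test. A minimal sketch, assuming the Step 2 one-liner has been saved as an executable script — oid_conns.sh is a hypothetical name:

Bash • Track the running TOTAL peak during load
$ peak=0
$ while sleep 2; do
    t=$(sudo ./oid_conns.sh | awk '/^TOTAL/{print $2}')    # TOTAL line from the Step 2 script
    [ "${t:-0}" -gt "$peak" ] && peak=$t
    printf '%s TOTAL=%s PEAK=%s\n' "$(date +%T)" "${t:-0}" "$peak"
  done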

Best Practices for Connection Scaling

  1. During load testing: Run monitoring commands periodically
  2. Record peaks: Track the maximum TOTAL value
  3. Gradual scaling: Increase processes incrementally: 2 → 3 → 5
  4. Resource monitoring: Watch CPU/memory/DB sessions/FD usage alongside connections (see the snapshot sketch after this list)
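
For item 4, a per-process snapshot can be taken next to each connection sample. A minimal sketch using standard procps ps selectors:

Bash • CPU/memory/thread snapshot of the oidldapd processes
$ ps -C oidldapd -o pid,pcpu,pmem,rss,nlwp,cmd
# rss is resident memory in KiB; nlwp is the thread count per process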

Important Considerations

  • Increasing process count = proportional increase in memory/CPU/DB session usage
  • Review DB connection pool and WLS JDBC pool settings simultaneously
  • Always perform thorough performance testing after changes
  • Monitor resource/slot usage during peak hours

5,000 Concurrent Connection Scaling Plan

Phase 1 — Initial Implementation

  • orclserverprocs: 1 → 2 (3 if resources allow) — prevents early saturation of a single process at its 1024 limit (see the ldapmodify sketch after this list)
  • Idle Connection Timeout (min): 60 → 15–30 — accelerates idle slot recovery
  • Dispatcher Threads/process: 5 → 8–10 — absorbs bursts, reduces accept delays
  • Max LDAP Connections/process: Keep at 1024 — only consider increasing in final phase
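
These attributes live in the same configuration entry read back earlier, so the Phase 1 change can be applied with ldapmodify. A minimal sketch with placeholder host/credentials and the same assumed 11g-style DN; verify against your version's documentation whether a restart of the OID component is required for orclserverprocs to take effect:

Bash • Apply the Phase 1 process-count change via ldapmodify
$ ldapmodify -h oidhost -p 3060 -D cn=orcladmin -w <password> <<'EOF'
dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
changetype: modify
replace: orclserverprocs
orclserverprocs: 2
EOF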

Phase 2 — If Phase 1 Still Saturates

  • DB Connections/process: 4 → 8 (or 12) — increases OID→DB parallelism, reduces processing time
  • Further increase orclserverprocs: approach physical core count
  • Check core count: lscpu (physical cores = Core(s) per socket × Socket(s); nproc shows logical CPUs)

Phase 3 — Approaching ~5,000 if needed

  • Target design: Keep per-process at 1024, set orclserverprocs = 5 → total capacity 5120
  • Pre-checks required: DB processes/sessions, WLS JDBC pool (Max/Initial/Statement Cache/Test-on-Reserve), system memory/CPU/FD headroom (see the DB-side sketch below)
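
For the DB-side pre-check, the processes/sessions headroom can be read from v$resource_limit. A minimal sketch run as a DBA on the OID database (the connect string is a placeholder):

Bash • Check DB processes/sessions headroom before raising orclserverprocs
$ sqlplus -s / as sysdba <<'EOF'
set linesize 120
select resource_name, current_utilization, max_utilization, limit_value
from   v$resource_limit
where  resource_name in ('processes','sessions');
EOF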

Troubleshooting Tips

When connections appear lower than expected:
  • Count both 3060(Non-SSL) and 3131(LDAPS) ports
  • Verify IPv4/IPv6 bindings and policies
  • Include ESTAB, SYN-RECV, CLOSE-WAIT, TIME-WAIT states
  • Cross-reference PID-based internal counts with external totals

Author: Oracle Middleware Expert (27 years experience)

Expertise: OID, DIP, OAM, ODI, OGG, OBIEE, OAS
