NGINX Reverse and Forward Proxies

NGINX Layer 4 Proxy

NGINX is a popular open-source web server and reverse proxy known for its high performance and flexibility. It is often used as a load balancer or reverse proxy to distribute incoming traffic across multiple servers, or to cache static content. In this article, we will explain the basics of NGINX as a layer 4 (L4) and layer 7 (L7) reverse proxy and forward proxy.

What is a Reverse Proxy?

A reverse proxy is a type of server that sits in front of one or more backend servers and acts as an intermediary for incoming requests. It receives requests from clients, forwards them to the appropriate backend server, and then returns the backend server's response to the client.

The main advantage of using a reverse proxy is that it can provide additional security and performance benefits. For example, a reverse proxy can hide the IP addresses of the backend servers from clients, and it can also perform tasks such as SSL termination, compression, and caching to offload workload from the backend servers.
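
For instance, a minimal sketch of SSL termination and compression at the proxy might look like the following; the certificate paths and backend host name are placeholders, not part of any real deployment:

server {
  listen 443 ssl;
  ssl_certificate /etc/nginx/certs/example.crt;
  ssl_certificate_key /etc/nginx/certs/example.key;

  # compress responses before sending them to clients
  gzip on;

  location / {
    # TLS is terminated here; the backend receives plain HTTP
    proxy_pass http://backend1.example.com;
  }
}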

What is a Forward Proxy?

A forward proxy is similar to a reverse proxy, but it sits in front of client computers and acts as an intermediary for outgoing requests. It receives requests from clients, forwards them to the appropriate server, and then returns the response back to the client.

Forward proxies are often used to control access to the Internet, or to provide additional security and privacy for clients. For example, a forward proxy can be used to prevent clients from accessing certain websites, or to encrypt the traffic between clients and servers to protect against eavesdropping.

NGINX as a Reverse Proxy

NGINX can be used as a reverse proxy to distribute incoming traffic across multiple servers or to cache static content. To configure NGINX as a reverse proxy, you need to specify the backend servers in the upstream block, and then use the proxy_pass directive to forward the incoming requests to the backend servers.

Here is an example configuration for NGINX as a reverse proxy:

upstream backend {
  server backend1.example.com;
  server backend2.example.com;
}

server {
  listen 80;
  location / {
    proxy_pass http://backend;
  }
}

In this example, NGINX will listen for incoming requests on port 80 and forward them to the backend servers specified in the upstream block.
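
Note that the example above proxies HTTP, which already involves layer 7 parsing. For proxying raw TCP or UDP connections at layer 4, NGINX provides the stream module. Here is a minimal sketch, assuming the stream module is available and using port 3306 and the backend host names purely as placeholders:

# the stream block lives at the top level of nginx.conf, outside the http block
stream {
  upstream backend_tcp {
    server backend1.example.com:3306;
    server backend2.example.com:3306;
  }

  server {
    # forward raw TCP connections on port 3306 to the upstream group
    listen 3306;
    proxy_pass backend_tcp;
  }
}

At this layer NGINX forwards bytes without inspecting the application protocol, which is the practical difference between L4 and L7 proxying.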

NGINX as a Forward Proxy

NGINX can also be used as a forward proxy to control access to the Internet or to provide additional security and privacy for clients. To configure NGINX as a forward proxy, you need to use the proxy_pass directive to forward the outgoing requests to the destination servers.

Here is an example configuration for NGINX as a forward proxy:

server {
  listen 8080;
  location / {
    proxy_pass http://destination.example.com;
  }
}

In this example, NGINX will listen for incoming requests on port 8080 and forward them to the destination server specified in the proxy_pass directive.
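
Note that this configuration forwards every request to a single fixed destination. A more general plain-HTTP forward proxy resolves whatever host the client requested at runtime; here is a minimal sketch, where the resolver address 8.8.8.8 is only a placeholder for a DNS server reachable from the proxy. Stock NGINX does not support the CONNECT method, so HTTPS forwarding is out of scope for this sketch.

server {
  listen 8080;

  # DNS server used to resolve the hostnames requested by clients
  resolver 8.8.8.8;

  location / {
    # forward the request to whatever host and URI the client asked for
    proxy_pass http://$host$request_uri;
  }
}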


In addition to acting as a layer 4 (L4) reverse proxy and forward proxy, NGINX can also be used as a layer 7 (L7) reverse proxy and forward proxy.

What is a Layer 7 Reverse Proxy?

A layer 7 reverse proxy is a type of reverse proxy that operates at the application layer (layer 7) of the OSI model. This means that it can examine and manipulate the content of the incoming requests and responses based on protocol and application-specific rules.

Layer 7 reverse proxies are often used to provide additional security and performance benefits for web applications. For example, a layer 7 reverse proxy can perform tasks such as authentication, rate limiting, and caching based on the content of the incoming requests.

NGINX as a Layer 7 Reverse Proxy

To configure NGINX as a layer 7 reverse proxy, you need to use the proxy_pass directive to forward the incoming requests to the backend servers, and use various other directives to manipulate the content of the requests and responses.

Here is an example configuration for NGINX as a layer 7 reverse proxy:

upstream backend {
  server backend1.example.com;
  server backend2.example.com;
}

server {
  listen 80;
  location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

In this example, NGINX will listen for incoming requests on port 80 and forward them to the backend servers specified in the upstream block. The proxy_set_header directives set additional request headers, such as the original client IP address and protocol, before the requests are passed to the backend servers.
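
Beyond setting headers, the same reverse proxy can apply the layer 7 features mentioned earlier, such as rate limiting and caching. The sketch below reuses the backend upstream defined above and uses placeholder zone names, cache path, and limits; the limit_req_zone and proxy_cache_path directives must be declared in the http context:

# declared in the http context
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

server {
  listen 80;

  location /api/ {
    # limit each client IP to 10 requests per second, allowing short bursts
    limit_req zone=perip burst=20 nodelay;
    proxy_pass http://backend;
  }

  location /static/ {
    # cache successful responses for 10 minutes
    proxy_cache app_cache;
    proxy_cache_valid 200 10m;
    proxy_pass http://backend;
  }
}

Both decisions depend on inspecting the HTTP request itself, which is exactly what makes this a layer 7 proxy.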

What is a Layer 7 Forward Proxy?

A layer 7 forward proxy is similar to a layer 7 reverse proxy, but it sits in front of client computers and acts as an intermediary for outgoing requests. It can examine and manipulate the content of the outgoing requests and responses based on protocol and application-specific rules.

Layer 7 forward proxies are often used to control access to the Internet or to provide additional security and privacy for clients. For example, a layer 7 forward proxy can be used to block access to certain websites, or to encrypt the traffic between clients and servers to protect against eavesdropping.

NGINX as a Layer 7 Forward Proxy

To configure NGINX as a layer 7 forward proxy, you need to use the proxy_pass directive to forward the outgoing requests to the destination servers, and use various other directives to manipulate the content of the requests and responses.

Here is an example configuration for NGINX as a layer 7 forward proxy:

server {
  listen 8080;
  location / {
    proxy_pass http://destination.example.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

In this example, NGINX will listen for incoming requests on port 8080 and forward them to the destination server specified in the proxy_pass directive. The proxy_set_header directives set additional request headers before the requests are passed to the destination server.

Conclusion

In this article, we explained the basics of NGINX as a reverse proxy and forward proxy. NGINX is a powerful and flexible tool that can examine and manipulate the content of incoming and outgoing requests and responses based on protocol and application-specific rules. With its high performance and rich feature set, NGINX is a popular choice for web servers and reverse proxies.

The Moment We Visited Hongkong in the Middle of Lockdown

Family Time @Hongkong Riverside

On 31 December 2019, China issued warnings about several symptoms related to a certain virus, which is now known as the coronavirus. A few days later, many countries issued travel warnings, and some imposed travel bans on visitors planning to travel to or leave China and the surrounding areas, including Hongkong, Macau, and Taiwan.


EPNM: Automate Raw Data Measurement Export to Excel

The following is a script to automate exporting raw measurement data to Excel, because by default EPNM does not have a mechanism to export its own database to other systems.

EPNM stores its database files under /opt/CSCOlumos/da/cdb; these are EPNM application data files that hold the performance data and cannot be accessed from outside.

So the following is a script that exports the hourly data periodically via a cron job to CSV files, which are placed where they can be accessed via FTP.

cdbexport.sh : script to generate raw data files

ade # cat cdbexport.sh
#!/bin/bash

# Export the previous full hour of performance data from the EPNM cdb database
# into per-table CSV files under the FTP directory.

# End of the window = the top of the current hour (epoch seconds),
# start of the window = one hour earlier.
enddate="$(date -d "$(date +"%h %d %H:00:00")" +'%s')"
startdate=$(expr $enddate - 3600)
date -d @$startdate
echo StartTime=$startdate
echo EndTime=$enddate

# Dump each measurement table for the one-hour window into its own timestamped CSV file.
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM CPU WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/CPU_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM MEMORY WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/Memory_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM CEPMINTERFACE WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/CEPMINTERFACE_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM ICMPJITTER WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/ICMPJITTER_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM ENVTEMP WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/ENVTEMP_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM CEPMQOS WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/CEPMQOS_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM OpticalSFP WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/OpticalSFP_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM DVAVAILABILITY WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/DVAVAILABILITY_"$(date +%Y-%m-%d-%H_00)".csv
/opt/CSCOlumos/da/bin/cdbq 'SELECT * FROM CEPMCRC WHERE TIME >= '$startdate'  AND TIME < '$enddate'' >> /localdisk/ftp/cdbexport/CEPMCRC_"$(date +%Y-%m-%d-%H_00)".csv

ade #

The script above generates raw data files (CSV, which can be opened in Excel) every hour. To avoid capacity issues, we also need to periodically remove the old, unused files that have already been created.

cdbexport_clean.sh : script to remove old files

ade # cat cdbexport_clean.sh
#!/bin/bash
# remove exported files older than 15 days to keep disk usage under control
find /localdisk/ftp/cdbexport  -type f -mtime +15 -exec rm -rf  {} \;
ade #

And we need to add both scripts to the crontab:

15 * * * * /root/cdbexport.sh >> cdbexport.log 2>&1
30 6 * * * /root/cdbexport_clean.sh >/dev/null  2>&1
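
As a short usage note, assuming the scripts are saved under /root as in the crontab entries above, they need to be executable before cron can run them:

chmod +x /root/cdbexport.sh /root/cdbexport_clean.sh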

SDH Multiplex Structure – SAToP and CESoPSN Cisco ASR900 Deployment

Overview

Synchronous Digital Hierarchy (SDH) is a CCITT standard for a hierarchy of optical transmission rates. Synchronous Optical Network (SONET) is an ANSI (American National Standards Institute) standard for North America that is largely equivalent to SDH.

Both are widespread technologies for very high-speed transmission of voice and data signals across the world's numerous fiber-optic networks.
SDH and SONET are point-to-point synchronous networks that use TDM multiplexing across a ring or mesh physical topology.

The main differences between the two standards are some header/pointer information and the transmission rates.
The base transport module of SDH is the Synchronous Transport Module (STM-1), with a transmission rate of 155.52 Mbps, while SONET uses OC-1 (~51.84 Mbps) as its base module.
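
As a quick worked example of how the hierarchy scales, STM-4 carries 4 x 155.52 Mbps = 622.08 Mbps and STM-16 carries 16 x 155.52 Mbps = 2488.32 Mbps, equivalent to SONET OC-12 and OC-48 respectively.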

SDH/SONET Transmission Rates


Traffic Load Balancing in Cisco UCS, ESXI and Windows Server

There is an interesting case from my current work: a UCS implementation with 2 Spine and 2 Leaf switches (both spine and leaf use Nexus 5k).

Everything is currently running well, from the blade servers to the rest of the network, both internal and external. In the current setup, each blade server has 2 vNIC adapters from UCS per network, pointed at different FIs with the goal of load balancing the traffic.

So each blade server has 2 adapters, one connected to FI (Fabric Interconnect) A and the other to FI B, with failover mode enabled on the UCS side.

Inside the blade servers, the OS is either bare-metal Windows or ESXI. NIC teaming/bonding has already been set up with the native VLAN, so communication is working fine, although only through 1 FI.

The problem is that even though teaming/bonding is set up and configuration has also been added on the UCS side so that traffic can pass through both vNIC adapters, checking the gateway side (the two leaves) shows different ARP entries: the ARP from the blade server is only detected on leaf 1, whereas in my understanding, for load balancing, the ARP should be detected on both leaf 1 and leaf 2.

Note: access from the Leaf to the FI uses a regular port-channel, not VPC as recommended best practice.