Demilitarized Zone (DMZ)

A Demilitarized Zone (DMZ) in networking is a physical or logical subnetwork that contains and exposes an organization’s external-facing services to an untrusted network, usually the internet. The primary purpose of a DMZ is to add an additional layer of security to an organization’s local area network (LAN); an external attacker only has access to equipment in the DMZ, rather than any other part of the network.

Servers and services that need to be accessible from the internet are placed in the DMZ. That way, even if one of these services is compromised, the attacker does not gain direct access to the internal network.

Types of Network Zones

  1. Demilitarized Zone (DMZ): As described, it’s used to host public services like web servers, email servers, and DNS servers. These services need to be accessible from the internet but also need to be secured.
  2. Internal Zone (or Secure Zone): This area hosts sensitive data, applications, and systems that should not be accessible from the internet or untrusted networks. Access to the internal zone is tightly controlled and monitored.
  3. External Zone (or Public Zone): This is the network segment that is exposed to the internet. It does not contain sensitive information or critical systems, and it operates at a lower security level than the other zones.
  4. Wireless Zone: A separate network zone for wireless devices. Due to the inherent security risks of wireless communications, it’s often treated differently from wired networks.

A coffee shop provides free Wi-Fi to its customers. The Wi-Fi network is isolated in its own zone to prevent access to the shop’s internal POS systems and financial data.

  5. Management Zone: A network segment dedicated to network management systems and interfaces. Access is restricted to IT staff for the purpose of managing and monitoring network components.
  6. Specialized Data Zone: This might include a Payment Card Industry (PCI) zone for storing and processing credit card data, ensuring compliance with PCI DSS standards.

Distinguishing and Securing Zones

  • Firewall Security Groups: Firewalls are the cornerstone of network zoning. They control traffic between zones based on predefined security rules. Security groups are a form of virtual firewall rules applied to cloud resources to control inbound and outbound traffic.
  • Virtual LANs (VLANs): VLANs help segment network traffic logically without requiring physical separation. Different VLANs can be assigned to different zones, effectively isolating network traffic.
  • Access Control Lists (ACLs): ACLs provide a finer level of control over what traffic is allowed or denied in and out of network zones. They can be applied on routers, switches, and firewalls.

ACLs are more granular and can be applied to specific network interfaces on routers and switches. Security groups are applied to cloud resources, like virtual machines.
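As an illustration, a hypothetical Cisco-style extended ACL for a DMZ web server might allow only inbound HTTPS and deny everything else (the IP address and list number are placeholders):

```
! Permit inbound HTTPS to the DMZ web server, deny all other IP traffic
access-list 110 permit tcp any host 203.0.113.10 eq 443
access-list 110 deny   ip  any any
```

The implicit (here explicit) final deny is what makes the zone boundary meaningful: anything not expressly permitted is dropped.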

  • Network Address Translation (NAT): NAT is often used in conjunction with DMZs to hide the internal IP addresses of servers from the outside world, adding an extra layer of security.
  • Virtual Private Network (VPN): A VPN extends a private network across a public network, allowing users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. It creates a secure connection over the internet by encrypting data in transit, thus providing privacy and security for the data exchange.

Once connected to the VPN, employees can securely access the web servers in the DMZ for maintenance and management tasks. The VPN connection ensures that this access is encrypted and secure, protecting against eavesdropping and unauthorized access.

Latest Features from Java 11 to Java 21

Java has continued to evolve significantly from Java 11 to Java 21, introducing a range of new features and improvements. Here’s a summary of key features from each version:

Six-month release cadence

Starting with Java 9, Oracle adopted a regular six-month release cadence for Java, to become more agile and to collect more feedback from the community.

Verbosity improvement

Java is verbose, and that is part of what makes it one of the best programming languages for beginners: explicit code gives programmers context, which reduces mental overhead and simplifies reading. The other side of the coin is boilerplate.

There have been many improvements to reduce verbosity, such as:

Records

Records provide a concise and easy way to create simple, immutable data-carrier classes. Before records, creating such classes involved writing a lot of boilerplate code: constructors, getters, equals(), hashCode(), and toString() methods. Records simplify this process significantly.

Key Characteristics of Records:

  1. Immutable by Default: The fields of a record are final and must be initialized in the constructor.
  2. Concise Syntax: Records reduce boilerplate code by automatically generating constructors, getters, equals(), hashCode(), and toString() methods based on the fields.
  3. Local Declarations: Records can be declared locally, within methods.
  4. Canonical Constructor: Records automatically have a canonical constructor (one with all the fields as parameters), but you can also define custom constructors.
public record Person(String name, int age) {
}

This one line of code does everything for you. It creates a class with two fields (name and age), a constructor to initialize these fields, and appropriate implementations of equals(), hashCode(), and toString() methods.

Accessing Fields and Using the Record:

public class Main {
    public static void main(String[] args) {
        Person person = new Person("Alice", 30);

        // Accessing fields
        System.out.println("Name: " + person.name());
        System.out.println("Age: " + person.age());

        // Using toString() method
        System.out.println(person);
    }
}

Records in Java simplify the creation of immutable data classes and are particularly useful in situations where you need to quickly define classes to hold data without much additional behavior or customization. This feature enhances the readability and maintainability of Java code, especially for data-centric applications.
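As item 4 above notes, the canonical constructor can be customized. A compact constructor lets you add validation without repeating the parameter list; a small sketch:

```java
public class RecordValidationDemo {
    // Compact constructor: the parameter list is implicit, and the fields
    // are assigned automatically after the body runs.
    record Person(String name, int age) {
        Person {
            if (age < 0) {
                throw new IllegalArgumentException("age must be non-negative");
            }
        }
    }

    public static void main(String[] args) {
        // Generated toString() prints Person[name=Alice, age=30]
        System.out.println(new Person("Alice", 30));
        try {
            new Person("Bob", -1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The validation runs on every construction path, so invalid instances can never exist.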

Pattern Matching


Pattern matching for instanceof enhances the Java language by reducing boilerplate code and improving readability.

Traditional Approach Without Pattern Matching:

Before the introduction of pattern matching, you typically checked the type of an object using instanceof, and then you had to explicitly cast the object to the target type to access its methods or fields. For example:

Object obj = "Hello, world!";

if (obj instanceof String) {
    String s = (String) obj;
    System.out.println(s.length());
}

In this traditional approach, you need two steps:

  1. Check if obj is an instance of String.
  2. Explicitly cast obj to String to access the length() method.

Using Pattern Matching for instanceof:

With pattern matching, Java simplifies this process. The instanceof operator can be used to perform both the type check and the casting in one step:

Object obj = "Hello, world!";

if (obj instanceof String s) {
    // 's' is now directly usable as a String within this block
    System.out.println(s.length());
}

In this updated approach:

  1. The instanceof check and the casting to String are combined.
  2. If obj is an instance of String, it is automatically cast to String and assigned to the variable s.
  3. The variable s is then directly usable within the scope of the if block.
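The pattern variable can also be combined with further conditions in the same expression, since it is in scope wherever the instanceof test is known to be true; a small sketch:

```java
public class PatternMatchDemo {
    public static void main(String[] args) {
        Object obj = "Hello, world!";

        // Type test, cast, and an extra condition on 's' in one expression.
        if (obj instanceof String s && s.length() > 5) {
            System.out.println(s.toUpperCase());
        }
    }
}
```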

Project Loom

The primary goal of Project Loom is to support a high-throughput, lightweight concurrency model in Java. It introduces changes to the JVM as well as to the Java standard library.

Loom aims to make concurrency easier for developers by introducing lightweight, user-mode threads (virtual threads) and enabling better resource management. It addresses challenges and limitations associated with traditional concurrency models that rely on OS-level threads.


However, since Java has traditionally used OS kernel threads as its thread implementation, it fails to meet today's concurrency requirements. There are two major problems in particular:

  1. Threads cannot match the scale of the domain’s unit of concurrency. For example, applications usually allow up to millions of transactions, users or sessions. However, the number of threads supported by the kernel is much less. Thus, a Thread for every user, transaction, or session is often not feasible.
  2. Most concurrent applications need some synchronization between threads for every request. Due to this, an expensive context switch happens between OS threads.

Virtual Threads

These are lightweight threads that are managed by the Java Virtual Machine (JVM), not the operating system. Unlike traditional threads (platform threads), virtual threads are cheap to create and destroy, consume less memory, and can number in the millions. They are particularly useful for IO-bound tasks, where threads spend a lot of time waiting.

Key Characteristics of Virtual Threads:

  1. Lightweight: They consume significantly less memory and resources compared to platform threads.
  2. Easy to Create in Large Numbers: Because of their lightweight nature, you can create millions of virtual threads, which is not feasible with platform threads.
  3. Simplifies Concurrent Programming: Virtual threads simplify the concurrent programming model, especially for IO-bound operations, as you no longer need to deal with the complexity of thread pools or managing a limited number of threads.

Example of Virtual Threads:

Here’s a basic example demonstrating how you might use virtual threads in Java. Let’s say you have a server application that handles multiple client connections. With virtual threads, each client connection can be handled by its own thread without worrying about overwhelming the system with too many OS threads.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class VirtualThreadExample {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(8080);
        while (true) {
            Socket clientSocket = serverSocket.accept();

            // Creating a virtual thread for each client connection
            Thread.startVirtualThread(() -> handleClient(clientSocket));
        }
    }

    private static void handleClient(Socket clientSocket) {
        // Logic to interact with the client
        // This could involve IO-bound operations like reading from or writing to the socket
        System.out.println("Handling client: " + clientSocket);
    }
}

In this example:

  • A server socket listens for client connections.
  • For each incoming connection, a new virtual thread is created using Thread.startVirtualThread().
  • Each client connection is handled independently on its own virtual thread.

Benefits of Using Virtual Threads:

  1. Simplified Concurrency Model: No need for complex concurrency constructs like Executors or thread pools.
  2. Efficient Handling of IO Operations: Virtual threads are particularly beneficial for IO-bound tasks where threads often spend a lot of time waiting.
  3. Improved Scalability: Applications can handle a large number of concurrent tasks without the overhead associated with a similar number of platform threads.
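When structured task submission is still desired, Java 21 also provides an executor that starts one new virtual thread per submitted task, with no pool sizing to manage; a small sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadExecutorDemo {
    public static void main(String[] args) throws Exception {
        List<Future<Integer>> results = new ArrayList<>();
        // try-with-resources: close() waits for all submitted tasks to finish
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10; i++) {
                int id = i;
                // Each task runs on its own freshly created virtual thread
                results.add(executor.submit(() -> id * id));
            }
        }
        for (Future<Integer> f : results) {
            System.out.println(f.get());
        }
    }
}
```

Because virtual threads are cheap, there is no reason to reuse them; the executor simply creates one per task and discards it afterwards.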

Stopping a Virtual Thread:

  • Natural Completion: The most common and recommended way for a thread (virtual or traditional) to stop is by naturally completing its execution. Once the run() method (or lambda expression) finishes, the thread will terminate.
  • Interruption: If you need to stop a virtual thread prematurely, you can interrupt it, just like a regular thread. However, this requires the thread’s code to handle interruptions properly.

In the example of handling client connections with a virtual thread, each virtual thread stops itself after it has completed handling its client connection:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class VirtualThreadExample {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(8080);
        while (true) {
            Socket clientSocket = serverSocket.accept();

            // Creating a virtual thread for each client connection
            Thread virtualThread = Thread.startVirtualThread(() -> handleClient(clientSocket));

            // Example of interrupting a thread (in real-world, condition for interrupt would be more sophisticated)
            if (shouldInterruptThread()) {
                virtualThread.interrupt();
            }
        }
    }

    private static boolean shouldInterruptThread() {
        // Implement logic to decide whether to interrupt the thread
        // For example, based on some external condition or signal
        return false; // Placeholder
    }

    private static void handleClient(Socket clientSocket) {
        try {
            // Logic to interact with the client
            System.out.println("Handling client: " + clientSocket);

            // Check for interruption
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Thread was interrupted, stopping handling the client.");
                return;
            }

            // Continue handling the client...
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Project Valhalla

Project Valhalla brings significant enhancements to the Java programming language and the Java Virtual Machine (JVM). It aims to introduce value types and to improve the way Java handles memory, with the goal of increasing performance and reducing memory overhead, particularly for complex, data-heavy applications.

Value Types (Inline Classes without references): behave like primitives

  • Traditional Java objects are reference types, which means variables hold references to objects that are stored in the heap. Value types, on the other hand, are intended to be a new kind of type that is not a reference. They are intended to be more memory-efficient and faster in certain scenarios, particularly where large arrays of objects are used.
  • Value Types, as proposed in Project Valhalla for Java, are a fundamental shift in how Java handles data types. They are intended to provide a way to define types that behave like primitives in terms of efficiency and memory layout but can be used like objects in terms of encapsulation and methods. This concept is quite different from the traditional Java objects, which are reference types.

Understanding Value Types:

  1. Efficient Memory Usage: Unlike objects, value types are intended to be allocated on the stack (when used in a method scope) or stored directly in fields (when used as part of another object). This direct storage avoids the overhead of heap allocation and garbage collection.
  2. Immutable: Value types are immutable. Once created, their state cannot be changed, much like Java’s primitive types (int, float, etc.).
  3. No Identity: They do not have an identity, which means that their equality is based on the values of their fields, not on their identity in memory. There are no references or pointers to value types, and they do not have a separate object header like regular objects.
  4. Syntax and Usage: They are expected to be defined similarly to classes but with some restrictions due to their nature (e.g., immutability, no identity).

Example: Complex Numbers

Let’s take an example of a ComplexNumber class. In traditional Java, each instance of this class is a reference type.

public class ComplexNumber {
    private final double real;
    private final double imaginary;

    public ComplexNumber(double real, double imaginary) {
        this.real = real;
        this.imaginary = imaginary;
    }

    // Getters and other methods...
}

Creating an array of ComplexNumber objects involves storing references to these objects in the heap, which can be memory-inefficient:

ComplexNumber[] complexNumbers = new ComplexNumber[1000];
for (int i = 0; i < complexNumbers.length; i++) {
    complexNumbers[i] = new ComplexNumber(Math.random(), Math.random());
}

Using Value Types

With value types, you could define ComplexNumber as a value type. This is a hypothetical example, as the actual syntax and capabilities may vary:

public value class ComplexNumber {
    public double real;
    public double imaginary;
}

Now, when you create an array of ComplexNumber, each instance is stored inline in the array, just like an array of primitives. This means the memory layout is contiguous, and there’s no overhead of object headers or references.

ComplexNumber[] complexNumbers = new ComplexNumber[1000];
for (int i = 0; i < complexNumbers.length; i++) {
    complexNumbers[i] = new ComplexNumber(Math.random(), Math.random());
}


In this scenario, complexNumbers is an array where each ComplexNumber is stored directly within the array’s memory structure, rather than as a separate heap object. This inline storage reduces memory consumption and can lead to performance improvements, especially when processing large arrays.

Impact on Java Memory Model:

  • Memory Efficiency: The array of ComplexNumber value types uses less memory than an array of ComplexNumber objects because there’s no overhead for object headers or references.
  • Performance Gains: Accessing the elements of the array is faster due to the contiguous memory layout, improving cache locality.
  • No Garbage Collection: These value types, when used in a method scope and stored on the stack, are not subject to garbage collection, reducing GC overhead.

Project Leyden

Project Leyden aims to address a long-standing challenge in the Java ecosystem: slow startup times and large memory footprints. The project's primary focus is to introduce the concept of static images to the Java platform.

How It Works:

The process involves ahead-of-time (AOT) compilation, which compiles Java bytecode into native code before the application is run. This contrasts with the traditional Just-In-Time (JIT) compilation approach of the JVM, where bytecode is compiled into native code at runtime.

Key Objectives of Project Leyden:

  1. Static Images:
    • Concept: The idea is to create precompiled and optimized runtime images of Java applications. These images would be generated ahead-of-time (AOT) and would include all necessary components of the application and the runtime (like classes, libraries, and parts of the JVM itself).
    • Benefits: By compiling Java code ahead-of-time, startup times can be significantly reduced, as the JVM would have less work to do at runtime. This approach also reduces the memory footprint, as the static images are optimized for the specific application.
  2. Faster Startup Time:
    • Focus: One of the primary goals is to improve the startup time of Java applications, which is a critical factor in cloud and serverless computing environments where rapid scaling and efficient resource usage are essential.
  3. Reduced Memory Footprint:
    • Focus: By optimizing the runtime image for each specific application, the memory usage can be more efficient, which is especially beneficial in environments where resources are constrained or costly.
  4. Compatibility with Existing Java Ecosystem:
    • Goal: Project Leyden aims to achieve these objectives while maintaining compatibility with the existing Java ecosystem, ensuring that current Java applications can benefit from these improvements without significant changes.

Soft lockup issue vs SSSD

We identified a soft lockup on a Linux VM that is monitored by a collectd service; the issue was detected through an interruption in the metrics usually received from this machine.

SSSD

SSSD, or the System Security Services Daemon, is a software service in Linux that provides access to different identity and authentication providers. It is an integral part of Linux-based systems, especially in enterprise environments, due to its role in managing access to remote directories and authentication mechanisms.

When you access a Linux machine via SSH using a username and password, several components work together to authenticate your credentials, and SSSD can play a crucial role in this process, especially in a networked or enterprise environment. Here’s how it works:

  1. SSH Authentication Request: When you try to log into a Linux machine via SSH, the SSH server on that machine receives your login request, including your username and password.
  2. PAM (Pluggable Authentication Modules): Linux systems typically use PAM for authentication. PAM is a suite of libraries that provide a way to develop programs that are independent of authentication scheme. SSH is configured to use PAM for authentication.
  3. Interaction with SSSD: If the Linux system is configured to use SSSD for user authentication, the PAM configuration will include a module to interact with SSSD. When you attempt to log in, PAM passes the authentication request to SSSD.
  4. SSSD’s Role:
    • Identity Verification: SSSD checks the provided credentials against the configured identity sources, such as LDAP, Active Directory, or Kerberos.
    • Caching: If the credentials have been previously verified and cached by SSSD, it can authenticate the user even if the central identity source is temporarily unreachable.
    • Policy Enforcement: SSSD can also enforce certain access and authentication policies as defined by the system administrator.
  5. Authentication Response: If SSSD successfully authenticates your credentials, it informs PAM, which then allows the SSH server to establish the session. If the authentication fails, the login is denied.
  6. Additional Layers: In more complex configurations, SSSD might also be involved in additional layers of security, such as multi-factor authentication or access control policies.

In essence, SSSD acts as an intermediary between the SSH service and the identity providers.
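On RHEL-style systems, the PAM-to-SSSD hookup described in step 3 is visible in the PAM stack itself; a simplified excerpt (exact contents vary by distribution and by the authconfig/authselect profile in use):

```
# Excerpt from /etc/pam.d/password-auth, which /etc/pam.d/sshd includes
auth    sufficient    pam_unix.so try_first_pass nullok
auth    sufficient    pam_sss.so use_first_pass
```

If pam_unix cannot authenticate the user locally, the request falls through to pam_sss, which forwards it to the SSSD daemon.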

Whenever the soft lockup problem occurs, we cannot access the machine via SSH. Checking the SSSD logs shows:

Nov  5 19:09:41 serv1 sssd[sudo]: Shutting down
Nov  5 19:09:41 serv1 sssd[pam]: Shutting down

Nov  5 19:09:41 serv1 systemd: sssd.service: main process exited, code=exited, status=1/FAILURE
Nov  5 19:09:41 serv1 systemd: Unit sssd.service entered failed state.
Nov  5 19:09:41 serv1 systemd: sssd.service failed.

To fix this problem, so that we can still access the machine even when a soft lockup occurs, we add Restart=on-failure to the SSSD systemd unit file:

cat /usr/lib/systemd/system/sssd.service

[Unit]
Description=System Security Services Daemon
# SSSD must be running before we permit user sessions
Before=systemd-user-sessions.service nss-user-lookup.target
Wants=nss-user-lookup.target

[Service]
Environment=DEBUG_LOGGER=--logger=files
EnvironmentFile=-/etc/sysconfig/sssd
ExecStart=/usr/sbin/sssd -i ${DEBUG_LOGGER}
Type=notify
NotifyAccess=main
PIDFile=/var/run/sssd.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target

Soft lockup

A "soft lockup" in a Linux virtual machine (VM) is a situation where the kernel reports that a CPU core is stuck in a non-responsive state for a significant amount of time. This issue is different from a complete system crash or a "hard lockup," where the system completely freezes and becomes unresponsive. In a soft lockup, the system might still be partly operational.

Here are some key points about soft lockups:

Cause of Soft Lockups: Soft lockups can be caused by various factors, including but not limited to:

  1. High CPU Load: When a process consumes an excessive amount of CPU time, it can lead to a soft lockup. This is especially common in virtual environments where resources are shared among multiple VMs.
  2. Resource Starvation: If the VM is not allocated enough resources (like CPU or memory), it can lead to situations where processes are unable to get the necessary resources, causing a lockup.
  3. Driver or Kernel Bugs: Problems within the kernel or with specific hardware drivers can also lead to soft lockups. This is particularly common with experimental or poorly-supported hardware.
  4. Virtualization Overheads: In a virtualized environment, additional layers of complexity and overhead can contribute to lockup scenarios, especially if the virtualization software has bugs or is misconfigured.

Detection: The Linux kernel has mechanisms to detect when a soft lockup occurs. It typically logs messages indicating that a CPU has been stuck for a specific number of seconds. These messages can be found in the system logs.

grep "soft lockup" /var/log/messages 
Nov  5 12:12:31 localhost kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [reader#4:11336]
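Detection behavior is tunable through the kernel watchdog sysctls; a fragment (defaults vary by kernel version, the values shown are illustrative):

```
# A soft lockup is reported when a CPU hogs the kernel for roughly
# 2 * watchdog_thresh seconds (hence the "stuck for 22s!" above)
kernel.watchdog_thresh = 10

# Set to 1 to panic (and typically reboot) instead of only logging the lockup
kernel.softlockup_panic = 0
```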

Determine whether the network traffic is allowed on ports 1815 and 1814 on a Linux VM

If you want to determine whether the network flow is allowed on ports 1815 and 1814 on a Linux VM, you need to conduct a few tests and checks.

  • Check Security Group Rules: If you are using a cloud platform like AWS, GCP, OpenStack, or Azure, it provides its own mechanism for defining security groups and firewall rules. From the mention of "security group ingress", it sounds like you might be using AWS. Check the security group settings to ensure that incoming traffic on ports 1815 and 1814 is allowed.
  • Check Local Firewall Rules: The VM might have its own firewall like iptables or ufw that could block the traffic. For iptables:
sudo iptables -L -v -n
  • Use netstat :

You can check if there is any service currently listening on the ports:

netstat -tuln | grep -E '1815|1814'

Test with nc (netcat):

On the VM, you can start a dummy listener on the ports:

nc -lu 1815

And on another machine:

echo "test" | nc -u [VM_IP] 1815

Check on the receiving machine:

Since you have access to the machine (as seen by your use of the netstat command previously), one way to check would be to monitor incoming packets using a packet sniffer like tcpdump:

tcpdump -i any udp port 1815 -n -v -nn
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
15:26:42.720217 IP (tos 0x0, ttl 59, id 26750, offset 0, flags [DF], proto UDP (17), length 47)
    100.64.226.3.34906 > 100.66.197.91.1815: UDP, length 19
15:26:42.982835 IP (tos 0x0, ttl 59, id 36595, offset 0, flags [DF], proto UDP (17), length 47)
    100.64.226.2.38599 > 100.66.197.91.1815: UDP, length 19
15:26:47.710289 IP (tos 0x0, ttl 59, id 30353, offset 0, flags [DF], proto UDP (17), length 47)

Now we see packets targeting port 1815 on the destination machine 100.66.197.91.

Here’s the breakdown:

  • The traffic is coming from two different source IP addresses: 100.64.226.3 and 100.64.226.2.
  • The packets are UDP and are directed to port 1815 on the target machine.
  • Each packet has a payload of 19 bytes.

This confirms that the traffic on port 1815 is reaching the destination machine 100.66.197.91 without any issues. This means your security group setup is correct (at least for this specific communication), and your testing machine is correctly sending the packets to the intended port on the destination machine.
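The nc test above can also be reproduced in plain Java when netcat is unavailable on the machines involved; a self-contained loopback sketch using port 1815 from the example (in practice the sender would run on the other machine, targeting the VM's IP):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpPortCheck {
    public static void main(String[] args) throws Exception {
        // Listener side: bind to the port under test
        try (DatagramSocket receiver = new DatagramSocket(1815)) {
            // Sender side: fire a small probe datagram at the listener
            try (DatagramSocket sender = new DatagramSocket()) {
                byte[] payload = "test".getBytes(StandardCharsets.UTF_8);
                sender.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getLoopbackAddress(), 1815));
            }
            byte[] buf = new byte[64];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(5000); // fail fast instead of blocking forever
            receiver.receive(packet);
            String received = new String(packet.getData(), 0,
                    packet.getLength(), StandardCharsets.UTF_8);
            System.out.println("received: " + received);
        }
    }
}
```

If the probe never arrives within the timeout, receive() throws a SocketTimeoutException, which points at a firewall or routing problem rather than an application issue.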

Squash your last two commits into one commit

To squash your last two commits into one commit, follow these steps:

  • Open your terminal and navigate to your repository.
  • Start an interactive rebase for the last two commits:
git rebase -i HEAD~2

Squash the commits: Your default text editor will open up with a list of commits, oldest first. It'll look something like this:

pick def5678 Your older commit message
pick abc1234 Your more recent commit message

To squash the newer commit into the older one, change pick to squash (or simply s) on the second line (the more recent commit):

pick def5678 Your older commit message
squash abc1234 Your more recent commit message
  • Save and close the editor.
  • Edit the commit message: Once you’ve chosen to squash, your editor will re-open to allow you to combine the commit messages. You can either edit it to a new message, keep one of the existing messages, or create a combination of both. Save and close the editor when you’re finished.
  • Complete the rebase: After you close the editor, Git will squash the two commits into one.
  • Push the changes to the remote repository: Since rebasing rewrites commit history, you’ll need to force push your changes to the remote repository. Be cautious with this, as it can overwrite changes on the remote that you don’t have locally, especially if you’re working with others.
git push origin HEAD:branch_name --force

What Is a Certificate Revocation List?

X.509 digital certificates are integral to public key infrastructure (PKI) and web security as a whole. But what happens when something goes wrong with one of those certificates or its keys? The certificate gets revoked, which is better known as being added to a certificate revocation list (CRL).

We’ve seen mass certificate revocations happen before. For example, Apple, Google and GoDaddy (and a few other CAs) revoked millions of certificates last year due to the certificates having 63-bit serial numbers instead of 64-bit ones. And just earlier this year, we saw Let’s Encrypt facing a mass certificate revocation due to a bug in their code.

A Certificate Revocation List (CRL) is a list of digital certificates that have been revoked by a Certificate Authority (CA) before their scheduled expiration dates. Certificates may be revoked for various reasons, such as the compromise of a private key, the certificate holder’s violation of policy, or the certificate being issued in error.

The Purpose of Certificate Revocation

Much like the name implies, certificate revocation is a process that distinguishes invalid and untrusted certificates from valid trusted ones. Basically, it’s a way for CAs (or CRL issuers) to make it known that one or more of their digital certificates is no longer trustworthy for one reason or another. When they revoke a certificate (a process that’s sometimes known as PKI certificate revocation), they essentially invalidate the cert ahead of its expiration date.

So, should a CA need to revoke a certificate for your website, browsers such as Google Chrome will display a warning message to your site visitors.

Working with CRLs involves several tasks:

  1. Distribution: CAs periodically issue updated CRLs that must be distributed to clients and servers that rely on the CA’s certificates. CRLs are typically made available for download from the CA’s website or other public repositories.
  2. Retrieval: Clients and servers need to periodically retrieve and update the CRLs for all CAs they trust. This is typically done automatically by the software or operating system they are running.
  3. Verification: When establishing an SSL/TLS connection, clients and servers should verify whether the presented certificate is listed on a CRL. If it is, the connection should be rejected as the certificate is no longer valid.
  4. Monitoring: Administrators should monitor the status of CRLs to ensure they are updated in a timely manner and to detect any anomalies that could indicate a security breach or other issues.
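
The verification step (3) can be sketched with the JDK's built-in X.509 classes. This is a minimal illustration, not a production check: the file names ca.crl and server.crt are hypothetical, and a real client would also fetch a fresh CRL from the CA's distribution point and verify the CRL's own signature before trusting it.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.cert.CRLException;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509CRL;
import java.security.cert.X509Certificate;

public class CrlCheck {
    // Parses raw CRL bytes (DER or PEM); returns null if they are not a valid CRL.
    static X509CRL parseCrl(byte[] bytes) {
        try {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            return (X509CRL) cf.generateCRL(new ByteArrayInputStream(bytes));
        } catch (CertificateException | CRLException e) {
            return null;
        }
    }

    // Step 3 (verification): a certificate listed on the CRL must be rejected.
    static boolean shouldReject(X509CRL crl, X509Certificate cert) {
        return crl.isRevoked(cert);
    }

    public static void main(String[] args) throws IOException, CertificateException {
        // "ca.crl" and "server.crt" stand in for files you would pass on the command line.
        X509CRL crl = parseCrl(Files.readAllBytes(Paths.get(args[0])));
        if (crl == null) {
            System.out.println("could not parse CRL");
            return;
        }
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert = (X509Certificate) cf.generateCertificate(
                Files.newInputStream(Paths.get(args[1])));
        System.out.println(shouldReject(crl, cert) ? "REVOKED - reject connection" : "not on CRL");
    }
}
```

TLS stacks do this (and signature checks, freshness checks, and OCSP fallback) for you; the point here is only that revocation status is a lookup against the CA's published list.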

TCP retransmission settings

Efficient handling of connection timeouts is crucial for maintaining optimal performance and user experience in networked applications. In this article, we will discuss connection timeouts and their impact on application performance.

Recently I faced a problem with a Java app that implements LittleProxy's HttpProxyServer; let's call it Proxy.

The problem

When Server1 sends an HTTP request to Proxy to fetch a result from Server2, and Server2 is unreachable (a network issue), Server1 waits around 40 seconds and then receives a 502 Bad Gateway.
The issue: we didn't know why the timeout was around 40 seconds, since no connectionTimeout was set anywhere in the proxy's Java code.

After some searching, I realized that, by default, the LittleProxy library does not enforce connection or read timeouts.

LittleProxy is a popular library for creating HTTP proxy servers in Java applications.

Understanding Connection Timeouts

Connection timeouts are an essential aspect of network programming, as they ensure that an application does not wait indefinitely for a response from a remote server. There are different types of timeouts that developers should be aware of:

  1. Connection Timeout: The maximum amount of time an application should wait while trying to establish a connection with a remote server.
  2. Read Timeout: The maximum amount of time an application should wait for receiving data from an already established connection.
  3. Write Timeout: The maximum amount of time an application should wait while trying to send data over an established connection.

When a timeout occurs, the application typically raises an exception, allowing developers to handle the situation gracefully, such as displaying an error message or attempting to reconnect.
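
The difference between these timeouts can be seen with plain java.net sockets. The sketch below is a self-contained local simulation: it starts a throwaway server that accepts a connection but never sends anything, so the connection succeeds quickly and it is the read timeout that eventually fires.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    // Connects to a local server that never sends data, then blocks on a read
    // with the given read timeout. Returns a label for what happened.
    static String readWithTimeout(int readTimeoutMs) {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port, never writes
            try (Socket client = new Socket()) {
                // Connection timeout: max wait to establish the connection.
                client.connect(new InetSocketAddress("127.0.0.1", server.getLocalPort()), 1_000);
                // Read timeout: max wait for data on the established connection.
                client.setSoTimeout(readTimeoutMs);
                client.getInputStream().read(); // blocks until data arrives or timeout
                return "data";
            } catch (SocketTimeoutException e) {
                return "read timeout";
            }
        } catch (IOException e) {
            return "io error";
        }
    }

    public static void main(String[] args) {
        System.out.println(readWithTimeout(200)); // server never sends, so: read timeout
    }
}
```

The same two knobs exist on higher-level APIs (for example HttpURLConnection's setConnectTimeout and setReadTimeout); when neither the app nor the library sets them, you fall through to the operating system's defaults, which is exactly what happened here.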

What connection timeout applies when no custom connectionTimeout is set on the app?

The 40-second timeout most likely comes from a system-level setting: the default socket timeout on some operating systems. When a socket tries to connect to a remote host and receives no response (because of the iptables DROP rule), the operating system waits for that default timeout before giving up.

This is when things got interesting: the OS has its own TCP retransmission settings, and when you don't set a timeout in your app (and the library provides no default, as with HttpProxyServer), those OS settings apply.

To check the current timeout settings for your application running on a Red Hat Enterprise Linux system, you can follow these steps:

  1. Check the current TCP retransmission settings: the timeout you are seeing might be governed by the TCP retransmission settings. You can check them by running the following command in your terminal:
sysctl net.ipv4.tcp_retries2

This command shows the current value of tcp_retries2, which specifies how many times TCP retransmits an unacknowledged segment before giving up on the connection. The default value is typically 15, which, depending on the system, roughly corresponds to the 30-40 second timeout observed here.
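
The link between a retry count and a wall-clock timeout comes from TCP's exponential backoff: each retransmission roughly doubles the wait before the next one. The arithmetic can be sketched as below, assuming a 1-second initial RTO and ignoring the minimum and maximum RTO caps that real kernels apply, so the numbers are only approximate.

```java
public class BackoffMath {
    // Approximate total wait for the given number of retransmissions,
    // with the retransmission timeout (RTO) doubling after each attempt.
    static double totalTimeoutSeconds(double initialRtoSeconds, int retries) {
        double total = 0, rto = initialRtoSeconds;
        for (int i = 0; i < retries; i++) {
            total += rto;
            rto *= 2; // exponential backoff
        }
        return total;
    }

    public static void main(String[] args) {
        // e.g. 5 retries with a 1s initial RTO: 1 + 2 + 4 + 8 + 16 = 31 seconds
        System.out.println(totalTimeoutSeconds(1.0, 5));
    }
}
```

This is why only a handful of effective retransmissions is enough to land in the 30-60 second range an application perceives as "the mysterious default timeout".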

Set a custom connectionTimeout:

HttpProxyServer server = DefaultHttpProxyServer.bootstrap()
    .withConnectTimeout(10_000) // Set connection timeout to 10 seconds (10,000 milliseconds)
    // ... other configurations ...
    .start();

By setting a custom connection timeout, you can control how long your application waits before giving up on a connection attempt. Keep in mind that setting a very low timeout may cause connection issues in case of temporary network latency, while setting a very high timeout may cause your application to wait unnecessarily long in case of unreachable hosts.

After this change, Server1 received its response in around 10 seconds.

Tests to reproduce the problem

I ran two tests: one with an iptables DROP rule, the other with an iptables REJECT rule.

Test 1: started at 12:06:40 with a DROP rule; the 502 response was obtained at 12:07:20, after 40 seconds.

Test 2: started at 12:09:49 with a REJECT rule; the 502 response was obtained immediately (no 40-second wait).

When you use an iptables DROP rule, it silently drops the packets without sending any notification to the sender. As a result, the sender (your application) has no way of knowing whether the packets were received. Your application then waits for a response, which never comes. It will keep waiting until a timeout occurs, which in this case appears to be 40 seconds.

On the other hand, when you use an iptables REJECT rule, the rule sends an error response (usually an ICMP “destination unreachable” message) back to the sender. This way, the sender (your application) is immediately notified that the request has been rejected. Consequently, your application doesn’t have to wait for a timeout to occur, and you receive the 502 response immediately.

In summary, the DROP rule causes your application to wait for the timeout because it does not receive any feedback about the dropped packets, whereas the REJECT rule sends an error message back to the sender, allowing your application to immediately react and return the 502 response.
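
The two failure modes can be reproduced in plain Java without iptables. In the sketch below, a connect to a closed local port stands in for the REJECT case (the kernel answers with an RST right away, so the attempt fails fast), while a genuine DROP rule would instead surface as the SocketTimeoutException branch after the full timeout.

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class DropVsReject {
    // Attempts a TCP connect and classifies the outcome.
    static String connectOutcome(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return "connected";
        } catch (SocketTimeoutException e) {
            return "timeout"; // DROP-like: no reply at all, we waited out the full timeout
        } catch (ConnectException e) {
            return "refused"; // REJECT-like: an RST/ICMP error came back immediately
        } catch (IOException e) {
            return "error";
        }
    }

    // Finds a local port with nothing listening on it.
    static int closedLocalPort() {
        try (ServerSocket ss = new ServerSocket(0)) {
            return ss.getLocalPort(); // released again as soon as we return
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Fails fast with "refused", no 5-second wait: this is the REJECT behavior.
        System.out.println(connectOutcome("127.0.0.1", closedLocalPort(), 5_000));
    }
}
```

To see the DROP-like branch you would point connectOutcome at a host protected by an actual DROP rule (or an otherwise blackholed address); it then returns "timeout" only after the full timeout elapses, which is exactly the 40-second wait observed in Test 1.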