What is a Network Switch?

A network switch is a hardware device that connects multiple devices within a local area network (LAN) and intelligently forwards data only to the intended destination — unlike a hub, which broadcasts to every port. Switches are the foundational building blocks of any enterprise, campus, or data center network.

Definition #

A network switch is a Layer 2 (or Layer 3) networking device that receives incoming data frames, inspects their destination MAC (or IP) address, and forwards them only to the specific port where the destination device is connected — rather than broadcasting to all ports as a hub would.

Switches operate at the Data Link layer (Layer 2) of the OSI model, maintaining a MAC address table (also called a CAM table) that maps hardware addresses to physical ports. Layer 3 switches extend this capability to handle IP routing, acting as both switch and router within a single device.

In modern enterprise, campus, and data center environments, switches are the primary method of connecting servers, workstations, access points, storage arrays, and other network devices within a facility. Per-port speeds range from 1 Gbps at the access edge to 800 Gbps in cutting-edge AI and hyperscale data center deployments.

How a network switch works #

When a switch receives a frame on one of its ports, it performs four key steps:

1. Parse: the switch reads the incoming frame’s source and destination MAC addresses from its Ethernet header.

2. Learn: the source MAC and the port it arrived on are recorded in the CAM table, so the switch learns the network topology dynamically.

3. Forward: the switch looks up the destination MAC. If found, the frame is forwarded to that port only; if unknown, it is flooded to all ports except the one it arrived on.

4. Tag and prioritise: on managed switches, frames can be tagged with VLAN IDs (802.1Q) and prioritised with QoS markings before egress.
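The learn-and-forward loop above can be sketched as a small simulation. A plain dict stands in for the hardware CAM table; the class and method names are illustrative, not a real switch API:

```python
class Switch:
    """Toy model of transparent-bridge learning and forwarding."""

    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.cam = {}  # MAC address -> port (the CAM / MAC address table)

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame egresses on."""
        # Learn: record the source address against the ingress port.
        self.cam[src_mac] = in_port
        # Forward: known destination -> deliver to that one port only...
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]
        # ...unknown destination -> flood everywhere except the ingress port.
        return [p for p in self.ports if p != in_port]

sw = Switch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # unknown dst: flooded to ports 1, 2, 3
print(sw.receive(1, "bb:bb", "aa:aa"))  # aa:aa already learned -> [0]
print(sw.receive(0, "aa:aa", "bb:bb"))  # bb:bb now known -> delivered to [1]
```

After the first reply frame, traffic between the two hosts never touches the other ports — the selective-forwarding behaviour the next paragraph contrasts with hubs.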

This selective forwarding is why switches dramatically outperform hubs: instead of every device competing for the same shared medium, each port operates in full-duplex, doubling effective bandwidth and eliminating collisions.

Types of network switches #

Access switches: connect end devices (PCs, phones, APs) to the network. Typically 1G–2.5G ports with PoE. Deployed at the network edge.

Distribution (aggregation) switches: aggregate traffic from multiple access switches before forwarding to the core. Often Layer 3-capable with 10G–25G uplinks.

Core switches: high-speed backbone switches in the data center or campus core. 100G–800G port speeds with ultra-low latency (sub-μs).

PoE switches: deliver both data and electrical power over Ethernet (802.3af/at/bt) to devices like APs, IP cameras, and phones.

Top-of-rack (TOR) switches: connect servers directly within a rack. High port density, 25G/100G server-facing, 100G–400G uplinks.

AI fabric switches: purpose-built for GPU-to-GPU traffic in AI clusters. Ultra-low latency (<500 ns), RoCEv2 support, and 400G–800G ports.

Layer 2 vs Layer 3 switches #

The OSI layer at which a switch operates defines its capabilities and the scale of network it can serve:

| Feature | Layer 2 switch | Layer 3 switch |
| --- | --- | --- |
| Forwarding basis | MAC address (CAM table) | IP address (routing table) |
| Inter-VLAN routing | ✗ Requires external router | ✓ Hardware-accelerated |
| Protocols supported | STP, LACP, LLDP, 802.1Q | OSPF, BGP, EVPN, PIM, VXLAN |
| Typical deployment | Access edge, small LANs | Distribution, core, data center |
| Cost | Lower | Higher (ASIC complexity) |
| SONiC support | Limited | ✓ Primary target |

Most modern enterprise and data center deployments rely on Layer 3 switches throughout to enable efficient inter-VLAN communication, ECMP load balancing, and BGP/EVPN-based overlay networks — all without routing traffic through a separate router appliance.
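The ECMP load balancing mentioned above picks one of several equal-cost paths by hashing each flow's 5-tuple. Real ASICs use their own hardware hash functions, so the sketch below is only a conceptual stand-in:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Pick one of several equal-cost next hops for a flow.

    Hashing the 5-tuple keeps every packet of a flow on the same path
    (avoiding reordering) while spreading distinct flows across all paths.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

spines = ["spine1", "spine2", "spine3", "spine4"]
# The same flow always hashes to the same spine:
a = ecmp_next_hop("10.0.0.1", "10.0.1.5", 49152, 443, "tcp", spines)
b = ecmp_next_hop("10.0.0.1", "10.0.1.5", 49152, 443, "tcp", spines)
assert a == b
```

Because the selection is deterministic per flow, adding spine switches scales bandwidth without breaking TCP ordering — the property leaf-spine fabrics depend on.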

Managed vs unmanaged switches #

| Capability | Unmanaged | Smart / web-managed | Fully managed |
| --- | --- | --- | --- |
| VLAN configuration | ✗ | Limited | ✓ 802.1Q, QinQ |
| CLI / API access | ✗ | Web GUI only | ✓ CLI, REST, gNMI, NETCONF |
| QoS / traffic shaping | ✗ | Basic | ✓ Granular per-class |
| Redundancy (LACP, STP) | ✗ | Partial | ✓ Full |
| Port mirroring / SPAN | ✗ | Basic | ✓ |
| Open NOS (SONiC) support | ✗ | ✗ | ✓ Fully programmable |
| Target use case | Home / SOHO | SMB / branch | Enterprise / data center |

For enterprise and data center environments, only fully managed switches provide the visibility, automation, and protocol support required. They expose standard management interfaces — SSH, SNMP, gNMI streaming telemetry, and REST/NETCONF APIs — that integrate with orchestration platforms and monitoring stacks.

SONiC and open networking switches #

Traditionally, network switches were closed systems: the hardware ASIC, the operating system, and the management software all came from a single vendor. This created lock-in, inflated margins, and slow innovation cycles. Open networking breaks this model by decoupling hardware from software — much like how the PC industry separated the CPU from the OS.

What is SONiC?

Software for Open Networking in the Cloud (SONiC) is a Linux-based, open-source network operating system originally developed by Microsoft for its Azure data centers. It runs on commodity switch ASICs from multiple silicon vendors (Broadcom, Marvell, Intel, Barefoot) through a standardised Switch Abstraction Interface (SAI) layer.
SONiC’s modular, containerised architecture separates each network function (BGP daemon, LLDP, SNMP, Port Manager) into an independent Docker container. This means individual components can be updated, restarted, or replaced without reloading the entire switch — enabling zero-downtime upgrades and fine-grained troubleshooting.
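This containerisation is directly visible from the Linux shell on a SONiC switch. The container names below (bgp, lldp, swss, and so on) are the standard community-SONiC service names, but exact output varies by release and platform:

```shell
# List the per-function containers on a SONiC switch
admin@sonic:~$ docker ps --format "{{.Names}}"
snmp
lldp
bgp
swss
syncd
pmon
database

# Restart a single function (here LLDP) without touching the rest
# of the switch — SONiC wraps each container in a systemd service
admin@sonic:~$ sudo systemctl restart lldp
```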

Key advantages of SONiC-based open switches

Why enterprises and hyperscalers choose open networking

Hardware/software decoupling: Choose the ASIC that matches your port-speed and latency requirements, then run SONiC regardless of the hardware vendor.
Community innovation: SONiC is backed by Microsoft, Alibaba, Dell, Broadcom, and hundreds of contributors — new features ship faster than any single-vendor roadmap.
Programmability: Full gNMI/gRPC streaming telemetry, NETCONF, and REST APIs enable integration with Prometheus, Grafana, Ansible, and Kubernetes-native network automation.
Cost reduction: Removing the closed-OS license tax typically reduces total switch cost by 30–60% compared to equivalent proprietary platforms.
Protocol completeness: BGP (FRRouting), OSPF, EVPN-VXLAN, PTP, ECMP, LACP, and QoS are all available in SONiC out of the box — enterprise-grade, not stripped down.

Enterprise SONiC: addressing community limitations

Community SONiC is powerful, but requires significant internal expertise to deploy, harden, and maintain in production. Enterprise SONiC distributions — like AsterNOS from Asterfusion — address this by layering a tested, supported software stack on top of the open-source foundation, adding:

  • Certified hardware compatibility across the full 1G–800G port-speed range
  • Long-term support (LTS) release tracks with security backports
  • Unified management through a centralised controller (AsterMOS) covering campus APs, switches, and routers
  • Zero Touch Provisioning (ZTP) for rapid, automated deployment at scale
  • Professional support SLAs replacing the on-call forum model of community software

ASIC families supported: Marvell Teralynx, Prestera, Falcon; Broadcom Tomahawk, Trident

Port speeds: 1G · 2.5G · 10G · 25G · 100G · 200G · 400G · 800G

Latency (cut-through): ~500 ns (data center fabric switches)

Protocols: BGP, OSPF, EVPN-VXLAN, PTP, ECMP, LACP, 802.1Q, 802.3ad

Management interfaces: CLI, REST API, gNMI, NETCONF, SNMP, Prometheus/OpenTelemetry

AI data center features: RoCEv2, ECN, PFC, lossless Ethernet for GPU-to-GPU workloads

Use cases by deployment scenario #

| Scenario | Switch tier | Recommended port speed | Key features needed |
| --- | --- | --- | --- |
| Campus / office floor | Access | 1G–2.5G PoE+/PoE++ | PoE budget, VLAN, 802.1X NAC, OpenWiFi controller integration |
| Campus core | Distribution / core | 10G–100G | Layer 3 routing, OSPF/BGP, redundant power, stacking |
| Enterprise data center (TOR) | Top-of-rack | 25G server / 100G uplink | EVPN-VXLAN, ECMP, low latency, SONiC + gNMI telemetry |
| AI / HPC cluster fabric | Spine / fabric | 400G–800G | RoCEv2, lossless Ethernet (PFC + ECN), sub-μs latency, large buffers |
| Internet exchange / service provider | Core | 100G–400G | Full BGP table, MPLS, EVPN, high availability, 99.999% uptime |

Frequently asked questions #

What is the difference between a switch and a router?

A switch operates at Layer 2 and forwards frames within a single network segment using MAC addresses. A router operates at Layer 3 and forwards packets between different network segments (including across the internet) using IP addresses. A Layer 3 switch combines both functions, handling inter-VLAN routing at wire speed using dedicated ASIC hardware — typically at lower cost and latency than a separate router.

What is the difference between a switch and a hub?

A hub is a Layer 1 device that broadcasts every incoming signal to all ports simultaneously, causing all devices to share the same collision domain. A switch learns which device is on which port and delivers frames only to the intended destination, giving each port a private, full-duplex connection. Hubs are effectively obsolete in modern networks.

What is cut-through switching?

In cut-through mode, the switch starts forwarding a frame as soon as it reads the destination MAC address (typically after the first 14 bytes), without waiting for the entire frame to be received. This dramatically reduces latency — to as low as 500 ns on modern data center ASICs — compared to store-and-forward mode, which buffers the complete frame before forwarding. Cut-through switching is essential for latency-sensitive workloads such as AI training, high-frequency trading, and storage-area networks.
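The store-and-forward penalty described above is simply the frame's serialisation delay, which is easy to work out:

```python
def serialization_delay_ns(frame_bytes, link_gbps):
    """Time to clock a full frame onto the wire, in nanoseconds.

    Store-and-forward pays this delay per hop on top of the switching
    latency, because the whole frame is buffered before forwarding;
    cut-through does not.
    """
    return frame_bytes * 8 / link_gbps  # bits / (Gbit/s) == ns

# A maximum-size 1518-byte Ethernet frame:
print(round(serialization_delay_ns(1518, 10)))   # 1214 ns at 10G
print(round(serialization_delay_ns(1518, 100)))  # 121 ns at 100G
print(round(serialization_delay_ns(1518, 400)))  # 30 ns at 400G
```

At 400G the buffering penalty is small in absolute terms, but it still multiplies per hop — which is why multi-stage AI fabrics favour cut-through.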

What is VXLAN and why does it matter?

VXLAN (Virtual Extensible LAN) is an overlay protocol that encapsulates Layer 2 frames inside UDP packets, allowing Layer 2 networks to be stretched across a Layer 3 underlay. Modern data center switches — particularly those running SONiC — use VXLAN together with BGP EVPN as the control plane to build scalable, multi-tenant overlay networks. This enables workloads to migrate between physical servers or racks without changing their IP addresses, while the underlying physical network remains a simple, stable Layer 3 fabric.
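The VXLAN encapsulation described above adds just an 8-byte header inside a UDP datagram (destination port 4789). Packing and unpacking it, per the RFC 7348 layout, shows how little the overlay actually adds:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def pack_vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348):
    1 byte flags, 3 reserved bytes, 3-byte VNI, 1 reserved byte."""
    return struct.pack("!B3s3sB", VXLAN_FLAG_VNI_VALID,
                       b"\x00\x00\x00", vni.to_bytes(3, "big"), 0)

def unpack_vni(header):
    """Recover the 24-bit VXLAN Network Identifier (VNI)."""
    return int.from_bytes(header[4:7], "big")

hdr = pack_vxlan_header(vni=10042)
assert len(hdr) == 8           # the entire overlay header
assert unpack_vni(hdr) == 10042
```

The 24-bit VNI is what makes VXLAN multi-tenant: roughly 16 million isolated segments, versus the 4094 VLANs that 802.1Q tagging allows.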

What is SONiC and why choose a SONiC switch?

SONiC (Software for Open Networking in the Cloud) is an open-source Linux-based network OS originally developed by Microsoft for its hyperscale data centers. It abstracts the underlying hardware ASIC through a standardised Switch Abstraction Interface (SAI), meaning the same SONiC software stack can run on switch hardware from multiple ASIC vendors. For organisations, this means freedom from vendor lock-in, access to a rapidly evolving open-source feature set (BGP, EVPN, PTP, gNMI telemetry, and more), and typically 30–60% lower total cost compared to proprietary alternatives.

What port speeds do modern switches support?

Modern enterprise and data center switches span a wide range: 1G and 2.5G for access-layer PoE ports connecting workstations, phones, and APs; 10G and 25G for server-facing ports in data centers; 100G and 200G for aggregation and spine links; and 400G and 800G for the latest AI fabric and hyperscale deployments. Open networking switches — such as those offered by Asterfusion — cover the full spectrum from 1G to 800G with enterprise SONiC software, enabling consistent management regardless of tier.