
Introduction to Core Switch Configuration

Time: 2019-11-14    Source: UTEPO


A layer 3 switch combines switching with routing capability and operates at the third layer of the OSI reference model, the network layer. Its most important purpose is to speed up data exchange within a large LAN, and its routing function serves that purpose: it routes a flow once and then forwards subsequent packets in hardware ("route once, forward many times"). So what configuration should a core switch have?


SCALABILITY SHOULD INCLUDE TWO ASPECTS

1. Number of slots. Slots are used to install the various function modules and interface modules. Since each interface module provides a certain number of ports, the number of slots determines how many ports the switch can support. In addition, the functional modules (such as the supervisor engine module, IP voice module, extended service module, network monitoring module, security service module, etc.) each occupy a slot, so the number of slots fundamentally determines the scalability of the switch.


2. Module types. There is no doubt that the more module types a switch supports (LAN interface modules, WAN interface modules, ATM interface modules, extended function modules, etc.), the more scalable it is. Taking LAN interface modules as an example, RJ-45, GBIC, SFP and 10 Gbps modules should all be available to meet the needs of the complex environments found in large and medium-sized networks.


BACKPLANE BANDWIDTH

Backplane bandwidth is the maximum amount of data that can be transferred between the switch's interface processor or interface card and the data bus, much like the total number of lanes on an overpass. Since all inter-port communication passes through the backplane, the bandwidth it provides is the potential bottleneck for concurrent communication between ports. The larger the backplane bandwidth, the more bandwidth is available to each port and the faster data is exchanged; the smaller it is, the less bandwidth each port gets and the slower the exchange. In other words, the backplane bandwidth determines the data-processing capacity of the switch: the higher the backplane bandwidth, the stronger that capacity. A larger backplane bandwidth is therefore always better, especially for aggregation-layer and core switches. To achieve full-duplex non-blocking transmission, the backplane bandwidth must be at least twice the sum of all port rates, since every port may send and receive at the same time.
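
As a quick illustration of that rule of thumb, here is a minimal Python sketch that estimates the minimum backplane bandwidth for full-duplex non-blocking switching; the port mix used is a hypothetical example, not a specific product.

```python
def min_backplane_gbps(port_counts):
    """Minimum backplane bandwidth (Gbps) for full-duplex non-blocking
    switching: the sum of all port rates, doubled because every port
    can send and receive at the same time."""
    total_gbps = sum(rate * count for rate, count in port_counts.items())
    return total_gbps * 2

# Hypothetical core switch: 48 x 1 Gbps access ports and 4 x 10 Gbps uplinks
ports = {1: 48, 10: 4}
print(min_backplane_gbps(ports))  # 176 Gbps needed at a minimum
```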


FORWARDING RATE

Data on the network is carried in packets, and processing each packet consumes resources. The forwarding rate (also known as throughput) is the number of packets forwarded per unit of time without packet loss. Throughput is like the traffic flow over an overpass and is the most important parameter of a layer 3 switch, since it reflects the switch's real performance. If the throughput is too small, the switch becomes a network bottleneck and drags down the transmission efficiency of the whole network. A switch should be able to switch at wire speed, that is, forward packets as fast as the transmission line can carry them, so that switching never becomes the bottleneck. For a gigabit switch to achieve non-blocking transmission, each gigabit wire-speed port must forward 1.488 Mpps and each 100 Mbps port must forward 0.1488 Mpps (both figures assume minimum-size 64-byte packets).
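
The 1.488 Mpps figure follows from the minimum Ethernet frame: a 64-byte frame plus an 8-byte preamble and a 12-byte inter-frame gap occupies 672 bits on the wire. A minimal Python sketch of that calculation:

```python
def wire_speed_pps(link_bps, frame_bytes=64):
    """Packets per second at wire speed for a given frame size.
    Each frame also carries an 8-byte preamble and a 12-byte inter-frame gap."""
    bits_on_wire = (frame_bytes + 8 + 12) * 8
    return link_bps / bits_on_wire

print(round(wire_speed_pps(1_000_000_000)))  # 1488095 pps for a gigabit port
print(round(wire_speed_pps(100_000_000)))    # 148810 pps for a 100 Mbps port
```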


LAYER 4 SWITCHING

Layer 4 switching is used for fast access to network services. In layer 4 switching, the forwarding decision is based not only on the MAC address (layer 2 bridging) or the source/destination IP address (layer 3 routing), but also on the TCP/UDP (layer 4) port numbers, which are intended for use in high-speed intranets. In addition to load balancing, layer 4 switching supports traffic control based on application type and user ID. Because it sits directly in front of the servers and understands session content and user permissions, it is also an ideal platform for preventing unauthorized access to the servers.
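
As an illustration only (not a vendor implementation), the sketch below shows the idea behind a layer 4 decision: the TCP/UDP ports take part in the forwarding choice, here by hashing a 5-tuple to pick one of several back-end servers. The flow fields and server names are hypothetical.

```python
def pick_server(flow, servers):
    """Pick a back-end server from a layer 4 flow (5-tuple).
    A real switch uses a deterministic hardware hash; Python's hash()
    is used here only to keep the sketch short."""
    key = (flow["src_ip"], flow["dst_ip"], flow["proto"],
           flow["src_port"], flow["dst_port"])
    return servers[hash(key) % len(servers)]

# Hypothetical HTTP flow balanced across three web servers
flow = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.100", "proto": "TCP",
        "src_port": 51324, "dst_port": 80}
print(pick_server(flow, ["web1", "web2", "web3"]))
```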


ROUTING REDUNDANCY

The HSRP and VRRP protocols are used to provide load sharing and hot backup for the core equipment. If a problem occurs on one of the switches or links between the core switch and the dual aggregation switches, the layer 3 routing devices and the virtual gateway fail over quickly, giving redundant backup of the dual links and ensuring the stability of the whole network.
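
For reference, VRRP elects the router with the highest priority as master for the virtual gateway address, with ties commonly broken by the higher IP address. A minimal Python sketch of that election logic, using two hypothetical core switches:

```python
import ipaddress

def elect_vrrp_master(routers):
    """Pick the VRRP master: highest priority wins, ties go to the
    higher primary IP address; priority 0 means the router will not serve."""
    candidates = [r for r in routers if r["priority"] > 0]
    return max(candidates,
               key=lambda r: (r["priority"], ipaddress.ip_address(r["ip"])))

# Hypothetical pair of core switches sharing virtual gateway 192.168.1.1
routers = [
    {"name": "core-1", "ip": "192.168.1.2", "priority": 120},
    {"name": "core-2", "ip": "192.168.1.3", "priority": 100},
]
print(elect_vrrp_master(routers)["name"])  # core-1 until it fails, then core-2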
