refurbished blade servers

This article is about how simple it is to connect refurbished blade servers into your existing data centre networks; it is all about the blade server in the data centre. After you are done reading these articles, I would appreciate it if you could leave me your comments and feedback, and thanks again for viewing. In this series of short articles I will show you how the Simple Connect LAN modules simplify deployment of the Dell M1000e blade server into your existing LAN. First I will give an overview of the technology, then I will explain how its automatic configuration enables blade server connectivity into your existing networks as you cable up, so you get blade connectivity right away; it can be as simple as plugging in and cabling up. The second article talks about how seamlessly you can deploy blade servers into your existing virtual LANs within your data centre.

 


Then it talks about how to group blade servers with aggregator groups to support specific traffic flows and connectivity to meet your IT needs. In the future I will be adding more articles to this series to cover the remaining Simple Connect capabilities, such as tagging.

Like I said, these Simple Connect LAN modules are built to simplify and automate common day-to-day deployment tasks, enabling you to easily connect and deploy blade servers. After plugging in these modules, you just have to connect cables to get immediate connectivity between the Dell blade servers and your existing network infrastructure. It is as simple as plug and play: no need to configure anything, no need to mess with Spanning Tree Protocol or any other Layer 2 switch protocols. This way you avoid the management complexity and overhead that came along with integrating switches into a blade chassis. Because of this simplicity, systems admins can own and deploy these modules with ease while deploying server systems for connectivity. It is a technology built to help systems admins deploy blade servers simply, while still using existing industry-standard techniques such as Layer 2 forwarding, VLANs, and link aggregation groups. The Dell M6220, M6348, and M8024 blade LAN modules implement this technology.

M-Series PowerConnect switches support this technology and can be enabled to run in Simple Switch Mode instead of normal switch mode. This graphic shows two Simple Connect LAN modules plugged into the I/O fabric slots of an M1000e blade chassis with 16 blade servers, which can be connected to uplink switches in your existing network. As depicted by the animation, now connect a cable from the Simple Connect module to an uplink switch. As you can see, all blade traffic now flows through that uplink cable, which provides immediate connectivity into your existing network via that uplink switch. Simple Connect modules detect cables as you plug them in, and they come configured with the settings needed to enable blade connectivity automatically and seamlessly.

From the graphical user interface or command-line interface of the module you can see that, by default, the Simple Connect module has all internal ports connected to the blades grouped together in an aggregation group, AG1. In addition, the first eight external ports of the module, which can be connected to existing uplink switches in your LAN, are also part of Aggregation Group 1. By selecting the Port Configuration Summary tab in the GUI you can see that all internal ports are member ports of Aggregation Group 1; external ports are enabled as active member ports of Aggregation Group 1 when cables are plugged in and those links become live. To demonstrate, I just plug a cable in, and you can see that Simple Connect detected it and added that specific external port as an active member of Aggregation Group 1.
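To make that default behaviour easier to picture, here is a minimal Python sketch of the idea. It is an illustration only, not Dell's firmware or management API; the port names, port counts, and the link_up/link_down events are assumptions for the example. It just captures the point that internal blade-facing ports are permanent members of Aggregation Group 1, while the first eight external ports become active uplinks only once a cable link comes up.

```python
# Minimal sketch (not Dell firmware or a real API): models the default
# Simple Connect behaviour where all internal blade-facing ports belong to
# Aggregation Group 1 and external uplink ports become *active* members
# only when a cable link is live.

class SimpleConnectModule:
    def __init__(self, internal_ports=16, external_ports=8):
        # Internal ports (to the blades) are permanent members of AG1.
        self.ag1_members = {f"internal-{i}" for i in range(1, internal_ports + 1)}
        # The first eight external ports are also AG1 members, but stay
        # inactive until their link comes up.
        self.ag1_members |= {f"external-{i}" for i in range(1, external_ports + 1)}
        self.active_uplinks = set()

    def link_up(self, port):
        """Called when a cable is plugged in and the link becomes live."""
        if port in self.ag1_members:
            self.active_uplinks.add(port)   # port now carries blade traffic

    def link_down(self, port):
        self.active_uplinks.discard(port)

module = SimpleConnectModule()
module.link_up("external-1")            # plug in the first uplink cable
print(sorted(module.active_uplinks))    # ['external-1'] -> immediate connectivity
```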

 

Now consider that you want more uplinks connected to meet your bandwidth needs. How do you do it? Just add more cables for more bandwidth. As depicted, a dynamic LAG is formed automatically, and you can see the traffic being load balanced effortlessly across those two uplinks within the LAG. Checking the Port Configuration Summary tab in the GUI shows that those uplinks are active members of Aggregation Group 1. By default those uplink ports are configured for industry-standard LACP (Link Aggregation Control Protocol), which dynamically forms a link aggregation group (LAG) as you cable up. As those uplinks are bundled together to provide higher bandwidth, the module also prevents loops automatically, so there is no need for Spanning Tree Protocol.

All you have to make sure of before connecting more cables is that the connected ports on the uplink switch have LACP enabled. If not, only the first uplink port connected will be enabled on the Simple Switch module, still providing continued blade connectivity, but now only through that first port. Once you enable LACP on the rest of the connected ports on the uplink switch, all ports will become active, negotiate the LAG, and load balance the blade traffic as you need. You can add up to eight uplinks to increase bandwidth and share the traffic load; those uplink ports will become active members of Aggregation Group 1 and its dynamic LAG. For redundancy, two LAGs can be set up, one as primary and the other as secondary, backing each other up in a failover scenario, as depicted here in this animation.
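The load-balancing behaviour can be sketched in the same spirit. The snippet below is a generic illustration of how a LAG typically spreads traffic, not the module's actual hashing algorithm: a hash of the flow (here, source and destination MAC addresses, an assumption for the example) picks one active uplink, so each flow stays on a single link while different flows share the bundle. With only one LACP-enabled peer port, the bundle degrades to a single active uplink, as described above.

```python
# Illustrative sketch of LAG load balancing (not the module's actual algorithm):
# a hash of the flow identifiers picks one active uplink, so a given flow always
# uses the same link while different flows are spread across the bundle.
import zlib

def pick_uplink(src_mac: str, dst_mac: str, active_uplinks: list[str]) -> str:
    if not active_uplinks:
        raise RuntimeError("no active uplinks in the LAG")
    flow_key = f"{src_mac}->{dst_mac}".encode()
    index = zlib.crc32(flow_key) % len(active_uplinks)   # deterministic per flow
    return active_uplinks[index]

# Two uplinks cabled and LACP negotiated on both ends: flows are shared.
lag = ["external-1", "external-2"]
print(pick_uplink("aa:bb:cc:00:00:01", "ff:ee:dd:00:00:09", lag))
print(pick_uplink("aa:bb:cc:00:00:02", "ff:ee:dd:00:00:09", lag))

# If the uplink switch has LACP enabled on only one of its ports, the bundle
# effectively falls back to a single active link, as described above.
print(pick_uplink("aa:bb:cc:00:00:02", "ff:ee:dd:00:00:09", ["external-1"]))
```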

You can configure it so that the secondary LAG is automatically made active, and the primary LAG disabled, when the number of active links in the primary falls below a set threshold. Let me show you how easy it is to do from the Global Configuration menu in the GUI: enable LAG failover mode, then select the minimum number of active members required to keep the active LAG active. If the number of active links falls below this threshold, Simple Connect will automatically fail over to the secondary LAG. It also lets you enable an SNMP trap to monitor the failover. You can configure the secondary LAG using the remaining available uplink ports that are not used in the primary LAG; once you have configured that, apply the changes.

In case you want to use 10 Gigabit uplink ports instead of 1 Gigabit, first delete all the 1 Gigabit uplink ports from that aggregation group, then add the 10 Gigabit uplink ports and configure the LAG roles, primary and secondary. Remember that you can use either 1 Gigabit or 10 Gigabit ports, but not both mixed together in a LAG; this is an industry-standard limitation. For network redundancy you can configure the same on the redundant Simple Connect module in the blade chassis, as shown here. This setup works great with server NIC teaming and failover features, so if the primary NIC fails, the secondary picks up the traffic, as depicted.
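The failover rule itself is simple enough to express in a few lines of Python. The sketch below assumes a primary and a secondary LAG plus a minimum-active-members threshold like the one set in the Global Configuration menu; the function names, the speed check, and the SNMP-trap message are placeholders rather than the module's real interfaces.

```python
# Hedged sketch of the LAG failover rule (names and the trap notification are
# placeholders, not the module's real interfaces): if the number of live links
# in the primary LAG drops below the configured minimum, the secondary LAG
# built from the remaining uplink ports takes over.

def uniform_speed(ports: dict[str, int]) -> bool:
    """1 GbE and 10 GbE ports cannot be mixed in one LAG (industry-standard limit)."""
    return len(set(ports.values())) <= 1

def active_lag(primary_live: int, min_active_members: int) -> str:
    if primary_live < min_active_members:
        print("SNMP trap: primary LAG below threshold, failing over")  # placeholder
        return "secondary"
    return "primary"

primary = {"external-1": 10, "external-2": 10}      # port -> speed in Gb/s
secondary = {"external-3": 10, "external-4": 10}
assert uniform_speed(primary) and uniform_speed(secondary)

print(active_lag(primary_live=2, min_active_members=2))   # primary carries traffic
print(active_lag(primary_live=1, min_active_members=2))   # failover to secondary
```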
