vBrownBag
Yesterday the VMworld team sent out e-mails to everyone whose session was accepted. As always, there are people who submitted a session only to hear that, sadly, it didn’t make it. Did you submit a session that got rejected but still want to share it? Run a #vBrownBag session!

The ProfessionalVMware #vBrownBag is a series of online webinars, held using GoToMeeting, covering various virtualization and VMware certification topics.

Want to get in? Just contact them for a presentation slot in the US, EMEA, APAC or LATAM. You can also reach them on Twitter.

Don’t throw out your session, share it!

As in previous years, there will also be a live #vBrownBag event at VMworld!


Veeam is giving away prizes on a monthly basis, and this time you can win a full pass to VMworld 2013, VMware’s big event full of announcements and sessions. You can win tickets for either the US or the EU edition.


You can enter the contest by simply filling in the form over here. The winner will be selected on April 22. Good luck everyone!

Session abstract

In this session, technical marketing automation experts William Lam and Alan Renouf take you through what’s new in ESXCLI and PowerCLI for the vSphere 5.1 release. Both beginners and experts will learn how to use the exciting new features available when automating VMware products, to make your life easier and more productive.

The session

The session was split into two parts. First, William talked about (some of) the new ESXCLI functions and improvements, with demos. Note that ESXCLI 5.1 still works with vSphere 4.0, 4.1 and 5.0.

There are 82 new commands in the new release:

  • 7 hardware
  • 2 sched
  • 47 network
  • 15 storage
  • 11 system

One of the best things, in my opinion, is the improved SNMP support. Furthermore, William showed demos of several commands, covering:

  • Host maintenance operations
  • Network coredump check improvement
  • SR-IOV configurations
  • Network statistics and monitoring
  • SSD monitoring

There are quite a few new operations, and showing them all would probably take a while :-).
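To give a feel for the areas demoed, here is a short sketch of a few ESXCLI 5.1 invocations; the NIC and device names are placeholders you would substitute for your own environment, so treat this as illustrative rather than copy-paste ready:

```shell
# Put a host into maintenance mode
esxcli system maintenanceMode set --enable true

# Verify the network coredump configuration
esxcli system coredump network check

# Get statistics for a physical NIC (vmnic0 is a placeholder)
esxcli network nic stats get -n vmnic0

# Query SMART data for a device such as an SSD (naa.xxx is a placeholder)
esxcli storage core device smart get -d naa.xxx
```

These commands run in the ESXi Shell or through the remote vCLI against a live host.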

Next up was Alan, who talked about PowerCLI. He started with a small overview of what PowerCLI is: a free product containing over 300 cmdlets (PowerShell commands) that can manage every aspect of your VMware environment, fully integrated into PowerShell.

PowerCLI 5.1 is backwards compatible all the way down to ESX 4.0 and vCenter Server 4.0!

PowerCLI has several snap-ins:

  • Core: managing vSphere
  • Image Builder: working with Image Profiles
  • Auto Deploy: deploy ESXi using PXE
  • License: work with vSphere Licensing
  • Cloud: vCloud Director Providers
  • Tenant (NEW!): vCloud Director Tenants

We then received some information on each part, with a “what’s new” list, and finally information regarding the Tenant snap-in.

There are 361 cmdlets in PowerCLI for Admins and 56 cmdlets for Tenants!
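As a hedged sketch of how you would load one of these snap-ins and start automating (the vCenter hostname below is a placeholder, and I'm only showing the Core snap-in here):

```powershell
# Load the Core snap-in for managing vSphere
Add-PSSnapin VMware.VimAutomation.Core

# Connect to a vCenter Server (hostname is a placeholder)
Connect-VIServer -Server vcenter.example.local

# List all virtual machines and their power state
Get-VM | Select-Object Name, PowerState
```

From there the other snap-ins (Image Builder, Auto Deploy, Cloud, Tenant) are loaded the same way and bring their own cmdlet sets.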

In the end we saw a cool new demo, which will be released as a fling. It basically deployed a full vCloud in a few minutes. I can’t wait until they release this, and I will certainly be posting about it when it does!

Session abstract

While Veeam works great right out of the box, it also provides a lot of options for configuring and tuning your backup infrastructure. Anton Gostev (@Gostev) and Doug Hazelman (@VMdoug) share expert insights and practical advice for VM backup. What’s the best and safest way to configure direct SAN backups? When is network processing mode a better choice than hot add? How do you efficiently write Veeam backups to deduplicating storage, to another site, or to the cloud? What are the best practices for deploying your backup management server? Whether you currently use Veeam Backup & Replication or just want a better understanding of image-level backups of virtual machines, this session on advanced disaster recovery and business continuity is for you.

The session

This session started off with Doug Hazelman giving an introduction to how Veeam 6 introduced backup proxies and how they work. He then handed over to Anton Gostev, who went through tips and tricks in an entertaining format: “The good, the bad, the ugly”.

Scaling your backups
This was the first topic of the session; it answered some frequently asked questions and also gave some basic tips for when you start working with Veeam (or for those already using it, which you should be).

I am not going to list them all but some important ones:

  • Disable the default proxy on the management server
  • Allocate enough RAM for job manager processes
  • Don’t go overboard with backup proxy servers; too many will create too much load on both storage and network
  • Be careful with the reversed incremental backup model
  • Limit your concurrent jobs to a reasonable number; don’t kill your storage or backup repository

Backup repositories
These are the most commonly reported bottleneck. They can be either a Windows or a Linux server (and can even be the same machine as (one of) the backup proxy servers). If you can afford it: use RAID10!

The session went on with the good, the bad and the ugly of each backup model:

  • Direct SAN Access
  • Hot Add
  • Network mode

Direct SAN Access is probably the best because of several reasons:

  • Fastest processing mode
  • Least impact on your production!
  • It doesn’t impact your consolidation ratio

A disadvantage is that it only supports block storage (iSCSI or FC), and if you have an FC SAN you are required to use a physical backup proxy server.

Some tips/tricks:

  • Present VMFS LUNs as read-only
  • Disable automount on your backup proxy servers!
  • Disable disk management on the proxy servers!
  • Disabling MPIO might increase performance.
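As a hedged sketch of the automount tip above: on a Windows backup proxy this is done with the built-in diskpart tool, run from an elevated command prompt:

```shell
diskpart
DISKPART> automount disable
DISKPART> exit
```

This prevents Windows from automatically mounting (and potentially resignaturing) the VMFS LUNs presented to the proxy.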

Another great tip, if you are using an iSCSI SAN, is that you can tweak TCP/IP on your backup proxy. According to Anton this can double performance. You can change it using the following command:

netsh interface tcp set global autotuninglevel=disabled

Hot Add is a different story compared to Direct SAN Access. It’s easier to set up and it supports all types of storage (NFS, iSCSI, FC, local storage). A great tip is that you can use any Windows machine for this, so re-use your current Windows servers and save on license costs ;-). The problem with Hot Add is that it is slower, because the hot-add process itself takes a while.

The ugly part about Hot Add is that you might run into snapshot problems due to the locking mechanism. Another thing is that you must disable CBT on your backup proxy (very important).

Some tips/tricks:

  • Add extra virtual SCSI controllers to your backup proxy server
  • Keep vSphere and Veeam up to date
  • Avoid cloning a backup proxy VM

Finally, the session briefly covered Network Mode. This is by far the slowest method you can use if you are on a 1GbE network; keep in mind that a restore of a full backup can take a very long time. A big tip: if you have to use Network Mode, upgrade to Veeam B&R 6.1, as that release (and higher) brings improved network location awareness.

A final (big) tip: disable VDDK logging on your virtual machines! This can easily save 5 minutes (or even more) of processing time.

Anton also talked about deduplicating storage. He mentioned that deduplication in Windows Server 2012 is amazing, one of the best they have ever seen (and it’s free).

If you currently have storage with inline deduplication, keep in mind that it makes vPower slow; disable it if you can and find another solution. Another tip is that you can reduce the block size (WAN: 256KB / LAN: 512KB), but keep in mind that this might impact your backup performance.

The future

At the end of the session Doug talked about the new upcoming 6.5 release which includes several great tools.

  • Veeam Explorer for Exchange
  • Veeam Explorer for SAN Snapshots

More information about 6.5 can be found in their blog post.

This session was one of the best I attended; it was really useful for people starting out with Veeam, but also for those who have been using it for a while.

Session abstract

This session will describe exciting new developments in implementing VMware vSphere® Fault Tolerance (VMware FT) for multiprocessor virtual machines. This new technology allows continuous availability of multiprocessor virtual machines with literally zero downtime and zero data loss, even surviving server failures, while staying completely transparent to the guest software stack, requiring absolutely no configuration of in-guest software. In this technical preview, we will outline the virtues of VMware FT, provide a detailed look at the new technology enabling VMware FT for multiprocessor virtual machines, offer guidance on how to plan and configure your environments to best deploy these capabilities, examine performance data and showcase a live demo of the technology in action.

The session

This technical preview session was about how VMware plans to get Fault Tolerance working on virtual machines with more than 1 vCPU (the current limit in ESXi 5.1).

The session started with the fact that VMware can offer protection at every level using techniques like High Availability, (Storage) vMotion, (Storage) DRS and Site Recovery Manager (SRM), but the catch is that a failover with these techniques can be noticeable: you might see 1 or 2 ping timeouts, depending on your environment. With Fault Tolerance on multiprocessor virtual machines, VMware wants to reduce (or remove) this.

VMware calls this idea “Continuous Availability“:

  • Zero downtime.
  • Zero data loss.
  • No loss of TCP connections.
  • Completely transparent to guest software:
    • No dependency on Guest OS applications.
    • No application specific management and learning.

Fault Tolerance was introduced in 2009 with the release of vSphere 4.0, received an upgrade with 4.1 in 2010, and saw further improvements in vSphere 5.0, but it was always limited to 1 vCPU, and therefore many people couldn’t use it on their important virtual machines.

The way Fault Tolerance currently works is that it uses the vLockstep protocol (which keeps both virtual machines in sync), a dedicated 1GbE network for FT logging, and shared VMDKs that reside on your shared storage.

With the new release of Fault Tolerance all of this will change:

  • vLockstep is “dead”: it is now replaced with the “SMP FT protocol“.
  • A dedicated 10GbE FT logging network is needed.
  • Both virtual machines now have their own VMDKs.

Basically, when you enable FT on a virtual machine you are creating 2 full machines, not just 1 as in the current release. The VMDKs will be split across separate datastores (you can choose which datastore, according to the demo), but there is still one shared datastore, which they call the “Tie Break Datastore”. This is a backup mechanism for when FT logging fails to do its job.

After the technical information, the presenter showed a demo using a vCenter Server VM with 4 vCPUs and 16GB of RAM. He showed information using esxtop and then rebooted the host running the primary virtual machine. As expected this worked fine: the machine stayed online without losing TCP connections.

This was a really great session (and demo), and I am sure that when this is released, people will want to use it. The only problem I can see is the requirement for a dedicated 10GbE FT logging network, which isn’t something everyone already has.