KB2926765 – Long Install, High CPU Usage

And yet again, it is my displeasure to announce that I have encountered another recently released Microsoft patch which has caused me distress due to an incredibly long install time on some systems.  The patch this time around is KB2926765.  Most of the servers that this was installed on ran it without too much of a problem, but there were definitely some delays.  Just as in one of my other posts, things came to a crawling halt.  Sometimes the CPU usage was maxed, while other times it was only around 15%; notably, VMs tended to have the hardest time with this patch.

As noted in the previous post, some of those who left comments stated that SEP was the culprit.  Once this unfortunate process began, I tried disabling SEP.  That didn’t seem to make much of an impact on the systems which were already installing the patch.  On the systems which had not yet begun the install process, I disabled SEP and the installs seemed to go through quicker.  I’m starting to think that I just need to disable SEP at the beginning of every server patching session.  Eventually (and I mean 30-60 minutes later), the patch does install.  You just have to be patient, even if it means that the cleaning crew has already come, gone, and shut out the lights, leaving you in the only lit room with all of the AC shut off.
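If you do end up disabling SEP before every patching run, the whole sequence can be scripted.  This is only a rough sketch: the smc.exe path and the .msu filename are assumptions for illustration, and smc -stop will prompt for a password if your SEP policy requires one.

```shell
:: Stop the SEP client before installing the patch, restart it afterwards.
:: Paths and the .msu filename below are examples; adjust for your systems.
"C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\smc.exe" -stop

:: Install the update quietly without forcing an immediate reboot.
wusa.exe "C:\Patches\Windows6.1-KB2926765-x64.msu" /quiet /norestart

:: Bring SEP back up once the install finishes.
"C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\smc.exe" -start
```

A reboot is still needed afterwards; this just keeps SEP out of the way during the slow part of the install.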

Good luck with this patch, and if you happen to have any insight, please leave a comment.  Your contributions last time resulted in dozens of happy people posting here to thank us.

EDIT: I’d also like to include that you may experience error code 80240016, stating that updates have failed.  This is the general error code indicating that another installation was running while your system attempted to install an update.  What I found was that even though Windows Update declared that the updates failed, a reboot and a recheck for updates showed that the installations were actually successful.


Backup Exec: Single-item(s) Restore Fails with ApplicationImpersonation Error

Quick Answer: delete the registry key that relates to the proxy configuration settings.

Many years ago, if you wanted to restore items from a user’s mailbox that got deleted, you had to restore the entire mailbox.  It wasn’t a very efficient process, so Backup Exec started to allow single-item restore jobs through GRT restores.  One important thing to note is that in order for the restore to process, it has to stage the entire Information Store onto the Exchange server before it can extract the data.  With that in mind, make sure you have enough free space on your server to hold at least two times the size of your Information Store.

Now to the task at hand: user Jane accidentally deletes a folder from her mailbox and has requested that it be restored.  This seems pretty straightforward, assuming that no Exchange Redirection option is selected.  A 175 GB Information Store will take almost 2 hours to stage itself.  Once completed, it will attempt the restore.  However, you may come across the job failing for the following reason:

V-79-57344-905 – The resource credentials for the restore job were unable to create  a role assignment for ApplicationImpersonation. Review the credentials to ensure it has the rights that are required for ApplicationImpersonation.

Quick searches will bring you to Symantec KB articles like this one: http://www.symantec.com/business/support/index?page=content&id=TECH168297.  Assuming that you have set up Backup Exec properly, the information in such KB articles will prove to be completely worthless and a total waste of time, especially given that each attempt after minor tweaks will take 2 hours to process and leave you with the same ApplicationImpersonation error.  Quite honestly, it’s maddening.

If you have support with Symantec for Backup Exec (see below for details why I did not, which is 100% Symantec’s fault), then you could call them up and request support.  The initial support tier will send you to the same article (or similar ones; there are 3 that say the exact same thing for the most part), which again is more of your time wasted.  Once they have determined that the initial support level cannot help you, you’ll get elevated and pretty much start from scratch with the new tier, which will also take about 3-5 days to find the resolution (if you are lucky, they’ll actually find it).  I know this because this is what I went through 2 years ago.  Because of our lack of a support contract (again, see below), I did not have this option, but thankfully I saved all of the emails regarding this case and eventually found the answer after sifting through all the details:


On the Exchange server, you need to delete the following registry key (remember to export the key first, in case you run into any issues), which is related to the proxy server configuration:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings

Delete this key, then retry the job.  It works.  None of the documents, KB articles, or forums that you will go to will ever mention this key.  The only reason I can offer is the same as my confusion: Why on Earth would the proxy configuration settings inhibit the ability to restore single items to an Exchange mailbox?

It is very important to note that this has to be done on the Exchange server and not the Backup Exec server.  Deleting the key from the Backup Exec server doesn’t affect the outcome.  If you are performing an Exchange redirection, I cannot say which server it needs to be performed on, but my gut says to perform this task on the server where the restore will finish.
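For reference, the fix can be done from an elevated command prompt on the Exchange server.  This is a minimal sketch, assuming C:\Temp exists for the backup; note that depending on how the proxy was configured, WinHttpSettings may exist as a subkey or as a value under Connections, so both forms are attempted below.

```shell
:: Run on the Exchange server, not the Backup Exec server.
:: Export the parent key first so the change can be rolled back.
reg export "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections" C:\Temp\Connections-backup.reg /y

:: Delete WinHttpSettings, trying the subkey form first, then the value form.
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings" /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections" /v WinHttpSettings /f
```

Then retry the restore job.  If you’d rather not touch the registry directly, running netsh winhttp reset proxy should clear the same WinHttp proxy configuration, though I have only verified the registry deletion myself.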


Every single year (for the last 7 years), I spend way too much time with Symantec to get all of our licenses and agents to start and end on the same date.  Managing support contracts this way is far simpler than multiple dates throughout the year.  This isn’t rocket science, but Symantec manages to screw this up every single time.  It doesn’t help that we changed our legal name, so it took 6 years to get all licenses to show the same name, which is obscene because that is very easy to take care of (still, Symantec messed that up, too).  Starting in 2012, they began giving me trouble with some of the renewals, stating that I didn’t have the licenses that I know I did have, despite showing proof.  This led to weeks of discussions which resulted in my sales rep at Symantec falling off the face of the Earth.  I lost all contact, despite all attempts to contact him or his manager.  I had to then contact resellers who shoved me off to Symantec, who then said they wouldn’t help me.  I eventually got everything taken care of after contacting a head honcho who was quite appalled by the scenario, but it took 5 months to sort out.  Yes, 5 frustrating months to handle a simple request.  When all was said and done, I was promised that this would be the last time I would ever encounter this issue.

In the summer of 2013, it was time again to renew, but I ran into the exact same problem.  The Symantec sales rep would send me an Excel workbook that was beyond cryptic and included companies/agencies that were not ours, then expect me to figure out what we needed.  In addition, he told me that I couldn’t renew some because he had no record of me owning certain agents, which was complete hogwash; renewals are far less expensive than new purchases, hence why I was putting up a fight.  I then began working with a different sales rep after complaining to SMB renewals managers.  I was forced to dig through all of my old purchase orders to prove to them that I did indeed have valid support contracts for all of my agents.  Even then, they toyed with the numbers and I spent weeks of back-and-forth with my sales rep just to get the quote correct so that I could process the order.  Yet once again, she fell off the face of the Earth.  This started in June.  It is now November, and she and her manager will still not return calls or emails.  Let’s just say that I’m shopping for a new backup solution and that I will never ever deal with Symantec for anything ever again.

Microsoft KB2868116 – Very Slow Install Process

With today being Patch Tuesday, it’s time to get cracking on those systems.  Everything seemed fine for the most part, but one patch (KB2868116) gave me some issues on a few servers.  The specific issue is that it took over 45 minutes to install.  I’m not sure why or what causes this, but it’s something to take note of.

One system was a virtual machine that hosted a yet-to-be-used SQL server.  The other major time-stalled server was the Hyper-V server that hosted that virtual machine SQL server.  Neither should have taken this long.  No events were logged in the event viewer to suggest an issue.  The Task Manager didn’t show anything too obscene in terms of system resources, although TrustedInstaller.exe did hit 100 MB of memory usage at one point.

Regardless, if you are planning on installing KB2868116, just plan accordingly.  Given what I went through, I would recommend installing all of the other updates, rebooting, then trying to install KB2868116 so that the system can attempt the installation with the caches flushed.  I’m not saying this will have an effect, but it is worth giving a shot.

PROPOSED SOLUTION: Thanks to the many people who commented on this post, apparently the culprit is Symantec Endpoint Protection (SEP).  These users have stated that disabling SEP will allow update KB2868116 to install quickly.  While this is a very easy workaround for single instances, if you are deploying via WSUS, that’s a royal pain to deal with.  Should you be using WSUS, I would consider approving the update late on a Friday so that it will install over the weekend.  While the PC may still require a reboot, at least users won’t get stuck for almost an hour with a wicked slow computer.

EDIT: I’m not sure if this helped “unstick” the process or if it was just a coincidence, but on two servers where it took almost an hour to install said update, I tried to stop the installation by hitting the cancel or stop button.  About 5-10 minutes after doing that, the installation process stopped, but showed that KB2868116 had installed.  After a reboot, I installed the remaining updates without a problem.  Hitting cancel probably won’t actually help the process along.  I liken it to waiting a long time at a red light and then flashing your high beams, thinking you made the light change.