Wednesday, October 27, 2010

Use Windows 7 Event Viewer to track down issues that cause slower boot times

Overview

Windows 7’s Event Viewer includes a new category of event logs called Applications and Services Logs, which includes a whole host of subcategories that track key elements of the operating system. The majority of these subcategories contain an event log type called Operational that is designed to track events that can be used for analyzing and diagnosing problems. (Other event log types that can be found in these subcategories are Admin, Analytic, and Debug; however, describing them is beyond the scope of this article.)

Within the Windows section of these logs is a subcategory titled Diagnostics-Performance, whose Operational log contains a set of events in a Task Category called Boot Performance Monitoring. The Event IDs in this category run from 100 through 110. By investigating the Event ID 100 events, you can find out exactly how long it took to boot your system every time since the day you installed Windows 7. By investigating the Event ID 101 through 110 events, you can identify every instance where boot time slowed down.
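If you prefer the command line, you can pull the same events with PowerShell's Get-WinEvent cmdlet. Here is a minimal sketch; the log name below corresponds to the Diagnostics-Performance path described above:

# List every boot (Event ID 100) recorded since Windows 7 was installed, oldest first
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
    Id      = 100
} -Oldest | Select-Object TimeCreated, Message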


Getting started

You can find and launch Event Viewer by opening the Control Panel, accessing the System and Security category, selecting the Administrative Tools item, and double-clicking the Event Viewer icon. You can also simply click the Start button, type Event in the Start Search box, and press Enter once Event Viewer appears at the top of the results.
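Alternatively, you can open Event Viewer directly by typing the following in the Run dialog or at a command prompt:

eventvwr.msc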

Creating a Custom View

Once you have Event Viewer up and running, you can, of course, drill down through the Applications and Services Logs and locate the Diagnostic-Performance Operational log and begin manually looking through the events recorded in the log. However, you can save yourself time and energy by taking advantage of the new Custom View feature, which is essentially a filter that you can create and save.

To do so, pull down the Action menu and select the Create Custom View command. When you see the Create Custom View dialog box, leave the Logged option set at the default value of Any Time and select all the Event level check boxes. Next, select the By Log option button, if it is not already selected, and click the dropdown arrow. Then, drill down through the tree following the path: Applications and Services Logs | Microsoft | Windows | Diagnostics-Performance. When you open the Diagnostics-Performance branch, select the Operational check box, as shown in Figure A.

Figure A

When you get to the Diagnostics-Performance branch, select the Operational check box.

To continue, type 100 in the Includes/Excludes Event IDs box, as shown in Figure B, and then click OK.

Figure B

Event ID 100 records how long it takes to boot up your system.

When you see the Save Filter to Custom View dialog box, enter a name, as shown in Figure C, and click OK.

Figure C

To save the filter as a Custom View, simply provide an appropriate name, such as Boot Time.

You’ll now repeat these steps and create another Custom View, and this time, you’ll type 101-110 in the Includes/Excludes Event IDs box and name it Boot Degradation.
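If you prefer a quick command-line check over the saved views, the same range filter can be expressed as an XPath query with Get-WinEvent. This is only a rough sketch of the Boot Degradation filter, not a replacement for the Custom View:

# Rough command-line equivalent of the Boot Degradation Custom View (Event IDs 101-110)
Get-WinEvent -LogName 'Microsoft-Windows-Diagnostics-Performance/Operational' -FilterXPath '*[System[EventID >= 101 and EventID <= 110]]'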

Investigating Boot Time

To investigate your Windows 7 system’s boot time, select Boot Time in the Custom Views tree and then sort the Date and Time column in ascending order. When you do, you’ll see a complete history of every time you have booted your system since the day you installed Windows 7. In Figure D, you can see that I have hidden the Console Tree and the Action Pane to focus on the events.

Figure D

By sorting the Date and Time column in ascending order, you’ll see a complete history of every time you have booted your system since the day you installed Windows 7.

As you can see, the first recorded Boot Time on my sample system was 67479 milliseconds in October 2009. Dividing by 1,000 tells me that it took around 67 seconds to boot up. Of course, this was the first time, and a lot was going on right after installation. For example, drivers were being installed, startup programs were being initialized, and the SuperFetch cache was being built. By December 2009 the average boot time was around 37 seconds.

In any case, by using the Boot Time Custom View, you can scroll through every boot time recorded on your system. Keep in mind that there will be normal occurrences that lengthen the boot time, such as when updates, drivers, and software are installed.


Now, if you click the Details tab, you’ll see the entire boot process broken down in an incredible amount of detail, as shown in Figure E. (You can find more information about the boot process in the “Windows On/Off Transition Performance Analysis” white paper.) However, for the purposes of tracking the boot time, we can focus on just three of the values listed on the Details tab.

Figure E

The Details tab contains an incredible amount of detail on the boot time.

MainPathBootTime

MainPathBootTime represents the amount of time that elapses between the time the animated Windows logo first appears on the screen and the time that the desktop appears. Keep in mind that even though the system is usable at this point, Windows is still working in the background loading low-priority tasks.

BootPostBootTime


BootPostBootTime represents the amount of time that elapses between the time that the desktop appears and the time that you can actually begin using the system.

BootTime

Of course, BootTime is the same value that appears on the General tab as Boot Duration. This number is the sum of MainPathBootTime and BootPostBootTime. Something I didn't mention before is that Microsoft indicates your actual boot time is about 10 seconds less than the recorded BootTime. The reason is that it usually takes about 10 seconds for the system to reach an 80-percent idle measurement, at which time the BootPostBootTime measurement is recorded.
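If you want to pull those three numbers out with a script, they appear as named data fields in each event's XML. The sketch below assumes the field names are the ones shown on the Details tab (MainPathBootTime, BootPostBootTime, and BootTime) and reads them from the most recent Event ID 100:

# Read the three timing fields (all in milliseconds) from the latest boot event
$event = Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
    Id      = 100
} -MaxEvents 1
$xml = [xml]$event.ToXml()
$xml.Event.EventData.Data |
    Where-Object { 'MainPathBootTime','BootPostBootTime','BootTime' -contains $_.Name } |
    Select-Object Name, '#text'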

Investigating Boot Degradation

To investigate instances that cause your Windows 7 system's boot time to slow down, select Boot Degradation in the Custom Views tree and then sort the Event ID column in ascending order. Each Event ID, 101 through 110, represents a different type of situation that causes degradation of the boot time.

While there are ten different Event IDs here, not all of them occur on all systems and under all circumstances. As such, I’ll focus on the most common ones that I have encountered and explain some possible solutions.

Event ID 101

Event ID 101 indicates that an application took longer than usual to start up. This is typically the result of an update of some sort. As you can see in Figure F, the AVG Resident Shield Service took longer than usual to start up right after an update to the virus database. If you look at the details, you can see that it took about 15 seconds for the application to load (Total Time), and that is about 9 seconds longer than it normally takes (Degradation Time).


Figure F

Event ID 101 indicates that an application took longer than usual to start up.

An occasional degradation is pretty normal; however, if you find that a particular application is being reported on a regular basis or has a large degradation time, chances are that there is a problem of some sort. As such, you may want to look for an updated version, uninstall and reinstall the application, uninstall and stop using the application, or maybe find an alternative.

(In the case of my friend’s Windows 7 system, there were several applications that were identified by Event ID 101 as the cause of his system slowdown. Uninstalling them was the solution, and he is currently seeking alternatives.)

Event ID 102

Event ID 102 indicates that a driver took longer than expected to initialize. Again, this could be the result of an update. However, if it occurs regularly for a certain driver or has a large degradation time, you should definitely look into a newer version of the driver. If a new version is not available, you should uninstall and reinstall the driver.

Event ID 103

Event ID 103 indicates that a service took longer than expected to start up, as shown in Figure G.


Figure G

Event ID 103 indicates that a service took longer than expected to start up.

Services can occasionally take longer to start up, but they shouldn’t do so on a regular basis. If you encounter a service that is regularly having problems, you can go to the Services tool and experiment with changing the Startup type to Automatic (Delayed Start) or Manual.
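If you would rather script the change than use the Services console, something like the following works from an elevated prompt. The service name here is purely illustrative; substitute the name that Event ID 103 reports:

# Switch a chronically slow service to delayed automatic start (sc.exe requires the space after start=)
sc.exe config "SomeSlowService" start= delayed-auto

# Or set it to manual start instead
Set-Service -Name "SomeSlowService" -StartupType Manual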

Event ID 106

Event ID 106 indicates that a background optimization operation took longer to complete. On all the Windows 7 systems that I investigated, this event identified the BackgroundPrefetchTime as the culprit, as shown in Figure H. Since the Prefetch cache is a work in progress, this should not really represent a problem.

Figure H

Event ID 106 indicates that a background optimization operation took longer to complete.

If you encounter regular or long degradation times related to Prefetch, you may want to investigate clearing this cache and allowing the operating system to rebuild it from scratch. However, bear in mind that doing so can be tricky, and instructions for doing so are beyond the scope of this article.

Event ID 109

Event ID 109 indicates that a device took longer to initialize. Again, if this is happening occasionally, there shouldn’t be anything to worry about. But if it is occurring regularly, you should make sure that you regularly back up your hard disk and begin investigating replacing the device in question.

What’s your take?

In addition to providing improved performance and a new user interface, Windows 7's Event Viewer gives you the ability to investigate boot time and the problems that cause boot degradation. Have you used Windows 7's Event Viewer to investigate boot problems? Have you encountered other Event IDs in the 101 to 110 range that I didn't describe? If so, what were they? As always, if you have comments or information to share about this topic, please take a moment to drop by the discussion area and let us hear from you.

Unix vs. Microsoft Windows: How system designs reflect security philosophy

One of the key differences between the Unix approach to system security and the MS Windows approach is that significant security characteristics of Unix systems are a consequence of good architectural design. Many of these same characteristics, when there is any attempt at all to incorporate them into MS Windows, are implemented as features on top of the OS instead of designed into the system architecture.

For instance, privilege separation in Microsoft Windows has long been a problem for Windows security. Some privilege separation does exist in MS Windows at the architectural level, but it is only a half-hearted implementation, dependent upon user-level features behaving well and being used as intended.

Modularity within the system is another example of architectural security in Unix, but lacking in MS Windows. There are applications that tie into every major part of the MS Windows system in such a promiscuous fashion that something as apparently trivial as a browser exploit can actually reach into kernel space, and from there affect the entire system. The same kind of close coupling between parts of the system does not exist in the base system of Unix.

The importance of privilege separation

Some might complain that all the information you want to protect on your system is stored where your user account can access it, so that privilege separation does not really help security much. These people fail to grasp the full extent of what security benefits you gain from separation of privileges, however. Privilege separation does more than prevent infections and intrusions from gaining access to root privileges.

Malware that makes its way to the system via the network is hindered by the fact that server processes typically run under specialized user accounts on Unix systems. This means that getting in through some network port usually gets the intruder no further than the affected service. This is even true of many services that are started from a normal user account, because those services are typically configured to switch to a dedicated account "owner" when they start, precisely to take advantage of the benefits of privilege separation.

Many tools of malicious security hackers require administrative access to work effectively for them. Keyloggers are one of the major bogeymen of MS Windows security, but they require access to administrator-level components of the system to operate effectively on Unix. This means that a keylogger inserted into the system via some unprivileged user account does not have the access it needs to do its job.

Other security threats, such as rootkits, trojan horses, and botnet clients, also require root access on a Unix system to work. On MS Windows, the lack of rigorous privilege separation short-circuits this defense against malware.

User control and automatic execution

Microsoft Windows is well known for its tendency toward virus and worm infections. This is in large part because MS Windows tries too hard to do everything for the user. Arbitrary malware often executes automatically when effectively unrelated tasks are performed. When you open what appears to be a Microsoft Word document but is, in fact, a cleverly designed malware executable, MS Windows will helpfully redirect execution of the file from Word to whatever is actually needed to run it.

By contrast, Unix systems do not do this sort of thing by default. It is more normal on Unix systems to execute a program with the file in question as an argument to the program execution. Thus, if you try to execute a cleverly disguised piece of malware pretending to be an OpenOffice.org document using OO.o to do so, the operating system will not just automatically ditch OO.o and execute the file by whatever means seems appropriate. Instead, the word processor will just fail to properly open the file, because it is not the right type of file for that application.

Other examples of unwarranted automatic execution in MS Windows include AutoRun. As detailed in U.S. military compromised by removable media malware, the United States Department of Defense was compromised by malware carried on removable media that was automatically executed every time the media was read by an MS Windows computer. While it is possible to turn off AutoRun functionality, it is not always easy, and that functionality should not be the default anyway. Even worse, Windows Update has been known to surreptitiously reactivate capabilities like AutoRun.

A difference in philosophy

These differences in the design and relative security of Unix and Microsoft OSs illustrate a distinct difference in philosophy between them. Unfortunately, the difference appears to be that where Unix has a philosophy of security built into the fundamental design of the system by default, MS Windows has a philosophy of “Who cares about security?”

MS Windows is not alone, however. Certain variants of Unix-like systems appear to be headed down that road as well. While Linux distributions like Ubuntu seem to run afoul of the common negative correlation between security and popularity just like MS Windows, they still have a ways to go to achieve the same level of blatant disregard for security. Part of the reason for this is the Unix-like foundations of the system.

Sadly, it seems all too likely that gap will be bridged in time.

Security vs. popularity

Security is not obscurity. Popularity is not the only reason MS Windows is so poorly secured in general use. Maybe.

One idea in particular keeps coming up in discussions amongst IT professionals and software partisans: that the popularity of a piece of software is inversely correlated with its security. The assumption is that greater popularity of a piece of software makes it a more tempting target, and being a more tempting target makes it less secure.

There is some truth in that idea, but not nearly as much as many people think. If all else is equal, the more-popular software will be compromised first. On the other hand, all else is not equal, and being first is not necessarily the same as being only:

After the most popular piece of software is targeted, the next-most popular will also be targeted, if it has enough of an installation base to make it worthwhile to compromise.
It does not take much, in terms of market share percentage, for a piece of software to be popular enough to attack. For the most widely used types of software, a single percentage point can mean millions of deployments.
Software that is used on more high-value targets will be targeted first, all else being equal. That software is usually not the most popular software.
Software that can be used best as a staging ground for attacking other systems will be targeted first, all else being equal, if for no other reason than the fact that it widens the scope of the attack on more popular software.
The second most popular Web server software is far less secure in practice than the most popular Web server software.
All of this adds up to evidence and reasoning that contradicts the notion that popularity is the proximate cause of a poor security record. The last of these five points is, in fact, a direct counterexample to the idea, so even making causal claims based on nothing but correlation does not support the argument, even though that correlation is the entire argument. Correlation does not imply causation.

There is, however, another way to look at the relationship between popularity and security. While popularity is not the proximate cause of a poor security record, it might have some influence on that security record.

The influence is not, for the most part, because of attracting evildoers to attack the more popular system. If it is also a very well-secured system in the vast majority of deployments, it will provide a difficult enough challenge that many malicious security crackers (especially those who do not target millions of victims at a time) will choose other targets that are less popular but easier to crack.

The influence of popularity has an effect on security through the roundabout effects of a large user base on the way the system is designed. As more people clamor for particular features and interface changes, developers are under increasing pressure to appease those people’s demands. Doing so can easily lead to ill-considered security design decisions, out of control growth of complexity, and development mistakes. This is how poorly secured bloatware generally comes to be.

Microsoft Windows is the most popular end-user, general purpose operating system in the world. Depending on who you ask, and what assumptions you make about how such things are counted, Apple MacOS X is the second most popular. Canonical’s Ubuntu Linux is arguably third, if a guess is needed. Interestingly, that is also the order in which we could rank their security problems.

Microsoft Windows has an atrocious record for dealing with vulnerabilities. It also uses a deeply security-unconscious architecture, and is built on the philosophy that “more is more” — far from a minimalist “less is more” philosophy that recognizes the connection between simplicity and security. These and other difficulties result in a design that simply begs to be compromised. While a number of security focused initiatives have been undertaken to turn the poor security reputation of Microsoft around, many relentlessly bad security policies coupled with certain realities of featuritis and other lack-of-design features add up to a losing battle.
Apple MacOS X is built on a much stronger core architecture, including a microkernel, a primarily BSD Unix userland beneath the GUI, and an innovative high-level API taken straight from ’90s acquisition NeXT Software. Despite all this, Apple’s strict policies — bordering on “control freak” in some cases, and willful ignorance in others — conspire to undermine that foundation and infect Mac OS X with poor security characteristics. One symptom of this is the unconscionably slow response to security vulnerabilities, in many cases actually making MS Windows patching policy look good by comparison.
Finally, Canonical’s Ubuntu Linux is, with every release, rapidly approaching the sort of bloat we have come to expect and loathe from Microsoft’s flagship operating system. At least in part because it primarily relies on open source software developed outside of Canonical, and benefits from the often better security policies of those outside projects, Ubuntu does not suffer the same rate of creeping corruption of security that afflicts Mac OS X. That creeping corruption is still an ongoing problem, however. Ever-more bloat, ever-tighter coupling between system components, and increasing focus on superficial end user enticements as a higher priority than good system design: these things lead to a system that resembles its more popular, less well secured competitors, more and more all the time.
By contrast, consider the case of some less-popular operating systems that have, to some extent, remained unpopular because of their focus on correct design decisions, security conscious maintenance, and keeping the system reasonably lean and stable. Among these operating systems are:

More technically oriented Linux distributions like Debian and Slackware
The “popular” BSD Unix system, FreeBSD
The most security conscious BSD Unix systems — correctness obsessed NetBSD and security auditing obsessed OpenBSD
That is, in fact, arguably the order of these systems from least secure to most secure, as well as from most popular to least popular. It correlates very strongly with their level of disdain for the most widespread popularity where it conflicts at all with good system design. Even the least popular among them have millions of users around the world, in one capacity or another, and would thus be quite worthy targets for malicious security crackers. In fact, the tendency for those on the more-secure end of that spectrum is to be used for public-facing servers, thus also making them on average higher value targets, on a case by case basis. Despite all this, their security records are much more admirable than those of MS Windows, Apple MacOS X, and Ubuntu Linux.

Popularity does not correlate with the failure of real security just because malicious security crackers avoid the second- and third-most popular options. It does, however, correlate well with the failure of real security when that popularity produces social pressures that undermine the security of system design and maintenance.

10 things you should know about IPv6 addressing

Over the last several years, IPv6 has been inching toward becoming a mainstream technology. Yet many IT pros still don’t know where to begin when it comes to IPv6 adoption because IPv6 is so different from IPv4. In this article, I’ll share 10 pointers that will help you understand how IPv6 addressing works.

1: IPv6 addresses are 128-bit hexadecimal numbers

The IPv4 addresses we are all used to seeing are made up of four numerical octets that combine to form a 32-bit address. IPv6 addresses look nothing like IPv4 addresses. IPv6 addresses are 128 bits in length and are made up of hexadecimal characters.

In IPv4, each octet consists of a decimal number ranging from 0 to 255. These numbers are typically separated by periods. In IPv6, addresses are expressed as a series of eight 4-character hexadecimal numbers, which represent 16 bits each (for a total of 128 bits). As we’ll see in a minute, IPv6 addresses can sometimes be abbreviated in a way that allows them to be expressed with fewer characters.

2: Link local unicast addresses are easy to identify

IPv6 reserves certain address prefixes for different types of addresses. Probably the best-known example of this is that link-local unicast addresses always begin with FE80. Similarly, multicast addresses always begin with FF0x, where the x is a placeholder representing the multicast scope.

3: Leading zeros are suppressed

Because of their long bit lengths, IPv6 addresses tend to contain a lot of zeros. When a section of an address starts with one or more zeros, those zeros are nothing more than placeholders. So any leading zeros can be suppressed. To get a better idea of what I mean, look at this address:

[FE80:CD00:0000:0CDE:1257:0000:211E:729C]
If this were a real address, any leading zero within a section could be suppressed. The result would look like this:

[FE80:CD00:0:CDE:1257:0:211E:729C]
As you can see, suppressing leading zeros goes a long way toward shortening the address.

4: Inline zeros can sometimes be suppressed

Real IPv6 addresses tend to contain long sections of nothing but zeros, which can also be suppressed. For example, consider the address shown below:

[FE80:CD00:0000:0000:0000:0000:211E:729C]
In this address, there are four sequential sections separated by zeros. Rather than simply suppressing the leading zeros, you can get rid of all of the sequential zeros and replace them with two colons. The two colons tell the operating system that everything in between them is a zero. The address shown above then becomes:

[FE80:CD00::211E:729C]
You must remember two things about inline zero suppression. First, you can suppress a section only if it contains nothing but zeros. For example, you will notice that the second part of the address shown above still contains some trailing zeros. Those zeros were retained because there are non-zero characters in the section. Second, you can use the double colon notation only once in any given address.

5: Loopback addresses don’t even look like addresses

In IPv4, a designated address known as a loopback address points to the local machine. The loopback address for any IPv4-enabled device is 127.0.0.1.

Like IPv4, there is also a designated loopback address for IPv6:

[0000:0000:0000:0000:0000:0000:0000:0001]
Once all of the zeros have been suppressed, however, the IPv6 loopback address doesn’t even look like a valid address. The loopback address is usually expressed as [::1].
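On a Windows Vista or Windows 7 machine, a quick way to confirm that the IPv6 loopback is responding is simply to ping it:

ping ::1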

6: You don’t need a traditional subnet mask

In IPv4, every IP address comes with a corresponding subnet mask. IPv6 also uses subnets, but the subnet ID is built into the address.

In an IPv6 address, the first 48 bits are the network prefix. The next 16 bits are the subnet ID and are used for defining subnets. The last 64 bits are the interface identifier (which is also known as the Interface ID or the Device ID).

If necessary, the bits that are normally reserved for the Device ID can be used for additional subnet masking. However, this is normally not necessary, because a 16-bit subnet ID and a 64-bit device ID provide for 65,536 subnets, each with quintillions of possible device IDs. Still, some organizations are already going beyond 16-bit subnet IDs.
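As an illustration, here is how a sample address from the 2001:DB8::/32 documentation range breaks down (the subnet and interface values are made up for the example):

2001:0DB8:ACAD:0001:0000:0000:0000:0001
Network prefix (first 48 bits):  2001:0DB8:ACAD
Subnet ID (next 16 bits):        0001
Interface ID (last 64 bits):     0000:0000:0000:0001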

7: DNS is still a valid technology

In IPv4, Host (A) records are used to map an IP address to a host name. DNS is still used in IPv6, but Host (A) records are not used by IPv6 addresses. Instead, IPv6 uses AAAA resource records, which are sometimes referred to as Quad A records. The domain ip6.arpa is used for reverse hostname resolution.
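If you want to see a AAAA record for yourself, nslookup can query for that record type directly. The hostname below is only an example; substitute any host you suspect publishes an IPv6 address:

nslookup -type=AAAA ipv6.google.com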

8: IPv6 can tunnel its way across IPv4 networks

One of the things that has caused IPv6 adoption to take so long is that IPv6 is not generally compatible with IPv4 networks. As a result, a number of transition technologies use tunneling to facilitate cross network compatibility. Two such technologies are Teredo and 6to4. Although these technologies work in different ways, the basic idea is that both encapsulate IPv6 packets inside IPv4 packets. That way, IPv6 traffic can flow across an IPv4 network. Keep in mind, however, that tunnel endpoints are required on both ends to encapsulate and extract the IPv6 packets.
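On Windows Vista and Windows 7, you can check the state of the built-in tunneling interfaces from a command prompt; the following is a quick sketch:

netsh interface teredo show state
netsh interface 6to4 show state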

9: You might already be using IPv6

Beginning with Windows Vista, Microsoft began installing and enabling IPv6 by default. Because the Windows implementation of IPv6 is self-configuring, your computers could be broadcasting IPv6 traffic without your even knowing it. Of course, this doesn't necessarily mean you can abandon IPv4. Not all switches and routers support IPv6, and some applications contain hard-coded references to IPv4 addresses.
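To see whether your Windows machines are already using IPv6, list the addresses that have been configured automatically:

ipconfig /all
netsh interface ipv6 show addresses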

10: Windows doesn’t fully support IPv6

It's kind of ironic, but as hard as Microsoft has been pushing IPv6 adoption, Windows does not fully support IPv6 in all the ways you might expect. For example, in Windows, it is possible to include an IP address within a Universal Naming Convention path (\\127.0.0.1\C$, for example). However, you can't do this with IPv6 addresses, because when Windows sees a colon, it assumes you're referencing a drive letter.

To work around this issue, Microsoft has established a special domain for IPv6 address translation. If you want to include an IPv6 address within a UNC path, you must replace the colons with dashes and append .ipv6-literal.net to the end of the address (for example, FE80-AB00--200D-617B.ipv6-literal.net).

The best tools and methods to track down suspect IP addresses and URLs

There are many reasons why you might need to track down an IP address. You might have discovered a hacking attempt in one of your logs, or you might think you have found a spammer that you want to add to a blacklist. The whys are as numerous as the hows: every operating system has different tools for tracking down an IP address, and so does nearly every application that makes use of one. So where do you start? What's the easiest way to find IP addresses and help locate their sources?

I'm assuming you know what an IP address is and what it does, but that's about it. Much of this information will be common knowledge to the seasoned administrator, but new administrators or support techs might glean some useful information here.

Finding the IP address for a URL

Let's say whatever application you are using gives you a URL that you want to block or track (for whatever reason). If you need the IP address behind that URL, there is a very simple way to get it: use ping. Let's use google.com as an example. To find the IP address of that URL, I would open a command prompt in Windows (or a terminal in Mac OS X or Linux) and type:

ping google.com

From that command you should see something like:

64 bytes from 74.111.159.104: icmp_seq=1 ttl=52 time=29.0 ms

As you can see, the ping tool locates the IP address associated with the URL google.com. In this example, the address is 74.111.159.104. This can be a bit misleading, though, because that IP address might be only one of many addresses associated with the domain. You can find all of the IP addresses associated with a URL using the nslookup command like so:

nslookup google.com

The above command should report something similar to:

Non-authoritative answer:
Name:    google.com
Address: 74.111.159.104
Name:    google.com
Address: 74.111.159.105
Name:    google.com
Address: 74.111.159.106

Name:    google.com
Address: 74.111.159.107
Name:    google.com
Address: 74.111.159.108
Name:    google.com
Address: 74.111.159.109

From the above information, you should notice that the answers received are non-authoritative, which means none of those addresses are in charge of the domain. Let's use the same tool to find the authoritative address for the domain. To do this, first issue the command nslookup with no arguments. This will bring up a prompt that looks like:

>

Now set the querytype like so:


> set querytype=soa

and then enter the domain:

> google.com

You will then see output that looks like that shown in Figure A.

Figure A

Now you can see the IP address in charge of the domain google.com is 216.239.32.10.


Finding the URL for an IP address

If you ping an IP address, you will not receive a domain back. I know, I know…it's unfair, but it's the way it goes. So, how can you get the URL from an IP address? Simple: you take advantage of nslookup again. To do this, issue the command with the IP address as the argument:

nslookup 216.239.32.10

And you will see something like:

Non-authoritative answer:
10.32.239.216.in-addr.arpa    name = ns1.google.com.

You instantly know that the IP address is associated with google.com. Of course, you could also just enter the IP address in your web browser and, if that IP address is associated with a web server, you will see the results instantly. If the IP address is not associated with a web server, you will have to do more research.

You can find out even more information using the whois command like so:

whois 216.239.32.10

The above command will report something like this:

NetRange:       216.239.32.0 - 216.239.63.255
CIDR:           216.239.32.0/19
OriginAS:
NetName:        GOOGLE
NetHandle:      NET-216-239-32-0-1
Parent:         NET-216-0-0-0-0
NetType:        Direct Allocation
NameServer:     NS2.GOOGLE.COM
NameServer:     NS3.GOOGLE.COM
NameServer:     NS4.GOOGLE.COM
NameServer:     NS1.GOOGLE.COM
RegDate:        2000-11-22
Updated:        2001-05-11
Ref:            http://whois.arin.net/rest/net/NET-216-239-32-0-1
OrgName:        Google Inc.
OrgId:          GOGL
Address:        1600 Amphitheatre Parkway
City:           Mountain View
StateProv:      CA
PostalCode:     94043
Country:        US
RegDate:        2000-03-30
Updated:        2009-08-07
Ref:            http://whois.arin.net/rest/org/GOGL
OrgTechHandle: ZG39-ARIN
OrgTechName:   Google Inc
OrgTechPhone:  +1-650-253-0000
OrgTechEmail:  arin-contact@google.com
OrgTechRef:    http://whois.arin.net/rest/poc/ZG39-ARIN
RTechHandle: ZG39-ARIN
RTechName:   Google Inc
RTechPhone:  +1-650-253-0000
RTechEmail:  arin-contact@google.com
RTechRef:    http://whois.arin.net/rest/poc/ZG39-ARIN
#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/whois_tou.html

Now, if someone (identified by either a URL or an IP address) is attacking you or sending you spam, and you want to discover, block, report, or contact them, you can get the information you need.

You have neither an IP nor URL

What if you are sure you're being attacked but have no idea by whom or what? The first place to look is your server's log files. But if those escape you (you either have no idea where to find them or they don't give you the information you need), you might need to employ a network monitoring tool. There are plenty of tools available for this task. One of my favorites is Wireshark, a very powerful, open source, cross-platform tool that can monitor your PC or your entire network. From this monitor, you will see any and all traffic flowing through your network. Should anything look suspicious, you have the IP address that will then help you gain valuable information.
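If you prefer to capture from the command line, Wireshark ships with a console tool called tshark that can do the same job. Here is a minimal sketch; the interface number and address are placeholders for your own values:

tshark -i 1 -f "host 74.111.159.104" -w suspect.pcap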

Sometimes “they” are just too good

There are times when you will be attacked, spammed, spoofed, etc. and you simply will not be able to track down the source. This is an unfortunate truth in the world of a networked computer. And when/if that time comes you will have to do your best to tighten down your security to make sure each and every computer is safe. Just remember, if a computer is attached to the network, no matter what operating system is on it, it is insecure. No machine, no operating system, no firewall, no anti-virus, no anti-malware is perfect.

The most important thing you can do is arm yourself with the tools and knowledge that will allow you to track down an address should you need to. And once you have the address (be it URL or IP address) you can always report the address to your service provider as well as sites like LiveIPMap.


Final thoughts

If you can get the IP address of someone doing nefarious deeds to your system or network you need to have the tools to enable you to gather the information in order to report the suspected address or culprit. Although the most challenging task in this process is actually locating the address, half of the battle is in the information recon. With the tools and methods outlined here, you should have everything you need.

KeyScrambler: How keystroke encryption works to thwart keylogging threats

Thanks to the Internet, financial transactions and purchasing have never been easier. But that convenience comes at a cost: we have to divulge personal financial information. That becomes a problem if our banking credentials get into the wrong hands. One way that happens is through malware that employs keylogging applications. In fact, that's what financial malware is all about. Type in your credit-card information, the keylogger records it and sends it to the attacker, and, well, you know the rest. Thankfully, there is an answer.

Fight back


There are two approaches that help thwart keylogging applications. Anti-malware programs by design will remove malware, including keylogging apps. We all have our favorite anti-malware program; just make sure it is effective against keylogging malcode.

Keystroke encryption is the second approach. It uses a different methodology. It doesn’t care whether a keylogging app is installed or not. The keystrokes are encrypted and all the keylogger records is gibberish.

I have tried several keystroke encryption programs and settled on KeyScrambler by QFX Software. Qian Wang, the president and CEO of QFX Software, developed KeyScrambler. Here are his credentials:

“Qian has been a programmer since age 12 and has had experience working on cutting edge projects at both the M.I.T. Media Lab and the M.I.T. Laboratory for Computer Science. Qian holds a B.S. and a Master’s in Electrical Engineering and Computer Science from M.I.T.”

Questions about KeyScrambler

Before I ran my tests on KeyScrambler, I wanted to understand it better. I contacted Qian Wang, and he obliged me by answering the following questions:

Himanshu Kohli: Preventing keystrokes from being logged, stopping screen and clipboard captures, and removing keylogging software are some of the capabilities included in anti-keylogging programs. What features are included in KeyScrambler?

Qian Wang: KeyScrambler, as the name implies, focuses on preventing keystroke logging by encrypting the user’s keystrokes. At QFX Software, we are big believers in “Do one thing, and do it well”, so we are currently concentrating on providing the best possible protection for the users’ keystrokes.

Himanshu Kohli: The web site says, “KeyScrambler encrypts keystrokes at the keyboard driver level, deep in the operating system, to defeat existing and future keyloggers.” Could you go into more detail on how that is accomplished?

Qian Wang: To understand how KeyScrambler works, it helps to look briefly at how an operating system like Windows actually processes keystroke data. When you type on your keyboard, it looks like the keystrokes are directly sent to the application you’re working on. In reality, they have to go through quite a long path to get there.

The keystrokes first arrive at a hardware controller on the computer’s motherboard, which forwards them to the Windows kernel’s keyboard input stack. They are then processed by the windowing system’s input manager, which sends them to a queue belonging to the application window that currently has input focus.

The application then retrieves the keystrokes from the queue and interprets them according to its own context, and finally the user sees the result of the keys that are pressed. This is a simplified view of what happens, without considering such complex issues as inputting non-English languages.

Many places along this path, there are ways to intercept the keystroke data. Any of these points can be used to perform keylogging, which is why it’s such a thorny problem.
What KeyScrambler does is to try to get to the keystrokes as early as possible in the Windows kernel using our encryption module. That way, as they get passed along the different layers of the OS, it won’t matter if they get logged, because the keystrokes are completely indecipherable.
When these encrypted keystrokes finally arrive at the intended application, the decryption component of KeyScrambler goes to work and turns them back into the keys the user originally typed.
If you are familiar with how SSL/TLS works to encrypt network traffic, this is basically the same principle applied to your keystrokes. And because KeyScrambler isn't focused on defeating any particular technique or scanning for any particular signature, it doesn't matter whether a keylogger is well known or brand new.

Himanshu Kohli: As KeyScrambler’s developer, what do you feel makes it unique?

Qian Wang: As far as I’m aware, when we released KeyScrambler in 2006, it was the first widely available keystroke-encryption product on the market. So for a while we were unique simply by being first.

More importantly, KeyScrambler is a new approach in dealing with the problem of keylogging. What we did was to look at keyloggers specifically, find out what data they’re after, and how they worked to get it. Then we thought about how to protect the data instead. In a sense, KeyScrambler isn’t so much focused on anti-keylogging as it is on keystroke-data protection.
Another feature is the display of the live encrypted stream of keystrokes. I think all too often security software takes a “trust us” stance and only bothers the user when something goes wrong. KeyScrambler tries to show both when and how it’s working.

Himanshu Kohli: We mentioned the two types of anti-keylogger applications used against software keyloggers. Why did you choose the encryption route?

Qian Wang: The “scan and remove” method is the traditional way. It’s the way most anti-malware programs work. The limitations of this approach, such as the length of time it takes to deal with new threats and the potential for false positives, are pretty well known.

Still, such software continues to be useful. In fact, we recommend it as a baseline even when you use KeyScrambler. Most of our users do have a general purpose “scan and remove” type product installed on their computers.

Having the same type of program specifically aimed at keyloggers doesn’t buy you anything new, and it’ll have the same limitations. KeyScrambler complements traditional defenses by providing an additional layer of security.

Himanshu Kohli: Many anti-keylogging apps also prevent screen captures. Is that something that might be included in KeyScrambler?

Qian Wang: Once we feel like we’ve perfected our keystroke-encryption system, we’ll take a close look at some of these other problems. We have some ideas already, but we try not to lose focus. We think the world doesn’t need another tool that promises to do everything, but doesn’t do any one thing particularly well.

Himanshu Kohli: I noticed KeyScrambler works with several password managers including RoboForm. Are there any plans to include the password manager LastPass?

Qian Wang: Since LastPass works as a browser add-on, it should already be supported if it’s used in a browser that’s supported by KeyScrambler. We will retest the latest LastPass version to see if anything has changed. It shouldn’t be a problem to add support for it if it now has a standalone component.

Himanshu Kohli: I wanted to make sure I asked you about hardware keyloggers and if KeyScrambler was able to defeat them.

Qian Wang: KeyScrambler currently does not defeat hardware keyloggers since it only starts working once the keystrokes have reached the Windows kernel. It’s something that we will address with a future version of KeyScrambler, although I think for the average user the threat from hardware keyloggers is much smaller than from software keyloggers.

Himanshu Kohli: I have written several articles about financial malware such as ZeuS and Carberp. A key element of their success is the ability to log keystrokes. Will KeyScrambler prevent that from happening?

Qian Wang: As you’ve noted in your articles, Zeus and Carberp are complex beasts with many variants. KeyScrambler should work as usual against variants that log keystrokes directly.
But some variants steal information directly from an HTML form before it is submitted. Such attacks would fall outside KeyScrambler’s protection envelope at this time. One thing users can do, as I know you’ve suggested, is use a browser such as Google Chrome that has better handling of user data.

Testing KeyScrambler

The first thing that concerned me was the amount of resources KeyScrambler would be using. The application is on all the time, yet it did not tax my computer as shown below:

One thing that makes KeyScrambler unique is the visual indicator of key strokes being encrypted. If so desired, KeyScrambler displays the encryption process in real-time as shown in the screenshot below:


It would not be a good test if I trusted that encryption was indeed taking place. So I enlisted the help of an application called Anti-Keylogger Tester. The test software was written by Guillaume Kaddauch of FirewallLeakTester.com. The first slide shows how Anti-Keylogger Tester is able to capture my keystrokes:

The next slide is with KeyScrambler turned on and Anti-Keylogger is not registering any recognizable keystrokes:

I would be remiss if I did not mention that KeyScrambler comes in three flavors. It is important to check out this web page if interested. It will help you decide which version fits your needs.

Final thoughts


Life today is complicated. Being able to shop and bank online helps simplify that complexity. So when that’s in jeopardy, we need to fight back. Besides, we worked hard for our money and deserve to keep it.

The beauty of a program like KeyScrambler is: Once installed, that’s it. Forget about it and let KeyScrambler be another layer of protection in the fight against financial malware.


Thursday, October 14, 2010

It's Microsoft Patch Tuesday: October 2010

This month’s patches represent a new record. Microsoft kept the out-of-band patches to a minimum, and did respond very, very quickly to a top-tier .NET vulnerability mid-month, by issuing manual fix information within a day or two, and a patch a few days later. I give kudos for the right response on that issue.

Some of these patches are absolutely depressing, patching more than ten vulnerabilities. I almost ran out of adjectives to describe them (mega, jumbo, and giant). In all fairness, though, many of the vulnerabilities look like the same problem replicated in different applications or Windows components. One oddity was a patch that fixed a vulnerability that is only in Windows 2008 R2.

This blog post is also available in the PDF format in a TechRepublic Download. The previous months’ Microsoft Patch Tuesday blog entries are also available.

Security Patches

MS10-071/KB2360131 - Critical (XP, Vista, 7)/Important (2003, 2008, 2008 R2): A whopping ten vulnerabilities are fixed with this one mega-patch for IE 6, 7, and 8. Some of these are remote code execution attacks. You should get this patch installed immediately. 3.7MB - 48.4MB

MS10-072/KB2412048 - Important (SharePoint Services 3, SharePoint Foundation 2010, Office Web Apps, Office SharePoint Server 2007, Groove Server 2010): Issues with “SafeHTML” can allow attackers to have access to information that they should not on a variety of Microsoft collaboration platforms. It’s an important patch, but only if you use these tools. 12.0MB - 21.MB

MS10-073/KB981957 - Important (XP, Vista, 7, 2003, 2008, 2008 R2): Vulnerabilities in the Windows kernel-mode drivers allows a variety of attackers to occur, including escalations of privileges. Luckily, the attacker must be logged on locally, which reduces the area of attack dramatically. Install this patch during your next scheduled patch window. 1.0MB - 5.6MB

MS10-074/KB2387149 - Moderate (XP, Vista, 7, 2003, 2008, 2008 R2): Problems with the MFC library can allow remote code execution attacks if a user who is logged on as a local administrator runs an application that uses MFC. This patch can wait until your normal patch day. 560KB - 1.6MB

MS10-075/KB2281679 - Critical (7)/Important (Vista): An issue with the Windows Media Player Network Sharing Service allows malformed packets to trigger remote code execution. This should only be an issue within your own network, unless you have set up your network to allow access from the outside, so this patch is not urgent. 342KB - 763KB

MS10-076/KB982132 - Critical (XP, Vista, 7, 2003, 2008, 2008 R2): The font system can be exploited with a malformed font embedded in a file to execute a remote code execution attack. Since fonts can be embedded in all sorts of files, you should install this patch as quickly as possible. 81KB - 818KB

MS10-077/KB2160841 - Critical (XP, Vista, 7, 2003, 2008, 2008 R2): This is the second patch in a few months to handle problems with the XAML Browser Applications (XBAPs) that were introduced in .NET 4. You will want to install this patch immediately. 159KB - 314KB

MS10-078/KB2279986 - Important (XP, 2003): Another issue with font handling; this time it is an escalation-of-privileges attack that requires the attacker to be logged on locally. You can hold off until your normal patch time for this one. 642KB - 1.3MB

MS10-079/KB2293194 - Important (Office XP, Office 2003, Office 2007, Office 2010, Office 2004 for Mac, Office 2008 for Mac, Open XML File Format Converter for Mac, Office Compatibility Pack for Office 2007, Microsoft Word Viewer, Office Web Apps): This jumbo-sized patch handles eleven Office security vulnerabilities that are exposed when opening malformed Word files. The attacks are remote code execution attacks that grant the attacker the user’s rights. I recommend that you apply this patch as soon as you can, due to the use of Word files as the attack vector. 3.3MB - 333MB

MS10-080/KB2293211 - Important (Office XP, Office 2003, Office 2007, Office 2004 for Mac, Office 2008 for Mac, Open XML File Format Converter for Mac, Excel Viewer, Office Compatibility Pack for Office 2007): Thirteen Excel problems are fixed with this giant patch, which involve remote code execution attacks with malformed Excel and Lotus 1-2-3 files. Like the previous patch, you should install this one ASAP. 5.0MB - 333MB

MS10-081/KB2296011 - Important (XP, Vista, 7, 2003, 2008, 2008 R2): A problem with the Windows Common Control Library can allow remote code execution attacks, with the logged-on user’s rights, via a third-party SVG viewer. Microsoft rates this as “important,” but I think you will want to consider it “critical.” 1.0MB - 3.8MB

MS10-082/KB2378111 - Important (XP, Vista, 7, 2003)/Moderate (2008, 2008 R2): Windows Media Player can allow remote code execution exploits if it opens malformed media files, granting the attacker the same rights as the logged-on user. Again, the common nature of these files warrants more urgency than the problem would normally justify. 2.4MB - 19.1MB

MS10-083/KB979687 - Important (XP, Vista, 7, 2003, 2008, 2008 R2): This one fixes a remote code execution hole in WordPad and the Windows Shell, of all things, and can be triggered by opening a WordPad file or following (or even selecting!) a shortcut on a network or WebDAV share. Once again, this patch is much more critical than the technical details would indicate due to the attack vectors. 193KB - 5.2MB

MS10-084/KB2360937 - Important (XP, 2003): A local procedure call issue allows execution of escalation of privileges attacks by a locally logged on user. You can wait until your usual patch time for this one. 793KB - 3.3MB

MS10-085/KB2207566 - Important (Vista, 7, 2008, 2008 R2): Issues with how IIS handles SSL traffic can allow denial of service attacks. Patch this during your usual time. 143KB - 488KB

MS10-086/KB2294255 - Moderate (2008 R2): There is an odd issue in Windows Server 2008 R2 that allows users to modify the administrative shares on failover cluster disks. You only need this patch if you use failover cluster disks. 1.7MB - 2.3MB

Other Updates

KB2345886: This patch brings the Extended Protection for Authentication to the Server service. 431KB - 1.7MB

“The Usual Suspects”: Updates to the Malicious Software Removal Tool (12.0MB - 12.4MB).

Updates since the last Patch Tuesday

There has been one security update released out-of-band:

MS10-070/KB2418042 - Critical (XP, Vista, 7, 2003, 2008, 2008 R2): This is the patch for the super-critical .NET vulnerability that was announced in September. This vulnerability allows attackers to read data encrypted on the server including view state, which can be used to exploit many .NET apps. If you have not installed this on your IIS servers, you need to do it immediately. 601KB - 14.3MB

There have been a number of minor items added and updated since the last Patch Tuesday:

Fix for crashes with external USB video devices (KB979538): 179KB - 264KB

IE Compatibility View update (KB2362765): 27KB

Daylight Savings Time update (KB2158563): 151KB - 1.0MB

Changed, but not significantly:

IE 8 update for W7 and 2008 R2 (KB2398632)

Wednesday, October 13, 2010

Microsoft Office for Mac 2011 Arrives in October

Microsoft has announced that the long-awaited Office for Mac 2011 will hit U.S. store shelves at the end of October with three versions in 13 languages.

The Mac version of the popular productivity software will make its debut in three flavors: Office for Mac Home and Student 2011, Office for Mac Home and Business 2011 and Office for Mac Academic 2011. Home and Student 2011, which includes Word, PowerPoint, Excel and Messenger, will retail for $119 for an individual install and $149 for a Family Pack (three installs).

Office for Mac Home and Business 2011 will retail for a higher price: $199 for one install and $279 for two installs. However, it comes with a significant addition: Outlook for Mac, which replaces the less capable Entourage e-mail client. The Academic edition of Office, which includes all of the features of the Home and Business version, will cost $99, but is only available for students, faculty and staff of higher education institutions.

Office for Mac 2011 comes in 13 languages, including two new ones: Polish and Russian. This is on top of the English, Danish, Dutch, Finnish, French, German, Italian, Japanese, Norwegian, Spanish and Swedish versions, most of which will be released by the end of the year.

Mac users, are you going to get the new version of Office? What features do you want the most? Let us know in the comments.

Sunday, October 10, 2010

Top 17 Free Email Services

1. Gmail - Free Email Service
Gmail is the Google approach to email, chat and social networking. Practically unlimited free online storage allows you to collect all your messages, and Gmail's simple but very smart interface lets you find them precisely and see them in context without effort. POP and powerful IMAP access bring Gmail to any email program or device.
Gmail puts contextual advertising next to the emails you read.
Gmail Review | Gmail Resources | Top 50 Gmail Tips | All Gmail Tips

2. AIM Mail - Free Email Service
AIM Mail, AOL's free web-based email service, shines with unlimited online storage, very good spam protection and a rich, easy to use interface.
Unfortunately, AIM Mail lacks a bit in productivity (no labels, smart folders and message threading), but makes up for some of that with very functional IMAP (as well as POP) access.
AIM Mail Review | AIM Mail Tips

3. GMX Mail - Free Email Service

GMX Mail is a reliable email service, well filtered for spam and viruses, whose 5 GB of online storage you can use not only through a rich web interface but also via POP or IMAP from a desktop email program.
More and smarter ways to organize mail could be nice.
GMX Mail Review | GMX Mail Tips

4. Yahoo! Mail - Free Email Service

Yahoo! Mail is your ubiquitous email program on the web and mobile devices with unlimited storage, SMS texting and instant messaging to boot.
While Yahoo! Mail is generally a joy to use, free-form labeling and smart folders would be nice, and the spam filter could catch junk even more effectively.
Yahoo! Mail Review | Yahoo! Mail Resources | Yahoo! Mail Tips

5. Gawab.com - Free Email Service

Gawab.com is a speedy, stable and very usable free email service with 10 GB online space, POP and IMAP access as well as many a web-based goodie.
It's a pity Gawab.com's IMAP implementation does not give you access to labels, and full message search is missing from the web interface.
Gawab.com Review

6. Inbox.com - Free Email Service

Inbox.com not only gives you 5 GB to store your mail online but also a highly polished, fast and functional way to access it via either the web (including speedy search, free-form labels and reading mail by conversation) or through POP in your email program.
Unfortunately, IMAP access is not supported by Inbox.com, and its tools for organizing mail could be improved with smart or self-teaching folders.
Inbox.com Review | Inbox.com Tips

7. FastMail Guest Account - Free Email Service

FastMail is a great free email service with IMAP access, useful features, and a stellar web interface.
It's a pity FastMail does not offer truly effective spam filtering for free accounts, or more storage space for all users.
FastMail Review | FastMail Resources | FastMail Tips

8. Windows Live Hotmail - Free Email Service

Windows Live Hotmail is a free email service that gives you 5 GB (and growing) of online storage, fast search, solid security, POP access and an interface as easy to use as a desktop email program.
When it comes to organizing mail, Windows Live Hotmail does not go beyond folders (to saved searches and tags, for example), its spam filter could be more effective, and IMAP access to all online folders would be nice.
Windows Live Hotmail Review | Windows Live Hotmail Resources | Windows Live Hotmail Tips

9. Yahoo! Mail Classic - Free Email Service
Yahoo! Mail Classic is a comfortable, reliable and secure email service with unlimited storage. A pretty good spam filter keeps the junk out, and you can send rich emails using Yahoo! Mail's HTML editor.
Yahoo! Mail Classic Review | Yahoo! Mail Resources | Yahoo! Mail Classic Tips

10. BigString.com - Free Email Service

BigString.com is a free 2 GB email service that includes rich, secure and certified mail services and lets you password-protect, expire or edit sent messages, for example.
Unfortunately, BigString.com is not equally well equipped for handling incoming mail and lacks organizing tools.
BigString.com Review


Saturday, October 9, 2010

10 tips for troubleshooting PC system slowdowns

When PC performance slows to a crawl, a systematic troubleshooting plan will help you zero in on the cause. Himanshu Kohli runs through likely culprits and describes steps you can take to improve system performance.

Windows 7 has been out for almost a year, and the PCs you bought right after its release may be slowing down now. User complaints are minimal when new PCs are rolled out. They start up quickly, and programs seem to open in a snap. But over time, users begin to notice that their systems are slower or hang up more and more often. While the possible causes of system slowdown are endless, this article identifies 10 common troubleshooting areas you should examine before you consider drastic steps such as reformatting and reimaging or buying new computers.

1: Processor overheating

Chipmakers have recently been working to make processors more efficient, which means they generate less heat. Nonetheless, some modern processors still generate a lot of heat. That’s why all processors require some sort of cooling element, typically a fan of some type. A processor’s Thermal Design Power (TDP) rating indicates, in watts, how much heat the cooling system must be able to dissipate to keep the chip below its maximum temperature. When the processor temperature goes over spec, the system can slow down, run erratically (lock up), or simply reboot. The processor fan may fail for several reasons:

  • Dust is preventing the fan from spinning smoothly.
  • The fan motor has failed.
  • The fan bearings are loose and jiggling.

Often, you can tell if there is a fan problem by listening and/or touching the computer. A fan that has loose bearings starts jiggling and vibrates the case, making a characteristic noise. As time goes by, the sounds and vibrations will become so prominent that you’ll change the fan out just to regain some peace and quiet.

You don’t always need to replace the fan. If it is covered with dust, you can often spray away the dust with compressed air. But even though you might get the fan running again, its life span has likely been reduced because of the overwork. You should keep an extra fan in reserve in case of failure.

Processors may also overheat because the heat sink is not properly placed above the processor or the thermal paste is not of good quality or was applied incorrectly (or not at all) when the system was built. This is more likely to be a problem with home-built systems but can happen with commercially manufactured ones as well. The paste can break down over time, and you may need to reapply it.

Case design is another element that can contribute to or help prevent overheating. Cases with extra fans, better vents, and adequate room inside for good airflow may cost more but can provide superior cooling performance. Small cases that squeeze components together can cause overheating. For this reason, laptops with powerful processors are prone to overheating.

Tip
Another common reason for processor overheating is overclocking. Until heat begins to take its toll, overclocking does allow for significant performance improvements. Because processor overclocking can really cook a processor, most dedicated overclockers do not use regular processor fans. Instead, they use complex — and expensive — water-cooling systems. For more information on overclocking, check out overclockers.com.



Overheating can also be caused by the external temperature (that is, the temperature in the room). Computers no longer have to be kept in cold rooms as they did in the early days of computing, but if the room temperature goes above 80 degrees Fahrenheit (about 27 degrees Celsius), you may find your computers exhibiting the symptoms of overheating. If the temperature is uncomfortable for you, it’s probably too high for your computers. Adequate ventilation is also important.

Most computers today have an option to display the CPU temperature in the BIOS. There are also a number of utilities that will track the temperature of your processor and case, such as Core Temp. If you want to look for other such utilities, check out TechRepublic’s software library and use the search term “temperature.”
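
If you’d rather script a quick check than install another utility, Windows also exposes an ACPI thermal-zone reading through WMI. Here’s a minimal Python sketch (assuming Python is installed, which the article itself doesn’t require); run it from an elevated prompt, and note that not every BIOS exposes this sensor, so no output is inconclusive rather than reassuring:

    import subprocess

    # Read the ACPI thermal zone through WMI (the same data many temperature
    # utilities use). The value comes back in tenths of a kelvin.
    cmd = ["wmic", "/namespace:\\\\root\\wmi", "PATH",
           "MSAcpi_ThermalZoneTemperature", "get", "CurrentTemperature"]
    try:
        output = subprocess.check_output(cmd, universal_newlines=True)
    except (OSError, subprocess.CalledProcessError):
        output = ""  # sensor not exposed, or the prompt was not elevated

    for line in output.splitlines():
        value = line.strip()
        if value.isdigit():
            print("Thermal zone: %.1f C" % (int(value) / 10.0 - 273.15))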

2: Bad RAM

Several situations can lead to RAM-related performance problems with a particular machine:

  • RAM timing is slower than optimal machine spec.
  • RAM has minor flaws that appear only on detailed testing.
  • RAM is overheating.
  • There is insufficient RAM.

In the old days of Fast Page RAM, buying new RAM for your computer was a simple affair. You just needed to know what speed your motherboard supported and the maximum each slot would take. Today, there are many types and speeds of RAM, and the better motherboards may be tolerant of using RAM that does not match the motherboard’s maximum specs. For example, your motherboard may support PC133 RAM but will still work with PC100 RAM. But be aware that you may see performance decreases if you install RAM that is slower than the maximum spec. Some motherboards will even allow you to mix speeds but will default to the slowest RAM installed.

Minor flaws in RAM chips can lead to system slowdowns and instability. The least expensive chips often have minor flaws that will cause your system to slow down or Blue Screen intermittently. Although built-in mechanisms may allow the system to keep working, there is a performance hit when it has to deal with flawed RAM chips.

In the past, no one worried about RAM chips getting hot, because they didn’t seem to generate much heat. But that’s changed with newer RAM types, especially SDRAM. To check for overheating, open your computer’s case, power down, and pull the plug out. Ground yourself and touch the plastic on one of your RAM chips. Ouch! They get pretty hot. If you find that your RAM chips are overheating, you should consider buying a separate fan to cool your memory. If your motherboard doesn’t support a RAM fan, you might be able to get enough additional cooling by installing a fan card that plugs in to a PCI slot.

Of course, one common reason for poor performance that’s related to RAM is simply not having enough of it. Modern operating systems such as Windows 7 and today’s resource-hungry applications, combined with our increasing tendency toward extreme multitasking, result in a need for more RAM. The minimal specified system requirements may not cut it if you’re doing lots of multimedia or running other memory-intensive applications. 32-bit Windows is limited to using 4 GB of RAM, but 64-bit Windows 7 can handle from 8 to 192 GB, depending on the edition. If your system allows, adding more RAM can often increase performance.
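
Before you buy more RAM, it helps to confirm that memory pressure, rather than failing hardware, is what you’re seeing. The sketch below (again assuming Python is available) calls the Win32 GlobalMemoryStatusEx API through ctypes to report total RAM, available RAM, and the current memory load:

    import ctypes

    class MEMORYSTATUSEX(ctypes.Structure):
        # Mirrors the Win32 MEMORYSTATUSEX structure.
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),        # percent of memory in use
            ("ullTotalPhys", ctypes.c_ulonglong),    # total physical RAM, bytes
            ("ullAvailPhys", ctypes.c_ulonglong),    # available physical RAM, bytes
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

    gb = 1024.0 ** 3
    print("Memory load:   %d%%" % status.dwMemoryLoad)
    print("Total RAM:     %.1f GB" % (status.ullTotalPhys / gb))
    print("Available RAM: %.1f GB" % (status.ullAvailPhys / gb))

If the memory load sits near 100 percent during normal work, adding RAM (or trimming startup programs, as discussed later) is likely to help more than any other tweak.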

3: Hard disk issues

Traditional hard drives are mechanical devices that eventually wear out. There are many signs of imminent failure before a hard disk finally gives up. Some of these signs include:

  • Slow access times on the affected drive.
  • An increasing number of bad sectors when running scandisk and chkdsk.
  • Unexplained Blue Screens.
  • Intermittent boot failures.
  • An “Imminent Hard Disk Failure” warning.

Detecting a failing hard disk can be tricky because the early signs are subtle. Experienced computer professionals can often hear a change in the normal disk spin. After the disk deteriorates further, you’ll see the system slow to a crawl. Write processes will take a long time as the system tries to find good blocks to write to. (This will occur if you’re using a robust file system such as NTFS; other file systems will likely Blue Screen the computer.)

When you notice the system slowing down, run scandisk or chkdsk, depending on your operating system. If you notice a bad sector where a good sector existed earlier, that’s a clue that the disk is going bad. Back up the data on the disk and prepare for it to fail soon. Make sure you have a spare disk ready so you can replace it when it fails or replace the disk as soon as you notice the early signs of failure.

Disk noise and scandisk/chkdsk are your best indicators for identifying a failing drive that’s leading to a system slowdown. However, if you are managing a system remotely, or you can’t take the system down for a full chkdsk /R, you can use tools that monitor disk health, such as Executive Software’s DiskAlert.

You may also get a warning message from SMART hard drives that failure is imminent. Sometimes, you’ll get these warnings when the hard drive is fine, due to problems with the hard drive device driver, the chipset driver, or the way the BIOS interfaces with the drive. Check for newer versions of the drivers and BIOS firmware.
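
You can read both indicators from WMI without installing anything. A rough Python sketch follows; the second query uses the MSStorageDriver_FailurePredictStatus class, which not every drive or controller exposes, so treat an error there as “no data” rather than “no problem”:

    import subprocess

    # Win32_DiskDrive reports a coarse health status for each physical drive;
    # "Pred Fail" means the drive itself is predicting failure via SMART.
    print(subprocess.check_output(
        ["wmic", "diskdrive", "get", "Model,Status"], universal_newlines=True))

    # The raw SMART predict-failure flag, where the controller exposes it.
    try:
        print(subprocess.check_output(
            ["wmic", "/namespace:\\\\root\\wmi", "PATH",
             "MSStorageDriver_FailurePredictStatus",
             "get", "InstanceName,PredictFailure"], universal_newlines=True))
    except subprocess.CalledProcessError:
        print("SMART predict-failure data is not exposed on this system.")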

Even if it’s operating properly, your hard disk may be a bottleneck that’s slowing down the rest of your system. See the next item for more information on what you can do about that.

4: Disk type and interface

Once upon a time, buying a hard drive to work with your system was easy. Today, things are more complicated, with many types of drives available, offering differing levels of performance. Most modern motherboards will support more than one type.

For best performance, you may want to dump the old IDE PATA type drives and upgrade to SATA, which comes in several speeds from 1.5 Gb/s to 6 Gb/s. Obviously, the faster drives will also be more expensive. Some new computers also have eSATA connectors for attaching a SATA drive externally. Other options for attaching drives externally include USB and Firewire/IEEE 1394.

Slowdowns may be caused by installing programs or often-used files on slow external drives. If you must use external drives for such files, go with the latest version, such as USB 3.0 (which offers roughly ten times the theoretical bandwidth of USB 2.0, though real-world gains are smaller) or Firewire 800. If you don’t have ports to support the faster version, you can install a card to add support.

New Solid State Drives (SSDs), which generally connect via SATA, can often provide better performance than other drive types, but cost much more per GB of storage space. Windows 7 includes support for TRIM, which optimizes SSD performance. SCSI drives are still around, too, notably in the form of Serial Attached SCSI (SAS) with super fast access times — but they’re expensive and noisy and used primarily for servers.
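
If you suspect a slow external drive is where the often-used files ended up, a crude comparison of sequential write speed can confirm it. The sketch below is plain Python; the two paths are placeholders you’ll need to point at folders that actually exist on each drive, and because of operating-system caching the numbers are only good for comparing one drive against another, not for benchmarking:

    import os
    import time

    def rough_write_speed(folder, size_mb=100):
        # Write a temporary file of size_mb megabytes and time it.
        test_file = os.path.join(folder, "speedtest.tmp")
        chunk = b"\0" * (1024 * 1024)
        start = time.time()
        with open(test_file, "wb") as f:
            for _ in range(size_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # push the data past the write cache
        elapsed = time.time() - start
        os.remove(test_file)
        return size_mb / elapsed

    print("Internal drive: %.1f MB/s" % rough_write_speed("C:\\Temp"))
    print("External drive: %.1f MB/s" % rough_write_speed("E:\\"))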

5: BIOS settings

One frequently ignored cause of system slowdown is the machine’s BIOS settings. Most people accept the BIOS settings as they were configured in the factory and leave them as is. However, slowdowns may occur if the BIOS settings do not match the optimal machine configuration. Often, you can improve machine performance by researching your motherboard’s optimal BIOS settings, which may not be the same as the factory defaults.

There is no centralized database of optimal BIOS settings, but you can employ a search engine such as Google or Bing and use your motherboard name and BIOS as keywords to find the correct settings.
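
To get exact keywords for that search, you can print the motherboard model and BIOS version straight from WMI instead of opening the case or rebooting into Setup. Here’s a small Python sketch using the wmic command that ships with Windows 7:

    import subprocess

    # Motherboard and BIOS details make good search keywords when looking up
    # recommended settings or firmware updates for your particular board.
    for alias, props in [("baseboard", "Manufacturer,Product"),
                         ("bios", "Manufacturer,SMBIOSBIOSVersion,ReleaseDate")]:
        print(subprocess.check_output(
            ["wmic", alias, "get", props], universal_newlines=True).strip())
        print()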

6: Windows services

Many Windows services are enabled by default. A lot of these services, however, are not required for your machine to run properly. You should review the services running on your Windows XP/Vista/7 computer and disable those that you don’t need.

One way to see which services are running is to use the Services applet found in the Administrative Tools menu. In Windows 7, click Start and type “Services” in the search box, then select Component Services. In the console’s left pane, click Services (Local) to display the list of services, shown in Figure A.

Figure A


Use the Component Services console to identify the services running on your system.

Important information contained in the Services console includes the service Name, Status, and Startup Type. You can get more details on a service by double-clicking on it to bring up the service’s Properties, shown in Figure B.

Figure B

The Properties sheet for the service provides detailed information.

You can stop the service by clicking the Stop button. If you are sure that you don’t need the service, click the down arrow in the Startup Type drop-down list box and set the service to Disabled. If you are not sure if you need the service, change the Startup Type to Manual. Then you’ll have the option of manually starting the service if you find that you need it.
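
If you want a quick inventory before you start clicking through the console, the sketch below (Python driving the wmic command included with Windows 7) lists every service that is set to start automatically, along with its current state. It only reads information; any stopping or disabling is still done in the Services console as described above:

    import subprocess

    # List services configured to start automatically, plus their current
    # state, so you can research candidates before changing anything.
    print(subprocess.check_output(
        ["wmic", "service", "where", "StartMode='Auto'", "get", "Name,State"],
        universal_newlines=True))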

Another way of controlling which services start is to use the msconfig utility (see Figure C). In Windows 7, click Start, type msconfig in the search box, and then click msconfig.exe.


Figure C

Use the System Configuration utility to control the behavior of services.

Note that some Microsoft services cannot be disabled because they are considered essential for running the computer. For a list of some Windows 7 services you may be able to disable, see Disable unwanted services and speed up Windows 7.

7: Runaway processes

Runaway processes take up all of the processors’ cycles. The usual suspects are badly written device drivers and legacy software installed on a newer operating system. You can identify a runaway process by looking at the process list in the Windows Task Manager (see Figure D). Any process that takes almost 100 percent of the processing time is likely a runaway process.

Figure D


Use the Task Manager to identify processes that are slowing the system.

There is one exception to this rule, which you’ll see if you click the button to Show Processes From All Users: on a smoothly running system, the System Idle Process should be consuming the majority of the processor cycles most of the time, and that is perfectly normal. If any other process takes up 98 percent of the processor cycles, you probably have a runaway process.

If you do find a runaway process, you can right-click it and click the End Process command. You may need to stop some processes, such as runaway system services, from the Services console. If you can’t stop the service using the console, you may need to reboot the system. Sometimes a hard reboot is required.

For more detailed information about running processes, check out Process Explorer 12.04, shown in Figure E. This is a handy little utility written by Mark Russinovich that includes powerful search capabilities.

Figure E

Process Explorer gives you more detailed information about running processes.
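
If you prefer a scripted spot check, the sketch below samples per-process CPU usage for a couple of seconds and prints the busiest processes. It assumes the third-party psutil package (pip install psutil), which is not part of Windows or of the tools mentioned above:

    import time

    import psutil  # third-party package: pip install psutil

    # Prime the per-process CPU counters, wait, then read them again so the
    # percentages reflect real usage over the sampling interval.
    procs = list(psutil.process_iter())
    for p in procs:
        try:
            p.cpu_percent(None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    time.sleep(2)

    usage = []
    for p in procs:
        try:
            usage.append((p.cpu_percent(None), p.pid, p.name()))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    # A process pinned near 100 percent is a runaway candidate (the System
    # Idle Process being the one normal exception, as noted above).
    for cpu, pid, name in sorted(usage, reverse=True)[:5]:
        print("%6.1f%%  pid %-6d %s" % (cpu, pid, name))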

8: Disk fragmentation

As files are added, deleted, and changed on a disk, the contents of the file can become spread across sectors located in disparate regions of the disk. This is file fragmentation. All Windows operating systems subsequent to Windows NT have built-in disk defragmentation tools, but there are also third-party programs available that give you more options.

If you have traditional hard disks, disk fragmentation can significantly slow down your machine. The disk heads must move back and forth while seeking all the fragments of a file. A common cause of disk fragmentation is a disk that is too full. You should keep 20 percent to 25 percent of your hard disk space free to minimize file fragmentation and to improve the defragmenter’s ability to defrag the disk. So if a disk is too full, move some files off the drive and restart the defragmenter.

Note that SSDs work differently and can access any location on the drive in essentially the same amount of time. Thus, they don’t need to be defragmented.
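
To see whether a defrag pass is even worth scheduling, you can ask the built-in Windows 7 defragmenter for an analysis-only run. A minimal sketch follows; it must be run from an elevated (Administrator) prompt or it will fail with an access error:

    import subprocess

    # /A analyzes the volume without defragmenting it and reports whether a
    # full defrag pass is recommended.
    print(subprocess.check_output(["defrag", "C:", "/A"], universal_newlines=True))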

9: Background applications

Have you ever visited an end user’s desktop and noticed a dozen icons in the system tray? Each icon represents a process running in either the foreground or background. Most of them are running in the background, so the users may not be aware that they are running 20+ applications at the same time.

This is due to applications starting up automatically in the background. You can find these programs in the Startup tab of the System Configuration utility, as shown in Figure F. Uncheck the box to disable the program from starting at bootup.

Figure F


You can disable programs from starting when you boot Windows.
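
Many auto-start entries also live in the registry Run keys, which is where part of that Startup list comes from. The sketch below uses Python 3’s built-in winreg module (called _winreg in Python 2) and only lists the entries; note that a 32-bit Python on 64-bit Windows will see the WOW6432Node view of the machine-wide key:

    import winreg

    RUN_KEYS = [
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    ]

    # List the classic auto-start entries. Read-only: disabling an entry is
    # still best done through msconfig, as described above.
    for hive, path in RUN_KEYS:
        hive_name = "HKLM" if hive == winreg.HKEY_LOCAL_MACHINE else "HKCU"
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        print("%s\\%s" % (hive_name, path))
        index = 0
        while True:
            try:
                name, command, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            print("  %-25s %s" % (name, command))
            index += 1
        winreg.CloseKey(key)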

10: File system issues and display options

Some file systems work better than others for large disk partitions. Windows 7 should always use the NTFS file system for best performance.
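
If you’re not sure which file system a drive is using, you can check from Explorer (right-click the drive and choose Properties) or with a few lines of Python calling the Win32 GetVolumeInformationW API:

    import ctypes

    def filesystem_name(root="C:\\"):
        # Returns the file system (for example NTFS or FAT32) of a drive root.
        buf = ctypes.create_unicode_buffer(64)
        ok = ctypes.windll.kernel32.GetVolumeInformationW(
            ctypes.c_wchar_p(root),
            None, 0,            # volume label buffer (not needed here)
            None, None, None,   # serial number, max component length, FS flags
            buf, len(buf))
        return buf.value if ok else "unknown"

    print("C: is formatted as %s" % filesystem_name())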

Cleaning up the file system will also help speed performance. You can use the Disk Cleanup tool to:

  • Remove temporary Internet files.
  • Remove downloaded program files (such as Microsoft ActiveX controls and Java applets).
  • Empty the Recycle Bin.
  • Remove Windows temporary files such as error reports.
  • Remove optional Windows components you don’t use.
  • Remove installed programs you no longer use.
  • Remove unused restore points and shadow copies from System Restore.

To run Disk Cleanup in Windows 7, click Start and type “Disk Cleanup” in the search box. Select the drive you want to clean up.
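
As a quick sanity check on how much Disk Cleanup is likely to reclaim from temporary files alone, you can total up the current user’s temp folder with a few lines of Python:

    import os
    import tempfile

    # Tally the size of everything under the current user's temp folder.
    temp_dir = tempfile.gettempdir()
    total = 0
    for root, _dirs, files in os.walk(temp_dir):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # the file may be locked or deleted while we scan

    print("%s is using %.1f MB" % (temp_dir, total / (1024.0 * 1024.0)))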

Another way to increase performance is by turning off some of the visual effects that make Windows 7 look cool but use valuable system resources. In Control Panel, click the System applet and in the left pane, click Advanced System Settings. Under Performance, click the Settings button and then the Visual Effects tab. Here, you can disable selected Aero effects or just click Adjust For Best Performance, as shown in Figure G, which disables them all.


Figure G

You can turn off selected (or all) visual effects to increase performance.

Conclusion

When troubleshooting a system slowdown, you should always look for potential hardware problems first. Then, investigate the common software problems. If you use a systematic troubleshooting plan, you should be able to improve the performance of most computers suffering from system slowdown.

Thursday, October 7, 2010

Firefox 4 Beta for Android and Maemo is Now Available

Firefox 4 beta for mobile is now available to download and test. It’s built on the same technology platform as Firefox for the desktop and optimized for browsing on a mobile phone. Firefox beta for mobile comes with many of your favorite Firefox desktop features like Firefox Sync, Add-ons and the Awesome Bar.

A major focus of this release is to increase performance and responsiveness. Two of the big architecture changes are Electrolysis and Layers. Our alpha contained Electrolysis which allowed the browser interface to run in a separate process from the one rendering Web content, resulting in a much more responsive browser. This beta brings the Layers pieces which improve overall performance and in graphics areas such as scrolling, zooming and animations. For more technical details, see Mozilla mobile engineer Matt Brubeck’s blog.

Firefox 4 Beta includes Firefox Sync to create a seamless Web browsing experience between desktop and mobile. With Firefox Sync, you can take your browsing history, bookmarks, tabs, passwords and form-fill data with you anywhere so you never have to retype passwords or long URLs again. Your Firefox data is completely encrypted end-to-end between your computers so that only you have access to it. (For those using Firefox Sync, be sure you’re up to date.)

Firefox 4 Beta for mobile is a significant step forward in sharing a personalized, seamless and encrypted Web experience across devices. Developers have the power to use the latest Web technologies like HTML5, CSS and JavaScript to build fast, powerful and beautiful mobile apps and add-ons that can reach millions of devices. We are excited to see the innovative and valuable mobile add-ons that developers will build for Firefox.

Half of Microsoft's employees aren't Ballmer fans

Microsoft employees have had a rough time in the last few years: Vista wasn’t the success they’d hoped for (though Windows 7 was), KIN flopped, and so on. It’s a rollercoaster ride, and probably quite frustrating for those softies who want to see Microsoft succeed. CNNMoney.com reports that an ongoing survey of over 1,000 Microsoft employees shows that many recent failures may be getting blamed internally on Microsoft’s CEO. In the survey, only half of the respondents believed that Ballmer’s performance was satisfactory.

The mobile space has been particularly troublesome for Microsoft in recent years, with Apple and Google both taking Microsoft on with well-designed and well-marketed products. Apple in particular has all but changed the mobile computing landscape with the iPad, a market that Microsoft had previously tried to create with its Tablet PC and Origami projects. As a Microsoft employee, the lack of response from the software giant (including the cancellation of Courier) must be frustrating. Add those issues to the departure of Bach and Allard, and you can quickly see why that frustration may be directed at Ballmer.

With the stock underperforming and confidence falling, Microsoft has revealed that it will not be paying Ballmer his maximum bonus this year. Could it be time for a leadership change? There were some crazy rumors in 2006 that Bill Clinton could be the next CEO. What do you think? Is it time for Ballmer to leave, and if so, who should replace him?

Saturday, October 2, 2010

10 red flags that shout 'Stay away from this project!'

Over time, I have been involved in some of the worst projects ever as a freelancer, consultant, or some other “non-employee” relationship. When you are a direct hire to a company, you do not have the freedom to pick and choose what you work on. But as an outside person being paid to work specifically on one project, you do have the choice. I have been burned so many times it isn’t funny, but I have learned a lot from my mistakes. Here are 10 of the biggest red flags I’ve encountered. I am sure you have more to share in the discussion thread.


1: No clear spec or goals

All too often, I’ve been approached to work on a project, but the person trying to arrange the deal can’t tell me what they really need done. It’s not that they are under some strange code of silence. They really have no clue what they want. They have a general idea of what the finished product should look like and a really good understanding of its differentiating factors or killer features, but outside of that they have not thought it through. This is one of the most common and most significant danger signs! How many hours’ worth of work do you want to throw away on a regular basis because the client realized after you built it that what they asked for wasn’t what they needed? At the very least, these kinds of projects should be contracted only at a per-hour rate.

2: Funding problems

Are you being offered a percentage of revenue, or payment spread out over time? If so, that is a warning that you probably won’t get paid at all. It’s not that they intend to rip you off, but a company that can’t afford to pay cash on the barrelhead probably lacks the financial power to get the job finished. Even if you get the project completed, they are still going to be struggling to come up with a marketing department, hire a support staff, and so on. Sure, giving someone a slice of the pie is a good incentive. But the only way to make that work well is to pay people and offer stock options in addition to their pay.

3: Product pre-sold to a client

One of the worst mistakes companies make is pre-selling a product to a client. All too often, the customer saw some fake screenshots of an application that didn’t exist and wrote a check. Now, you are being asked to bail out your customer because they sold vaporware. You’ll be under the gun the whole time on a project like this. The end customer has (rightfully) certain expectations on functionality and timelines. Meanwhile, no one has even determined whether the functionality can be done (a lot of it probably can’t be done reasonably well) or whether the timeline can be met (it can’t; I guarantee it). Stay away from this project! Do you really want to do business with the kind of client who sells something that does not exist yet? And do you really want to have the pressure of rescuing them on your back?

4: Spending money in the wrong places

Here’s another bad scenario: The customer has a snazzy logo and a gorgeous Web site but no product yet, and they want to cut your rates to the bone. What does this tell you? That their spending priorities are in the wrong spot. A client who seems to have all of the marketing in the world but no ability to actually produce a product is a dangerous client. Why? Because if they have limited funds to begin with, they’ve been spent on the sizzle but they forgot to buy the steak. Another very real concern is that a client who is so marketing focused will have a tendency to sell the product before it’s ready to be sold, which we know leads to disaster.

5: A long string of previous consultants

Has the client had two or more consultants already working on this project? If so, it’s likely that there is something very wrong with it. Ask them why the previous consultants are gone. If there is a lot of hedging, that is a bad sign. Sometimes when you ask about previous consultants, the client badmouths them: “They were all a bunch of jerks who cared more about sending bills than doing work” or perhaps, “They sold us a bill of goods and didn’t deliver.” While this is often the case with consultants (we all know bad consultants are out there), it is also how a bad client often perceives a consultant that can’t fulfill unrealistic demands or assumptions. Another problem is that you don’t always want to come in behind other workers who may have left in a hurry over a billing dispute or other issue. Chances are, the documentation is nonexistent and the work is a disaster.

6: The wrong workers

One of the worst situations I have seen is a client who has been trying to get a group of college students (or worse, high school students) to do the job. Usually, after the project has dragged on forever and nothing real has been accomplished, the client starts looking for a true professional to fix the mess. Of course, they’ve lost a huge amount of time and spent a fair amount of money on the amateurs. This is usually only a problem with the smallest companies, luckily. These kinds of situations stink, because the client is already behind schedule and over budget, and if they really were able to pay a pro, they would have done so since Day 1. The sure sign of this scenario is a warped vision of pay scale reality. When they offer you a per-hour price worthy of “Would you like to gigantic size that?” you know you are being asked to replace a part-timer who is new to the industry.

7: “Good buddy syndrome”

“Good buddy syndrome” (GBS for short) is when a customer has a close friend or relative they trust and take advice from, whom you will never get to meet, let alone refute. Usually, these good buddies have no clue what they’re talking about. For example, the project’s specifications make it clear that you really need to use a product that costs a bit of money, but then good buddy insists that freeware product XYZ can be made to work if you put enough effort into it. After you do the work the way good buddy says it should be done, your bill is bigger than the purchase price of the product you spec’ed would have been, and now the project is behind schedule, too. Sadly, GBS is usually hard to detect until you are actually working on the project, but sometimes there are signs you can see in advance. For example, you can ask the client if there is anyone who advises them from time to time or if they have someone they trust on technical matters who isn’t at the table. If they say “yes,” try to meet that good buddy and find out whether they can keep their distance or offer useful advice — or whether they will be interfering in a negative manner from afar.

8: Lack of experience

All too often, a “client” is really just one or two people with little experience in running a project or a company that has an idea for a product. These clients are usually the most forgiving in terms of understanding that you are a freelancer who has other clients and maybe even a day job. But they also tend to lack the things you will need to do your job well and get paid. For example, they may not have a stable money supply or will be counting on an unrealistically high revenue stream in the beginning to fund continued development. If you are doing business with an inexperienced client, you will need to be extremely diligent when making up your mind to work with them. Do the same level of research that an investor would do, because by taking them on as a client, you are essentially investing in their project.

9: Hostile employees

Sometimes, a company will bring in a consultant whom their employees do not want to be there. Perhaps the employees are concerned that their jobs will be lost or maybe they wanted to work with the technologies you are being asked to implement. Maybe egos are bruised or perhaps the employees are simply against the project entirely, regardless of who is doing the work. No matter what the root cause is, taking on a contract where the full time employees do not want you there is always a challenge, and the pay usually does not justify the hassle. It is pretty easy to spot these situations; there will be one or two people constantly sniping or making sarcastic remarks, there will be an edge of constant tension on phone calls, and some people will be challenging your abilities in public. Unless this is a project that is critical for your long-term survival, you should avoid this scenario whenever possible.

10: The “skunk works” project

Once in a while, an outside consultant will be used by a department as a “solution” to an internal political problem. For example, the IT department might refuse to tackle a project, so the stakeholder decides to use operational budget to bring in a consultant to do it anyway. This kind of project combines some of the worst red flags listed above into the mother of all debacles. First of all, there are not just hostile employees (#9), there are usually entire “hostile departments.” The project is officially off limits. Next, you have a budget problem (#2); the project has not been specifically budgeted and never will be, due to the official resistance to the project. On top of that, the stakeholder wants to get you in and out as quickly as possible, because they have no idea how long they can get away with having you working on the project. Of course, there is going to be a total lack of planning (#1) because this was never a fully thought out project to begin with. All said and done, this is a bad situation you want to steer clear of.