Saturday, July 31, 2010

The U.S. needs cyberwarriors

If you’re looking for an interesting field to get into that has some job openings, you might want to consider cybersecurity. Last week, NPR’s Tom Gjelten reported on the shortage of cyber technicians and engineers.

According to Jim Gosler, a veteran of the CIA and the National Security Agency who is currently working for the Energy Department, there are only about 1,000 people in the U.S. with the skills needed for frontline cyberdefense, and 20 to 30 times that number are needed.

A forthcoming report from the Center for Strategic and International Studies says the shortage is now desperate, with the United States losing ground to China.

To answer the need, officials are looking in a couple of places. First, they’re turning their eyes toward cybercriminals. After all, who better to find flaws in your system than someone who can hack it?

Second, some members of Congress are promoting a U.S. Cyber Challenge, a national talent search to find up to 10,000 potential cyber warriors ready to play both offense and defense. Schools around the country would create technical teams that compete against one another at hacking into test systems.

Makes sense. If you want to know where your vulnerabilities are, tap the people who are best at finding them, using the same techniques an attacker would.

Saturday, July 24, 2010

How to Hack Wireless Internet Connections

Explains how to ethically hack a wifi wireless internet connection using free hacking software.




Have a laptop, or a wireless internet card in your PC? Have you ever lost your WEP/WPA key and wanted to retrieve it? Well, with Aircrack you can.



Aircrack is a set of tools for auditing wireless networks. It consists of: airodump (an 802.11 packet capture program), aireplay (an 802.11 packet injection program), aircrack (static WEP and WPA-PSK cracking), and airdecap (decrypts WEP/WPA capture files).



I have used Aircrack to try to hack my own wireless network, and I am happy to say I am as secure as I can get wirelessly. Again, Aircrack comes with the following four pieces of software to help you audit and secure your wireless internet connection:



  • airodump (an 802.11 packet capture program)


  • aireplay (an 802.11 packet injection program)


  • aircrack (static WEP and WPA-PSK cracking)



  • airdecap (decrypts WEP/WPA capture files)
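To see how the four tools fit together, here is a minimal sketch (in Python, building shell commands only, not running them) of the order in which they are typically chained when auditing your own network. The tool names follow the aircrack-ng suite, but treat the exact flags as assumptions to check against the documentation for your installed version:

```python
# Sketch of the aircrack audit workflow: capture, inject, crack.
# Commands are only constructed here, never executed; flag spellings
# are assumptions based on common aircrack-ng usage.

def build_audit_commands(interface, bssid, channel, capture_prefix="capture"):
    """Return the command sequence for a WEP audit of your OWN network."""
    capture = [  # 1. airodump: capture 802.11 packets on the target channel
        "airodump-ng", "--bssid", bssid, "--channel", str(channel),
        "--write", capture_prefix, interface,
    ]
    replay = [   # 2. aireplay: replay ARP packets to speed up IV collection
        "aireplay-ng", "--arpreplay", "-b", bssid, interface,
    ]
    crack = [    # 3. aircrack: run static WEP key recovery on the capture
        "aircrack-ng", f"{capture_prefix}-01.cap",
    ]
    return [capture, replay, crack]

for cmd in build_audit_commands("wlan0", "00:11:22:33:44:55", 6):
    print(" ".join(cmd))
```

Only run these tools against a network you own or are authorized to audit.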



    External Links


  • Aircrack's Official Homepage


  • Visit Aircrack on Freshmeat.com



    Download


  • Download the Package (Working Mirror)




    Tutorial


  • Cracking_WEP_and_WPA_Wireless_Networks.pdf


  • Case_of_a_Wireless_Hack.pdf

Friday, July 23, 2010

    How to recover Windows XP administrator password

    (1)Lost User Passwords. (Windows XP)

    If you have lost or forgotten a user account password in Windows XP, simply log in as the computer administrator and go to Control Panel > User Accounts. Here you will be able to reset the password for any of the system's user accounts.

    Lost Administrator Password. (Windows XP)

    Slightly more work needed if you lose or forget the Windows XP administrator password.

    First, reboot Windows XP in Safe Mode by restarting the computer and pressing F8 repeatedly as it starts up. Then (in Safe Mode) click Start, then Run. In the Open box, type “control userpasswords2” without the quotes – the quotes here are only to set off what you have to type.

    You will now have access to all the user accounts, including the administrator's account, and will be able to reset the lost password.

    Just click the administrator's user account, and then click Reset Password.

    You will need to enter a new password in the New password and the Confirm new password boxes, and confirm by clicking OK.
    All done, you have recovered the lost administrator's password!


    (2)Can't Log On to Windows XP?

    If that’s your only problem, then you probably have nothing to worry about. As long as you have your Windows XP CD, you can get back into your system using a simple but effective method made possible by a little known access hole in Windows XP.

    This method is easy enough for newbies to follow – it doesn’t require using the Recovery Console or any complicated commands. And it’s free – I mention that because you can pay two hundred dollars for an emergency download of Winternals ERD with Locksmith, a utility for unlocking lost Windows passwords. See here http://www.winternals.com/products/repairandrecovery/locksmith.asp

    ERD is an excellent multi-purpose product, but you should know it is not a necessary one if you have a healthy system and your sole problem is the inability to log on to Windows due to a forgotten password. It is not necessary because you can easily change or wipe out your Administrator password for free during a Windows XP Repair. Here’s how, with a step-by-step description of the initial Repair process included for newbies.

    1. Place your Windows XP CD in your CD-ROM drive and start your computer (it’s assumed here that your XP CD is bootable – as it should be – and that you have your BIOS set to boot from CD).

    2. Keep your eye on the screen messages for booting from your CD. Typically, it will be “Press any key to boot from CD.”

    3. Once you get in, the first screen will indicate that Setup is inspecting your system and loading files.

    4. When you get to the Welcome to Setup screen, press ENTER to Setup Windows now

    5. The Licensing Agreement comes next - Press F8 to accept it.

    6. The next screen is the Setup screen which gives you the option to do a Repair.

    It should read something like “If one of the following Windows XP installations is damaged, Setup can try to repair it”

    Use the up and down arrow keys to select your XP installation (if you only have one, it should already be selected) and press R to begin the Repair process.

    7. Let the Repair run. Setup will now check your disks and then start copying files which can take several minutes.

    8. Shortly after the Copying Files stage, you will be required to reboot. (This will happen automatically – you will see a progress bar stating “Your computer will reboot in 15 seconds.”)

    9. During the reboot, do not make the mistake of “pressing any key” to boot from the CD again! Setup will resume automatically with the standard billboard screens and you will notice Installing Windows is highlighted.

    10. Keep your eye on the lower left hand side of the screen and when you see the Installing Devices progress bar, press SHIFT + F10. This is the security hole! A command console will now open up giving you the potential for wide access to your system.

    11. At the prompt, type NUSRMGR.CPL and press Enter. Voila! You have just gained graphical access to your User Accounts in the Control Panel.

    12. Now simply pick the account you need to change and remove or change your password as you prefer. If you want to log on without having to enter your new password, you can type control userpasswords2 at the prompt and choose to log on without being asked for password. After you’ve made your changes close the windows, exit the command box and continue on with the Repair (have your Product key handy).

    13. Once the Repair is done, you will be able to log on with your new password (or without a password if you chose not to use one or if you chose not to be asked for a password). Your programs and personalized settings should remain intact.

    I tested the above on Windows XP Pro with and without SP1 and also used this method in a real situation where someone could not remember their password, and it worked like a charm to fix the problem. This security hole allows access to more than just user accounts. You can also access the Registry and Policy Editor, for example, and it’s GUI access with mouse control. Of course, a Product Key will be needed to continue with the Repair after making the changes, but for anyone intent on gaining access to your system, this would be no problem.

    And in case you are wondering, NO, you cannot cancel install after making the changes and expect to logon with your new password.

    Cancelling will just result in Setup resuming at bootup and your changes will be lost.

    Ok, now that your logon problem is fixed, you should make a point to prevent it from ever happening again by creating a Password Reset Disk. This is a floppy disk you can use in the event you ever forget your log on password. It allows you to set a new password.

    Here's how to create one if your computer is NOT on a domain:

    * Go to the Control Panel and open up User Accounts.
    * Choose your account (under Pick An Account to Change) and under Related Tasks, click "Prevent a forgotten password".
    * This will initiate a wizard.
    * Click Next and then insert a blank formatted floppy disk into your A: drive.
    * Click Next and enter your logon password in the password box.
    * Click Next to begin the creation of your Password disk.
    * Once completed, label the disk and store it in a safe place.

    How to Log on to your PC Using Your Password Reset Disk

    Start your computer and, at the logon screen, click your user name and leave the password box blank or just type in anything. This will bring up a Logon Failure box with the option to use your Password Reset disk to create a new password. Click it to initiate the Password Reset wizard. Insert your password reset disk into your floppy drive and follow the wizard, which will let you choose a new password for your account.

    Note: If your computer is part of a domain, the procedure for creating a password disk is different.

    Restart Windows without restarting the Computer

    When you click the Shutdown button, simultaneously hold down the Shift key. If you hold Shift while clicking Shutdown, Windows will restart without the computer going through a full hardware reboot.
    This is equivalent to the term “hot reboot.”

    Installing XP in 10 minutes

    Here is a great trick to bypass the roughly 39-minute countdown while installing Windows XP.

    We all know that after loading and copying files from the boot disk to temporary space, the system requires a first reboot.

    After that reboot, press Shift+F10 to open a command console, then type taskmgr to open Task Manager. There you will find a running process named setup.exe.

    Now set that process's priority to the maximum by right-clicking it and choosing Set Priority.

    We are done.

    You should find XP installed in about 10 minutes, give or take a couple of minutes.
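The same priority bump can also be issued from the Shift+F10 console without opening Task Manager, using wmic. Below is a small sketch that only builds the command (it never runs it); the wmic syntax and the numeric priority value shown are assumptions to verify on your own system before relying on them:

```python
# Build (but do not execute) a wmic command that raises a process's
# priority class. 128 is the conventional value for "high priority";
# treat the exact syntax as an assumption to check against wmic /? .

def priority_command(process_name, priority=128):
    """Return the wmic command, as a list of arguments, for set-priority."""
    return [
        "wmic", "process",
        f'where name="{process_name}"',
        "CALL", "setpriority", str(priority),
    ]

print(" ".join(priority_command("setup.exe")))
```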

    Top 10 unknown Google tricks

    Below is a list of our top ten Google tricks many users don’t know about.

    1. Definitions – Pull up the definition of the word by typing define followed by the word you want the definition for. For example, typing: define bravura would display the definition of that word.
    2. Local search – Visit Google Local enter the area you want to search and the keyword of the place you want to find. For example, typing: restaurant at the above link would display local restaurants.
    3. Phone number lookup – Enter a full phone number with area code to display the name and address associated with that phone number.
    4. Find weather – Type weather followed by a zip code or city and state to display current weather conditions and forecasts for upcoming days.
    5. Track airline flight – Enter the airline and flight number to display the status of an airline flight and its arrival time. For example, type: delta 123 to display this flight information if available.
    6. Track packages – Enter a UPS, FedEx or USPS tracking number to get a direct link to track your packages.
    7. Pages linked to you – See what other web pages are linking to your website or blog by typing link: followed by your URL. For example, typing link:xxxxxxxxxxx displays all pages linking to that site.
    8. Find PDF results only – Add filetype: to your search to display results that only match a certain file type. For example, if you wanted to display PDF results only type: “dell xps” filetype:pdf — this is a great way to find online manuals.
    9. Calculator – Use the Google Search engine as a calculator by typing a math problem in the search. For example, typing: 100 + 200 would display results as 300.
    10. Stocks – Quickly get to a stock quote price, chart, and related links by typing the stock symbol in Google. For example, typing: msft will display the stock information for Microsoft.
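Since these operators are just strings prepended to an ordinary query, a small helper can build the corresponding search URLs programmatically. A minimal sketch, using the standard Google search endpoint (the operator spellings mirror the list above):

```python
# Build Google search URLs for the operator tricks listed above.
from urllib.parse import quote_plus

def google_url(query):
    """Return a Google search URL for the given query string."""
    return "https://www.google.com/search?q=" + quote_plus(query)

print(google_url("define bravura"))           # trick 1: definitions
print(google_url("weather 90210"))            # trick 4: weather by zip code
print(google_url('"dell xps" filetype:pdf'))  # trick 8: PDF-only results
```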

    Thursday, July 22, 2010

    The changing face of IT: Five trends to watch

    1. The consumerization of IT

    We have been discussing the consumerization of IT on TechRepublic since 2007, when The Wall Street Journal published tips to help business professionals circumvent their IT departments. Back then, it was primarily an annoyance involving a few power users who were bringing their own Palm Treos into the enterprise and using some unauthorized Web tools to get their work done.

    Since then, consumerization has developed into a full-blown trend that nearly every organization has to deal with, except for the ones with the tightest security or the most centralized IT departments. Workers are bringing their own laptops and smartphones into the office and connecting them to corporate systems. More people than ever are telecommuting or working from home for a day or two a week. And the number of Web-based tools has increased dramatically, including many that have become favorites of business users, such as Evernote, Dropbox, and Google Docs.

    This puts the onus on IT to craft pragmatic and effective computing policies and to help users understand which tools are safe to use and for which kinds of activities.
    2. The borderless network

    The old security model was for IT to build a big moat around the corporate network and only let trusted, authorized employees come across the well-guarded drawbridge and into the proverbial castle. However, that model has broken down as companies have had to make more and more exceptions — for example, VPN users working from home, smartphone users on the go, and extranet users via company partnerships.

    As a result, today’s IT security model is more about risk management than network protection. Companies have to identify their most important data and then make sure it’s protected no matter who is accessing it, from wherever, and on whatever device.
    3. The cloudy data center

    One of the most expensive and cumbersome aspects of the company headquarters — and even some large regional offices — can be the data center. It can make it difficult to reconfigure buildings because you always have to worry about the data center ramifications, which can be extremely costly and limiting.

    That’s why some companies are looking to break the cycle and either consolidate and minimize their own internal data centers or outsource the data centers themselves. Some are doing it by going with more cloud computing applications like Salesforce.com. Some are renting server capacity from vendors such as Amazon AWS and Rackspace. Others are going the more traditional route and simply renting data center space from third-party data centers that have already solved problems like power, cooling, and telecom redundancy.

    Vendors such as EMC and Microsoft see this happening and they want to be part of the mix as well, so they are encouraging companies to virtualize all of their servers and create a “private cloud” that has the flexibility of a cloud solution and the privacy and security of a homegrown server solution.
    4. The state of outsourcing

    Every time you mention the word “outsourcing” among IT professionals (especially in the U.S.) there’s a predictable knee-jerk reaction. In most cases, they are associating outsourcing with “off-shoring,” the practice of moving entry-level help desk and programming jobs to foreign countries (usually in Southeast Asia) where the labor costs are much cheaper.

    However, outsourcing is a much larger trend, and off-shoring is just one part of it. Outsourcing is thriving in many different forms, and it’s reasonable to expect that it will accelerate. Big companies such as IBM, HP, and Verizon Business are offering to take over many of the maintenance functions for IT departments. In many cases, they’ll even keep IT pros on staff and on-premises but those IT pros will now get their paycheck from the vendor. The big benefit here is 24/7 monitoring since these large vendors have engineers in their sophisticated NOCs at all times, plus they have specialists who can solve more difficult problems when the need arises.

    When companies move their maintenance portions of the IT department to outsourcers, that leaves business analysts and project managers as the primary job roles left for the internal IT department.
    5. The mobilization paradigm

    The computer revolution has put a PC on virtually every desk in the business world and in lots of other places where people work, from the sales counter to the warehouse to the patient exam room. While PCs still make sense on the desks of knowledge workers, for all of these other workers who regularly move around as part of their daily job, the stationary PC often changes the natural flow of their routine because they have to stop at a system to enter data or complete a task. That’s about to change.

    Mobile computers in the form of smartphones and touchscreen tablets (like the iPad) have taken a big leap forward in the past four years. They are instant-on, easy to learn because of the touchscreen, and they have a whole new ecosystem of applications designed for the touch experience. In the years ahead, we’re going to see more and more development done on these mobile platforms, which will untether workers from their stationary PCs and allow them to interact with people and products in much more natural ways.

    Researchers expose a pattern for white-collar crime

    White-collar crime typically unfolds in 12 steps involving positions of power, feelings of superiority, and the need for control, according to a new study.

    Researchers examined roughly 80 cases of white-collar crime all over the world and talked to dozens of the perpetrators to identify events that typically transpire during episodes of criminal behavior.

    Their research included many high-profile cases involving companies such as Hollinger International Inc., which formerly owned the Sun-Times Media Group, and Enron Corp.

    “I saw a pattern that repeated in cases where managers engaged in white-collar crime,” said researcher Carey Stevens, a Canadian clinical psychologist who has consulted for large corporations for over 15 years. “It was clear to me that the dynamic between individuals in the organization was a critical factor in these managers’ fall from grace.”

    Stevens collaborated with Canadian researchers Ruth McKay, a business professor at Carleton University in Ottawa, and Jae Fratzl, a workplace bullying expert and psychotherapist. Their paper will appear in the International Journal of Business Governance and Ethics.

    The researchers analyzed nearly 50 cases of white-collar crime through face-to-face discussions with employees of companies facing white-collar crime, incarcerated people who participated in white-collar crime and whistle-blowers. The researchers also examined 30 first-person accounts and court records involving other criminal cases, including cases that bilked Barings Bank, WorldCom Inc. and Enron Corp.

    The study proposes the following 12-step process of white-collar crime:

    1. Perpetrator is hired into a position of power
    2. Perpetrator develops a sense of superiority and engages in illegal activity
    3. Co-workers recognize perpetrator’s misconduct and become passive participants
    4. Passive participants recognize opportunity
    5. Passive participants reluctantly follow the perpetrator
    6. Perpetrator distrusts followers and feels the need for control
    7. Perpetrator senses his power over followers and is emboldened
    8. Perpetrator bullies followers and relishes his hold on them
    9. Perpetrator intensifies white-collar crimes as followers feel increasingly trapped
    10. Followers struggle with the conflict between their values and actions
    11. Perpetrator loses control as a whistle-blower steps forward
    12. Perpetrator denies or admits to wrongdoing and shows lack of remorse

    The researchers believe organizations could use this pattern to identify symptoms of white-collar crime and to prevent its occurrence.

    “The model really indicates the need for organizations to consider the relationships between those at the senior level of an organization and those at the controls of the organization,” Stevens said. “Maybe we need to question the paradigm of business where entitlement is an indicator of success. The financial crisis has made governments and financial institutions revisit the pay structures for executives and all organizations should be doing this.”

    Business academics, however, are divided in their assessment of the study, with some expressing concern regarding the methodology.

    “Rather than being described as a ‘study’ in the academic sense, I would call this an expression of ideas based on the authors’ observations and readings,” stated Neal Ashkanasy, a management professor at the University of Queensland in Brisbane, Australia.

    Bruce Gurd of the University of South Australia agreed in an email. “The article uses informal case evidence,” he noted. “This is very useful but academics often look at a large number of cases and come up with more generalizable findings.”

    McKay maintains that the study was only intended to explain the phenomenon of white-collar crime and that further research is being conducted to address these limitations.

    “The research was initially qualitative in nature,” she said. “Qualitative research is not designed to be vastly generalizable.”

    Still, Giles Burch of the University of Auckland in New Zealand stated that the study is “significant” as there is considerable interest in white-collar crime.

    “I think their 12-step process breaks down these kinds of behaviors into useful chunks, which help clarify what is going on,” he said.

    Profiling and categorizing cybercriminals

    Those “in the know” in law enforcement will tell you that criminal profiling is both an art and a science. It’s all about generalizations, but knowing what types of people generally commit specific types of criminal offenses can be very helpful in catching and prosecuting the perpetrator of a specific crime. That information can also be useful in protecting your digital assets from cybercriminals.
    A criminal profile is a psychological assessment made without knowing the identity of the criminal. It includes personality characteristics and can even include physical characteristics. “Fitting the profile” doesn’t mean a person committed the crime, but profiling helps narrow the field of suspects and may help exclude some persons from suspicion. Profilers use both statistical data (inductive profiling) and “common sense” testing of hypotheses (deductive profiling) to formulate profiles. Profiling is only one of many tools that can be used in an investigation.
    The typical cybercriminal

    What does profiling tell us about the “typical” cybercriminal - the person who uses computers and networks to commit crimes? There are always exceptions, but most cybercriminals display some or most of the following characteristics:

    * Some measure of technical knowledge (ranging from “script kiddies” who use others’ malicious code to very talented hackers).
    * Disregard for the law or rationalizations about why particular laws are invalid or should not apply to them.
    * High tolerance for risk or need for “thrill factor.”
    * “Control freak” nature, enjoyment in manipulating or “outsmarting” others.
    * A motive for committing the crime - monetary gain, strong emotions, political or religious beliefs, sexual impulses, or even just boredom or the desire for “a little fun.”

    That still leaves us with a very broad description, but we can use that last characteristic to narrow it down further. This is especially important since motive is generally considered to be an important element in building a criminal case (along with means and opportunity).
    Motives for cybercrime

    Let’s look at some common motivating factors:

    * Money: This includes anyone who makes a financial profit from the crime, whether it’s a bank employee who uses his computer access to divert funds from someone else’s account to his own, an outsider who hacks into a company database to steal identities that he can sell to other criminals, or a professional “hacker for hire” who’s paid by one company to steal the trade secrets of another. Almost anyone can be motivated by money - the young, old, male, female, those from all socio-economic classes - so in order to have meaningful data, we have to break this category down further. The white collar criminal tends to be very different from the seasoned scam artist or the professional “digital hit man.”
    * Emotion: The most destructive cybercriminals often act out of emotion, whether anger/rage, revenge, “love” or despair. This category includes spurned lovers or spouses/ex-spouses (cyber-stalking, terroristic threats, email harassment, unauthorized access), disgruntled or fired employees (defacement of company web sites, denial of service attacks, stealing or destroying company data, exposure of confidential company information), dissatisfied customers, feuding neighbors, students angry about a bad grade, and so forth. This can even be someone who gets mad over a heated discussion on a web board or in a social networking group.
    * Sexual impulses: Although related to emotion, this category is slightly different and includes some of the most violent of cybercriminals: serial rapists, sexual sadists (even serial killers) and pedophiles. Child pornographers can fit into this category or they may be merely exploiting the sexual impulses of others for profit, in which case they belong in the “money” category.
    * Politics/religion: This category is closely related to the “emotions” category, because people get very emotional about their political and religious beliefs and are willing to commit heinous crimes in the name of those beliefs. This is the most common motivator for cyberterrorists, but it motivates many lesser crimes as well.
    * “Just for fun”: This motivation applies to teenagers (or even younger) and others who may hack into networks, share copyrighted music/movies, deface web sites and so forth - not out of malicious intent or any financial benefit, but simply “because they can.” They may do it to prove their skills to their peers or to themselves, they may simply be curious, or they may see it as a game. Although they don’t intentionally do harm, their actions can cost companies money, cause individuals grief and tie up valuable law enforcement resources.

    How cybercriminals use the network

    Cybercriminals can use computers and networks as a tool of the crime or incidentally to the crime. Many of the crimes committed by cybercriminals could be committed without using computers and networks. For example, terroristic threats could be made over the telephone or via snail mail; embezzlers could steal company money out of the safe; con artists can come to the door and talk elderly individuals out of their savings in person.

    Even those crimes that seem unique to the computer age usually have counterparts in the pre-Internet era. Unauthorized access to a computer is technically different but not so different in mindset, motives and intent from unauthorized access to a vehicle, home or business office (a.k.a. burglary) and defacing a company’s web site is very similar in many ways to painting graffiti on that company’s front door.

    Computer networks have done for criminals the same thing they’ve done for legitimate computer users: they’ve made the job easier and more convenient.

    Some cybercriminals use the Internet to find their victims. This includes scam artists, serial killers and everything in between. Police can often thwart these types of crimes and trap the criminals by setting up sting operations in which they masquerade as the type of victim that appeals to the criminal. We think of this in relation to crimes such as child pornography and pedophilia, but it’s the same basic premise as setting up a honeypot on a network to attract the bad guys.

    In other cases, criminals use the networks for keeping records related to their crimes (such as a drug dealer’s or prostitute’s list of clients), or they use the technology to communicate with potential customers or their own colleagues-in-crime.

    Amazingly, a significant number of criminals use their own corporate laptops or email accounts to do this. This is a situation whereby IT professionals may stumble across evidence of a crime inadvertently - including crimes that are not, themselves, related to computers and networks.
    The cybercriminal mindset: white collar crime

    All cybercriminals are most definitely not created equal. They range from the pre-adolescent who downloads illegal songs without really realizing it’s a crime, to the desperate white collar worker in dire financial straits who downloads company secrets to sell to a competitor to pay her family’s medical bills, knowing full well that what she’s doing is wrong, to the cold-hearted sociopath who uses the network to get whatever he wants, whenever he wants it, and believes there’s no such thing as right or wrong.

    White collar crime is such a large category that some police agencies have entire investigative divisions devoted exclusively to it. White collar criminals often use computers to commit offenses because it’s easy to manipulate electronic databases to misappropriate money or other things of value. Some white collar criminals are highly organized and meticulous about details, stealing only limited amounts from any one source and may go on for years or decades without being caught. Others do it on impulse; for instance, they may be angry about a bad evaluation or being passed over for promotion and “strike” back at the company by taking money they believe they deserve.

    Signs of a possible white collar criminal include:

    * Refusal to take time off from work or let anyone else help with his/her job, lest they uncover what’s been going on.
    * Attempts to avoid formal audits.
    * A lifestyle far above what would be expected on the person’s salary with no good explanation for the extra income.
    * Large cash transactions.
    * Multiple bank accounts in different banks, especially banks in different cities or counties.

    There may be other reasons for any of these “symptoms.” Some older workers (and in today’s unstable banking climate, some younger ones, too) don’t trust banks, may be afraid of the collapse of the economic system and thus deal in cash as much as possible. Many folks with legitimate large incomes are afraid to invest in the stock market or other non-insured investments and split their money among different banks to keep it covered by FDIC.

    This article outlines some common patterns seen in white collar crime.

    A dilemma for IT personnel is that white collar criminals are often in upper management positions in the company. If you discover evidence that the boss is stealing from the company, blowing the whistle could put your own job in jeopardy.

    In a future installment of this column, we’ll discuss what you can do if you uncover indications of criminal activity during the course of doing your IT job, who to report it to and how, how to preserve the evidence, and what to expect in the aftermath.

    How do I allow Windows 7 users to run only specific applications?

    There are times and instances where you, as the administrator of a network or group of machines, only want the users to be able to run certain applications. Kiosk machines, library machines, educational machines, community machines - there are plenty of reasons for doing this and a few methods for achieving it. One of those methods is built into Microsoft Windows 7 (with the exception of Windows 7 Home) with the Group Policy Editor. This tool is powerful and offers numerous features including the ability to limit a user’s ability to run applications.

    With this method, a network administrator limits users to executing applications by file name. If you allow the name Firefox.exe, a user can execute any application named Firefox.exe. It will not stop a user from renaming ApplicationX.exe to Firefox.exe and running that, so this method presumes users won’t instinctively know, or be willing to figure out, how to get around basic name-based access control.
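    The weakness of a purely name-based allow list is easy to demonstrate. This Python sketch (the function and the allowed list are illustrative, not part of Windows) mimics the kind of check the policy performs:

```python
from pathlib import PureWindowsPath

# Illustrative allow list, mirroring what you might type into the
# "Run only specified Windows applications" dialog.
ALLOWED = {"firefox.exe", "winword.exe", "excel.exe"}

def is_allowed(path: str) -> bool:
    """Name-based check: only the file name is compared (case-insensitively);
    the file's actual contents are never examined."""
    return PureWindowsPath(path).name.lower() in ALLOWED

print(is_allowed(r"C:\Program Files\Mozilla Firefox\firefox.exe"))  # True
print(is_allowed(r"C:\Games\solitaire.exe"))                        # False
# A renamed binary passes the check, which is exactly the weakness
# described above: the policy sees only the name, not what the file is.
print(is_allowed(r"C:\Games\firefox.exe"))                          # True
```

    A hash- or signature-based restriction (for example, Software Restriction Policies with hash rules) avoids this particular bypass, at the cost of more maintenance.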


    Before undertaking this process, it might be wise to back up the folder C:\WINDOWS\system32 in case this configuration goes south. Should that happen, you can restore the backup and be back where you started. This backup method isn’t foolproof, but it sure beats winding up with a system that cannot start any applications.
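    As a sketch of that precaution, a folder tree can be copied with Python’s standard library (the paths shown are examples only, and a full copy of system32 will need a few gigabytes of free space):

```python
import shutil
from pathlib import Path

def backup_folder(src: str, dest: str) -> int:
    """Recursively copy src to dest and return the number of files copied.
    dirs_exist_ok requires Python 3.8 or newer."""
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return sum(1 for p in Path(dest).rglob("*") if p.is_file())

# Example call; adjust the paths for your own system:
# backup_folder(r"C:\WINDOWS\system32", r"D:\backups\system32")
```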

    So, with that said, this How do I document will walk you through the process of allowing users to execute only specific applications using the built-in Group Policy Editor of Windows 7.

    This blog post is also available in the PDF format in a TechRepublic Download.

    Step 1

    The first thing you must do is open up the Group Policy Editor. You won’t find a menu entry for this tool. Instead you start the tool by clicking the Start menu and then entering the command gpedit.msc. When this tool opens up you will find yourself looking at a dual-paned window that looks deceptively simple to use (Figure A).


    Figure A

    There are quite a few settings that can be tweaked in this tool. I wouldn’t advise toying with any of these settings unless you know what you are doing.

    Step 2

    The next step is to navigate to the correct location of the configuration option we want to change. This is to be found in the following path:

    User Configuration | Administrative Templates | System

    When you navigate to that path you will want to click on the System entry to reveal the available settings in the right pane (Figure B).

    Figure B


    Scroll down in the right pane until you see the entry for Run only specified Windows applications.

    Step 3

    Double-click the entry for “Run only specified Windows applications” to open the preferences for this setting. When this is opened (Figure C) you will need to first make sure Enabled is selected. Once you have done that, the Show button will become available.

    Figure C

    You can add comments in this window to keep track of when this was set up and why. Documentation and tracking are always important for when the configuration is later questioned.

    Step 4

    The next step is to click the Show button which will open up a small window where you can enter the allowed applications (Figure D). In this window you will add, one per line, the executable file name (including extension) for each of the applications you want the users to be allowed to execute.


    Figure D

    Make sure your listing is thorough so your users are able to start all the applications they need for work; otherwise you’ll be revisiting this window to add more mission-critical applications.

    Once you have completed your list of allowed applications click the OK button and then click OK on the remaining windows to dismiss them. Once these windows are gone, you have completed this task.
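    For reference, this per-user policy is stored in the registry. As a sketch (the application names here are examples; verify the exact keys on your own system before relying on them), the equivalent values look roughly like this:

```reg
Windows Registry Editor Version 5.00

; Illustrative example: allow only Firefox and Notepad.
; The policy lives under the per-user Explorer policies key.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"RestrictRun"=dword:00000001

; Allowed executables are numbered string values under a RestrictRun subkey.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\RestrictRun]
"1"="firefox.exe"
"2"="notepad.exe"
```

    Knowing where the setting lands can help when troubleshooting a machine where the policy appears stuck.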

    After this is set up, when a user attempts to launch an application that is not on the allowed list, they will receive a warning that states “The operation has been cancelled due to restrictions in effect on this computer. Please contact your system administrator.”

    Final thoughts

    It’s not a perfect system, and on a system with savvy users it’s fairly easy to get around. But for basic purposes it will stop most average users from launching anything not on the allowed list. Also note that this method does not disable applications that are system processes. So you won’t stop everyone using this method, but you will stop plenty of users from launching applications you don’t want them to launch.

    Disable Windows Server 2008's PnP-X and Port Sharing services via Group Policy

    When a new operating system becomes available, one of the first things I look at is what is new on the basic components of the server; for example, I did this for scheduled tasks on Windows Server 2008 (including R2). One area that I look at most carefully is the Windows Services inventory. Two services that caught my eye in Windows Server 2008 are the PnP-X IP Bus Enumerator and Net.TCP Port Sharing services.

    The PnP-X IP Bus Enumerator service, which first appeared in Windows Vista, connects devices such as printers over the network through Plug and Play Extensions. It uses Simple Service Discovery Protocol (SSDP) and WS-Discovery to provide an abstraction layer between the network and the devices. These two discovery methods use network protocols that most administrators may not want running in the client or server spaces.

    The Net.TCP Port Sharing service is described as a user-mode mechanism that accepts net.tcp:// connections on behalf of multiple processes. The service inspects each incoming transmission and forwards it to the destination application. In terms of managing security and traffic flow, I can’t imagine administrators liking this capability. This MSDN blog post explains Net.TCP Port Sharing and its use case, but in favor of keeping network traffic at face value, I’d opt to disable the service.

    To disable these services for computer accounts, navigate in Group Policy to the Computer Configuration | Policies | Windows Settings | Security Settings | System Services area of the Group Policy Management Editor. Figure A shows this for a Windows Server 2008 R2 domain.


    Figure A


    PnP-X IP Bus Enumerator and Net.TCP Port Sharing are disabled by default on Windows Server 2008 installations, but the default alone doesn’t keep Windows 7, Windows Vista, or server-side programs from utilizing these services by changing the startup type; disabling them through Group Policy enforces the setting.

    Do you go through the extra effort to implement this type of protection for services that you want to prohibit even if you don’t foresee using them? Let us know in the discussion.

    How much security is enough?

    Security professionals have an interesting job. They manage existing security controls while eternally looking for gaps in their organization’s defenses. Recognized gaps often result in analysts running to the favored security vendor for another application, appliance, or service. But is this the best approach financially or “defensively”?  Probably not.

    The challenge

    Many organizations approach security like players of Whack-a-Mole.  (See Figure A, 360digest.com.)  Putting basic security controls in place, they wait for the next emerging threat and whack it with a virtual mallet.  This process continues without end and without any real protection strategy. The one-off approach may protect the organization, but related costs are probably much higher than necessary. In addition, integrating new business solutions simply expands the number of holes from which moles might stick out their little furry heads.


    Figure A

    Management of disparate security controls, implemented in answer to new threats or regulatory requirements, requires a large number of soft dollars. These often redundant and “un-unified” components may actually create holes in the organization’s defenses, making it harder to protect information assets and to deploy new business solutions.

    This is the way the security profession began its tenure. However, business managers are tiring of what they see as nickel-and-diming the IT budget. So it is past time to manage security objectives via a security architecture. The architecture must support other enterprise business frameworks intended to achieve business objectives. By building all solutions within these architectures, we are assured of a “secure enough” environment, flexible enough to safely accommodate new or existing business systems.

    The solution

    Figure B depicts a set of enterprise architectures designed to achieve expected business results.  First come the outcomes based on a clearly stated business strategy.  The outcomes are then translated into an information architecture that actually defines the business.  Network and system architectures are designed to support processes that create, massage, and deliver information relevant to management and operational teams.  Security’s role is to continuously enable safe and available operation of components built within these architectures.


    Figure B

    The enabling goals of security do not play well with the Whack-a-Mole approach.  What is needed is a well-defined, documented, and manageable security architecture.  Using it, IT teams build systems and network segments with security built-in.  Required is what Novell’s Jay Roxe calls a unified framework.

    According to Roxe,

    The first step is to get away from the ad-hoc, piecemeal approach to regulatory compliance and risk management. That’s achieved best by initially taking inventory of all of the organization’s risk management processes and controls, and then making sure they are clearly defined and documented. As there may be hundreds to thousands of controls and policies in place, depending on the size of the organization, this could take some time and effort. But, it’s well worth it (ZDNet, 2010).


    This step results in a list of all controls currently in place, their function, and a view of existing gaps and redundancies. We performed this task at ManorCare several years ago. We used a spreadsheet like the one shown in Figure C.

    Figure C

    The Layer/Required Control column represented the expectations surrounding the new security strategy we’d written.  The strategy was based on a combination of COBIT, HIPAA, and ISO/IEC 27002 considerations, encompassing all data and system protection objectives. The purpose of the strategy was to define and document an architecture that:


    • Met regulatory requirements
    • Protected employee data
    • Protected confidential business information
    • Ensured availability of critical processes in accordance with management expectations
    • Enabled the safe delivery of data anytime, anywhere
    • Allowed us to pass both internal and external audits

    Subsequent columns showed how each existing security solution met or did not meet each control requirement.  The process took about 60 days, but the results were immediate.


    Using the matrix, we eliminated several controls due to redundancy or by enabling unused functionality on other controls. We continued to use the matrix to identify our risk every time a new threat was identified.  In most cases, we found we could simply modify configurations on existing software or hardware to adapt to the new challenges. (For more information about how we used the matrix, see, Use a security controls matrix to justify controls and reduce costs.)
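    A toy version of such a controls matrix can even be expressed in a few lines of Python (the control and product names below are invented for illustration), which shows how gaps and redundancies fall out of the data almost for free:

```python
# Rows are required controls; values are the products claimed to meet them.
# All names are invented; a real matrix would come from your own strategy.
matrix = {
    "Perimeter firewall":     ["Vendor A appliance"],
    "Email malware scanning": ["Vendor B gateway", "Vendor C endpoint"],
    "Log review":             [],
    "Full-disk encryption":   ["Vendor C endpoint"],
}

# A gap is a required control with nothing covering it; a redundancy is a
# control covered by more than one product.
gaps = [control for control, products in matrix.items() if not products]
redundant = [control for control, products in matrix.items() if len(products) > 1]

print("Gaps:", gaps)
print("Redundant:", redundant)
```

    A spreadsheet does the same job, of course; the point is that once the inventory exists in any structured form, the gap and redundancy questions answer themselves.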

    So the combination of a strategy and a managed controls matrix reduced costs, communicated how systems and network components fit into the enabling process, and allowed flexible responses to emerging threats.  We stopped reacting and began to proactively build a prevent, detect, contain, and respond framework to meet every contingency.

    The final word

    We recognized earlier than many security organizations that our ability to run up a red flag and automatically get budget dollars was waning. I won’t say that our approach was perfect, but it provided a foundation upon which we improved over time. The end result was the ability to react to business needs while methodically and proactively addressing external and internal security challenges at a reasonable cost.

    Facebook and personal brand suicide: What you can do to prevent it

    If you have some extra time and would like a front seat to some major Facebook fails, check out lamebook.com. Some of the content can be pretty tasteless, but if anything can drive home the point of “Facebook post” remorse, it’s this site.

    I searched under “workplace” and found an example of a guy who was griping about his boss and jokingly said he’d like to “stab her in the jugular.” This was followed soon by his next post, which was, “Does anyone know of any job openings?” Apparently someone told his boss about the post and she wasn’t pleased.

    Since Facebook changes its security policies about every 3 seconds, your worries are not limited just to someone seeing a post and telling someone else. In fact, personal pages on sites like LinkedIn, Twitter, and Facebook rank pretty high in Google searches. So that means that a potential employer who’s looking at your resume could also be finding out via social networking about your membership in that Neo-Nazi Glee Club even though your account settings are “Friends only.” (Studies show that 78% of recruiters use Google to research job candidates and social networking sites rank very highly on Google.)

    And don’t forget that Facebook is like “Six Degrees of Separation” on Barry Bonds-grade steroids. Even if you make a comment on someone else’s page, THAT person’s friends will see it, and then every friend of those friends will see it, etc.

    My opinion is you shouldn’t have to totally homogenize yourself on your own Facebook page just to make sure you don’t tick off any present or future employers. However, just be aware of repercussions and decide whether they’re important enough to take heed of. In other words, there’s nothing scandalous about putting up a photo of yourself proudly holding the blue ribbon for an international beer-drinking contest, but don’t expect MADD to come knocking at your door to offer you a job.

    But let’s say that you just can’t control the impulse to self-sabotage on a social networking scale. Google recently launched a tool called google.com/profiles that lets you create your profile and actually direct what appears when someone conducts a search on your name. You can choose to link your name with “safe” urls and employment information. Then maybe your association with Do IT Yourself Bombs Inc. might not be so easy to expose.
    Get a line on your online rep

    So how can you find out what others see when they’re looking for the scoop on you? The first avenue, of course, is to Google yourself. If you don’t like what you see, start posting keyword-rich content that will push some of the bad stuff off the radar. Write a blog about your technical specialty. Post frequently (but tastefully) to discussion forums of sites you use for your work, like this one!

    If you haven’t already, you should sign up for Google Alerts. That way you can see where your name comes up in other areas, like in that piece for an online magazine where your bitter ex falsely accuses you of being a fan of the Jonas Brothers. These alerts can be sent to you once a week, once a day, or whenever your name is mentioned online.

    There are also some resources for finding out where your username crops up in Twitter land. One is a desktop client for Twitter called tWhirl. You can download it here. And TweetGrid is a powerful Twitter search dashboard that allows you to search nine different topics, including your name, in real-time.

    Network admins must beware of Stuxnet: A SCADA System worm

    Sometimes with mind-numbing frequency, patches and security advisories from Microsoft, Adobe, and Apple compete for an ever-increasing amount of attention from administrators. Little wonder, then, that most will have greeted with a mild yawn the latest announcement of another zero-day attack — this one named the “Stuxnet Attack.” Just as I was about to file this latest message under “Priority - To Be Reviewed,” the sender’s name jarred me to attention: Managing Automation.

    Managing Automation is a periodical with a healthy web presence that tends to cover topics from the supply chain, manufacturing, process control, and product lifecycle management. Over the past five years or more, the editorial focus has branched out to cover additional topics more familiar to network administrators: e.g., security event management for industrial systems, defenses against industrial espionage, etc. Despite this new coverage area, Managing Automation topics are rarely vehicles for malware notification. It was noteworthy, then, to see author Chris Chiappinelli’s story begin with:

    Manufacturers worldwide have been put on notice that an insidious virus targeting supervisory control and data acquisition (SCADA) systems is on the loose.

    The targets of the malware are Siemens’ SIMATIC WinCC and PCS7 software, integral components of the distributed control and SCADA systems that facilitate production operations in many process manufacturing companies…

    Those not in the manufacturing and process engineering fields may be unaware of Siemens SIMATIC and PCS7 software. How important was this emerging threat, in a field rife with worries that are sometimes alarmist and self-serving? Important. This time there is legitimate cause for concern.

    Wired’s Kim Zetter wrote in a post the same day as the Managing Automation announcement that “the emergence of malware targeting a SCADA system is a new and potentially ominous development for critical infrastructure protection.” Network World’s Ms. Smith quotes F-Secure’s warning that the vulnerability poses “a risk of virus epidemic at the current moment.” Finally, it may be standard lingo for such announcements, but Microsoft’s July 16th announcement of Security Advisory 2286198 advised customers to visit Microsoft’s general support portal and to “contact the national law enforcement agency in their country.”

    All of this was more than enough to get my attention.

    While SCADA systems are often not regularly connected to the Internet, they are networked and are subject to the usual array of vulnerabilities. (Promotional web copy for the Siemens product that is the target of this attack explicitly mentions Ethernet switches and wireless LANs.) Public officials such as Richard Clarke have warned about risks to SCADA systems, but there have been few examples to rally the troops. While the particular vulnerability — a hard-coded password allowing access to the Siemens software’s back-end database — is not especially remarkable (though it does both date the software and call into question software quality review processes at Siemens), the malware packs a punch.

    Thought to spread mainly by USB stick, or possibly by network shares, it cannot be defeated by simply turning off Windows autorun; simply viewing an infected file system will install the malware. A security specialist at Tofino believes that this zero-day attack, which affects all versions of Windows, may have been in the wild for a month or more. Preliminary assessments indicate that the malware does not appear designed to cripple infrastructure, but rather to steal information from SIMATIC WinCC / PCS7 implementations — i.e., some form of industrial espionage. Of course that espionage could later be used to wreak havoc on these same or similarly configured systems.

    Recent press and analyst coverage has addressed both the threats to SCADA networks and the broader Windows vulnerability the worm uses to spread (it exploits code that interprets Windows shortcuts, i.e., .lnk files). As Microsoft noted in its analysis of the exploit, which has been named the “Stuxnet” threat, this is a new method of propagation that leverages a flaw in the way the Windows Shell “parses shortcuts.” The underlying vulnerability has been cataloged as CVE-2010-2568 at Mitre’s CVE. For its part, Microsoft has proposed a workaround of sorts and updated its own detection engines.
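    Patching and Microsoft’s workaround are the real mitigations, but a quick inventory of shortcut files on removable media can be a useful triage step. Here is a minimal Python sketch (the scan root is an example; this only lists .lnk files and cannot distinguish a malicious shortcut from a benign one):

```python
from pathlib import Path

def find_shortcuts(root: str) -> list:
    """Return the paths of all .lnk files under root, sorted.
    An inventory only: it does not judge whether a shortcut is malicious."""
    return sorted(str(p) for p in Path(root).rglob("*.lnk"))

# Example: inventory a mounted USB stick before browsing it in Explorer.
# find_shortcuts("E:/")
```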
    There’s more

    As if that wasn’t enough, the attack also involved theft of a signed Verisign digital certificate owned by Realtek Semiconductor. This certificate was used to authenticate drivers needed by Stuxnet when it self-installs, though Microsoft has since persuaded Verisign and Realtek to revoke the certificate. This was the icing on the trojan’s cake.
    The Dependency Syndrome

    What does all this mean? One lesson — not new, but one borne out by this incident — is that the Internet-centric orientation of most malware models could miss certain types of threats. SCADA vulnerabilities are just that sort of threat. And while infections might not spread directly from them to general purpose networks, those general purpose networks depend upon SCADA systems for connectivity, power — and even human habitability. The “Dependency Syndrome” asserts that connections between traditional networks such as those managed every day by network administrators, and nontraditional networks such as those hosting SIMATIC WinCC / PCS7, will sooner or later be impossible to detect — and defend against.

    Once-great companies drift due to leaders' styles

    In the tech manufacturing/development sectors, there are only a few companies that everyone knows. And when I say everyone, I don’t just mean insiders, pundits, media types, or bankers - I’m talking about consumers and even people who have no interest in the products or services the companies provide.

    This list isn’t long: Google, Apple, Microsoft, Sony, Amazon. Perhaps it could also include Nintendo, Priceline, Samsung, Intel, or Blackberry (RIMM). But my point is this: Despite their size, most tech organizations are not brands that are commonly known by a wide spectrum of the population.

    So it’s worth looking at why a couple of once-great organizations are no longer great. I believe it has everything to do with leadership. Consider:

    1. Microsoft - Once the unquestionable king of the industry. When Bill Gates was the boss, everyone knew everything about what it was doing. Print and TV news heralded each new development and product. People across a wide swath of industries waited for Gates’ pronouncements with bated breath. Competitors seemed destined to follow companies like Wang computer and Lotus software in Microsoft’s wake. Then Gates stepped back and Steve Ballmer took the lead in January 2000. Ever since, Microsoft has seemed incapable of creating the kind of great, exciting, newsworthy products it was known for. I mean the kind that helps an organization to grow more quickly - or gain the hearts and minds of new consumers.

    2. Sony - For different reasons it, too, was once the most exciting organization to watch. When new products were launched, everyone from insider pundits to the chronically hip wanted them. It was the “Apple” of its time. Music devices, televisions, and many other electronic products made by Sony were always leading edge and premium quality. Then, in 2005, after a tough period of missed opportunities, Howard Stringer became the head honcho. Although the company is now in more business segments (such as movies and games) than it was during its heyday, it’s clearly no longer the one to watch for newness, style leadership, or even leading edge technology.

    Leadership, at its best, elicits emotion. Great leaders inspire people to be great and do more. At its worst, leadership can be bureaucratic, pedantic, and selfish.

    During demanding times, organizations look to the boss to help them to get ahead of the curve. Bosses can do this in many ways of course; but one of the best ways is to focus everyone on those things that the company does best. And then do more of the same. This helps rebuild a company’s reputation. It re-creates the team’s sense of pride; kind of, “We’re going to show everyone that we can fix this situation and be great again.” Pride, combined with some challenge, encourages everyone involved to go a little further each day.

    But, for some leaders this can be hard. It can take time to show results.

    Consequently, many bosses take different approaches. Some will fix their eyes firmly on the stock market, planning to show investors “progress” by resorting to acquisitions or mergers. And, while this tactic may make the company bigger, it rarely makes the company as relevant or important again. Others go “internal.” These CEOs spend their time trying to reduce costs, perhaps encouraging their engineers or developers to piggyback onto older products and push new efficiencies. These are the ones we see frequently on all the financial networks talking up efficiency.

    But ultimately both approaches will do little to return the company to the top. It will be interesting to watch how the current leader, Google, now managed by Eric Schmidt more than the founders, will move into the future.

    Firefox is breaking my heart...sort of

    Today I went to download the beta version of Firefox 4 and had my heart broken by a few disconcerting issues. One of these issues surprises me and the other issue REALLY surprises me…and then there’s another issue that doesn’t really surprise me.

    Say what?

    Let me preface this, before I dig my grave too deep. Although Google Chrome has been my default browser for a while now, I still like (and rely upon) Firefox. I have used Firefox since back when it had a name other than Firefox. And prior to Firefox, I used Mozilla…so I’ve been in the family for quite some time. Here endeth the preface.

    For the longest time I used Firefox (over Konqueror and Opera) simply because it was ahead of the curve in just about every aspect. But then this newcomer comes around and quietly blows the other browsers (including Firefox) out of the water. This new browser? Google Chrome. In every way Chrome is faster than Firefox. I can’t say it is 100% better than Firefox because there are certain sites (like any site I use that has Xoops as a content management system) that don’t like Chrome so well. For those sites I just head on back to Firefox and wait (and wait, and wait) until it opens (all the while thinking I’d be done with the article by now if I could use Chrome on this site!)

    So anyway…

    Here comes Firefox 4. I do have high hopes for this browser. After all, it promises such updates as:

    * New tab location.
    * New add-ons manager.
    * WebM and HD support.
    * Better security.
    * HTML5 support.
    * Websockets.
    * CSS3 support.
    * Crash protection.

    and more. But what is really interesting is the first feature that Firefox is promoting — the “new look.” This “new look” (with the tabs above the menus) looks suspiciously like Google Chrome. So much so that when I first saw this new look I thought it WAS Chrome.

    That “sort of” surprised me. But I’m not quite sure I get why they are promoting this new look as one of the better features. Yet…that’s their big PR push. NEW TAB LOCATION! Woohoo, we can be Chrome too!

    Now, this brings me to the issue that really surprises me. Firefox has started to follow other development teams in pushing features before reliability. I can still see memory issues when Firefox is left open for longer periods of time. I do realize they are still working on these types of issues with the Crash Protection, but why not resolve the serious memory leaks before you move on to bigger and bolder things? All of these new features only serve one purpose - bloat. This is not the Firefox the world needs. The world needs a secure, reliable Firefox to combat the insecure, unreliable Internet Explorer.

    This brings me to the final issue which, sadly enough, doesn’t really surprise me all that much. If you look at the new features of Firefox you quickly see that many of these new features are currently only working with the Windows release. Really? Linux has all of a sudden become second string on the Firefox user list? Correct me if I’m wrong, but would Firefox really be where it is if it were not for the Linux operating system and community? While Firefox was struggling to gain any foothold in the world of Windows, it was practically the only show in town for the Linux crowd. And the Linux community embraced Firefox. Happily we said, “We will be your chosen users and bring you into the spotlight again!” And now…this is how the Linux community is repaid? By getting features AFTER the Windows crowd.

    Let me put it to you another way. Mozilla is looking for beta testers to kick the tires of these new features and report back bugs. Do you think Windows users are more apt to report bugs than Linux users? I think not. Linux users are practically born and bred to report bugs. It’s part and parcel of the very heart of open source! So why not continue on with your faithful user base - the one you know will give back what you need? No? You want to give Windows the new features first because there are more users? And more users = more bug reports? Sorry, I call fallacy here.

    Well…there you have it. I see the landscape of Firefox is changing. Feature creep, forsaking its first love, and trying to be someone it’s not. Well, Firefox…I guess it’s time we break up. It’s been a long relationship but it seems I have someone else who, at least on the surface, seems to care more about me than you. I could be wrong. I’ve been wrong before (and I’ll be wrong again). But you seem to be more in love with Windows than you are me now (“me” being Linux, not the literal “me” - that would be creepy).
    Truthfully, however, we’ll see what comes out in the wash. I’ve been trying the Firefox 4 beta in both Ubuntu and Fedora and, well, to be honest, I haven’t noticed much of a difference. Why? Because the new features aren’t as prevalent as they would be if I were using Windows. It’s a good browser…although still not nearly as fast as Chrome. I’m sure Firefox 4 will be a solid browser. Will it be faster than Chrome? I doubt it. Will all of its new sleek, shiny features translate from Windows to Linux and back again? I doubt it. I guess only time will tell. Until then, Firefox will just keep breaking my heart.

    Monday, July 19, 2010

    Snack time with the new iGoogle for Android and iPhone

    (cross-posted with the Google Mobile Blog)

    We like iGoogle because it lets us "snack" on interesting information all day long. We can read a little bit of news here and there, glance at finance portfolios, take a look at the weather forecast, and then do a Google search. It doesn't require a big commitment of time and energy — it's simply there for us whenever we need it. This kind of availability is even more important on a phone, where it can take a long time to surf. That's why iGoogle is so convenient on mobile devices. When you're waiting in line, you can check iGoogle on your phone for a quick "info snack" — even in areas with mediocre network coverage.

    But speed isn't everything. Many of you have told us that you wanted to use more of your iGoogle gadgets on your phone. You wanted to see your tabs, too. We read your blog comments and forum posts and put your requests at the top of our to-do list.

    Today, we're excited to roll out an improved beta version of iGoogle for the iPhone and Android-powered devices. This new version is faster and easier to use. It supports tabs as well as more of your favorite gadgets, including those built by third-party developers. Note that not all gadgets — like those with Flash — will work in mobile browsers.

    One of our favorite new features is the in-line display of articles for feed-based gadgets. That means you can read article summaries without leaving the page. You can also rearrange gadget order or keep your favorite gadgets open for your next visit. None of these changes will mess up the layout of gadgets on your desktop computer, so feel free to play around and tune your mobile experience. 



    The new version of iGoogle for mobile is available in 38 languages. To try it out, go to igoogle.com in your mobile browser and tap "Try the new Mobile iGoogle". Bookmark the page or make it your home page so you can return to it quickly. Finally, please fill out our survey by clicking on the "Tell us what you think" link at the top of the new home page. We'll continue to use your feedback to make iGoogle even better.

    Let's make the web faster


    From building data centers in different parts of the world to designing highly efficient user interfaces, we at Google always strive to make our services faster. We focus on speed as a key requirement in product and infrastructure development, because our research indicates that people prefer faster, more responsive apps. Over the years, through continuous experimentation, we've identified some performance best practices that we'd like to share with the web community on code.google.com/speed, a new site for web developers, with tutorials, tips and performance tools.

    We are excited to discuss what we've learned about web performance with the Internet community. However, to optimize the speed of web applications and make browsing the web as fast as turning the pages of a magazine, we need to work together as a community to tackle some larger challenges that keep the web slow and prevent it from delivering its full potential:
    • Many of the protocols and standards that power the Internet and the web were developed when broadband and rich interactive web apps were in their infancy. Networks have become much faster in the past 20 years, and by collaborating to update technologies such as HTML and TCP/IP we can create a better web experience for everyone. A great example of the community working together is HTML5. With HTML5 features such as AppCache, developers are now able to write JavaScript-heavy web apps that run instantly and work and feel like desktop applications.
    • In the last decade, we have seen close to a 100x improvement in JavaScript speed. Browser developers and the communities around them need to maintain this recent focus on performance improvement in order for the browser to become the platform of choice for more feature-rich and computationally-complex applications.
    • Many websites can become faster with little effort, and collective attention to performance can speed up the entire web. Tools such as Yahoo!'s YSlow and our own recently launched Page Speed help web developers create faster, more responsive web apps. As a community, we need to invest further in developing a new generation of tools for performance measurement, diagnostics, and optimization that work at the click of a button.
    • While there are now more than 400 million broadband subscribers worldwide, broadband penetration is still relatively low in many areas of the world. Steps have been taken to bring the benefits of broadband to more people, such as the FCC's decision to open up the white spaces spectrum, for which the Internet community, including Google, was a strong champion. Bringing the benefits of cheap reliable broadband access around the world should be one of the primary goals of our industry.
    To find out what Googlers think about making the web faster, see the video below. If you have ideas on how to speed up the web, please share them with the rest of the community. Let's all work together to make the web faster!


    Introducing Google Public DNS

    When you type www.wikipedia.org into your browser's address bar, you expect nothing less than to be taken to Wikipedia. Chances are you're not giving much thought to the work being done in the background by the Domain Name System, or DNS.

    Today, as part of our ongoing effort to make the web faster, we're launching our own public DNS resolver called Google Public DNS, and we invite you to try it out.

    Most of us aren't familiar with DNS because it's often handled automatically by our Internet Service Provider (ISP), but it provides an essential function for the web. You could think of it as the switchboard of the Internet, converting easy-to-remember domain names — e.g., www.google.com — into the unique Internet Protocol (IP) numbers — e.g., 74.125.45.100 — that computers use to communicate with one another.

    The average Internet user ends up performing hundreds of DNS lookups each day, and some complex pages require multiple DNS lookups before they start loading. This can slow down the browsing experience. Our research has shown that speed matters to Internet users, so over the past several months our engineers have been working to make improvements to our public DNS resolver to make users' web-surfing experiences faster, safer and more reliable. You can read about the specific technical improvements we've made in our product documentation and get installation instructions from our product website.
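    To see why those repeated lookups add up, here is a purely illustrative Python sketch of the "switchboard" idea with a cache in front of it. The name-to-IP table and class are hypothetical; a real resolver queries name servers over the network rather than a local dictionary:

```python
# Toy model of DNS as a switchboard: names map to IP addresses, and a
# cache means only the first lookup for a name pays the slow round-trip.
# RECORDS is a made-up table for illustration only.

RECORDS = {
    "www.google.com": "74.125.45.100",
    "www.wikipedia.org": "208.80.152.2",
}

class CachingResolver:
    def __init__(self):
        self.cache = {}
        self.network_lookups = 0  # counts the expensive round-trips

    def resolve(self, name):
        if name in self.cache:
            return self.cache[name]   # fast path: answered locally
        self.network_lookups += 1     # slow path: "ask the network"
        ip = RECORDS[name]
        self.cache[name] = ip
        return ip

resolver = CachingResolver()
for _ in range(3):
    resolver.resolve("www.google.com")
print(resolver.network_lookups)  # -> 1: repeat lookups hit the cache
```

A page that triggers dozens of lookups benefits the same way: a faster, better-caching resolver trims a little latency off nearly every new connection.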

    If you're web-savvy and comfortable with changing your network settings, check out the Google Code Blog for detailed instructions and more information on how to set up Google Public DNS on your computer or router.

    As people begin to use Google Public DNS, we plan to share what we learn with the broader web community and other DNS providers, to improve the browsing experience for Internet users globally. The goal of Google Public DNS is to benefit users worldwide while also helping the tens of thousands of DNS resolvers improve their services, ultimately making the web faster for everyone.

    Use Chrome like a pro

    This week I sent a note to Googlers about some of the Chrome team's favorite extensions. So many of them asked if they could share the list with people outside the company that I thought I would just do it for them. Here it is. We're proud of the Chrome browser and the great extensions that its developer community has created, and we hope you enjoy them! They can all be found at chrome.google.com/extensions.
    • Opinion Cloud: Summarizes comments on YouTube videos and Flickr photos to provide an overview of the crowd’s overall opinion.
    • Google Voice: All sorts of helpful Voice features directly from the browser. See how many messages you have, initiate calls and texts, or call numbers on a site by clicking on them.
    • AutoPager: Automatically loads the next page of a site. You can just scroll down instead of having to click to the next page.
    • Turn Off the Lights: Fades the page to improve the video-watching experience.
    • Google Dictionary: Double-click any word to see its definition, or click on the icon in the address bar to look up any word.
    • After the Deadline: Checks spelling, style, and grammar on your emails, blog, tweets, etc.
    • Invisible Hand: Does a quick price check and lets you know if the product you are looking at is available at a lower price elsewhere.
    • Secbrowsing: Checks that your plug-ins (e.g. Java, Flash) are up to date.
    • Tineye: Image search utility to find exact matches (including cropped, edited, or re-sized images).
    • Slideshow: Turns photo sites such as Flickr, Picasa, Facebook, and Google Images into slideshows.
    • Google Docs/PDF Viewer: Automatically previews pdfs, powerpoint presentations, and other documents in Google Docs Viewer.
    • Readability: Reformat the page into a single column of text.
    • Chromed Bird: A nice Twitter viewing extension.
    • Feedsquares: Cool way of viewing your feeds via Google Reader.
    • ScribeFire: Full-featured blog editor that lets you easily post to any of your blogs.
    • Note Anywhere: Digital post-it notes that can be pasted and saved on any webpage.
    • Instant Messaging Notifier: IM on multiple clients.
    • Remember the Milk: The popular to-do app.
    • Extension.fm: Turns the web into a music library.

    The most World Cup-crazy countries


    Last weekend, Spain won the 2010 World Cup. For the month leading up to the final, Googlers joined the world in cheering for their favorite teams. Around our campus, games were watched on computer screens and on cafe video screens. Code went unwritten. Emails went unanswered.

    Throughout the world, real life also slowed during World Cup matches. Which teams had the most loyal fans? Which game captured the attention of the world the most? To answer these questions, we looked at Google query volumes. People search using Google day and night—except for football fans when a game is on.

    These graphs show the volume of Google queries for some of the World Cup matches:


    On June 15, as Brazil played its first game against North Korea, the volume of queries from Brazil, shown using a red line, plummeted when the match began, spiked during halftime, fell again and then quickly rose after the match finished.


    Queries from Spain during its June 25 game against Chile also decreased during the game, except during halftime. After some post-game querying, Spaniards went to sleep and queries dropped again.

    To measure which country has the most loyal fans, we computed the proportional drop in queries during each of its team’s matches compared with normal query volume. Brazil topped the charts with queries from that country dropping by half during its football games. Football powerhouse and third-place winner Germany came in second, followed by the Netherlands and South Korea.
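    The loyalty metric described above is simple to compute: the fraction by which query volume fell during a match, relative to normal volume. A minimal Python sketch (the query counts below are invented purely to illustrate the ranking):

```python
# Proportional drop in query volume during a match:
#   drop = 1 - during_match / normal
# The volumes below are hypothetical examples, not real data.

def query_drop(normal, during_match):
    """Fraction by which queries fell during the match."""
    return 1 - during_match / normal

# (normal volume, volume during a match) per country -- made-up numbers:
volumes = {
    "Brazil": (1000, 500),      # ~50% drop, matching the post's description
    "Germany": (1000, 560),
    "Netherlands": (1000, 620),
    "South Korea": (1000, 650),
}

ranked = sorted(volumes, key=lambda c: query_drop(*volumes[c]), reverse=True)
print(ranked[0], query_drop(*volumes["Brazil"]))  # -> Brazil 0.5
```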


    In fourth place, South Koreans were remarkably loyal even though some games began at 3:30am Seoul time. Japan, Australia and New Zealand, also affected by time-zone differences, expressed much less interest. A few countries searched more, not less. But only Honduras and North Korea increased significantly.

    During the knockout rounds, each match’s losing team is eliminated from the tournament. As fewer and fewer teams remain, we expected increased worldwide interest in each remaining game. Unsurprisingly, worldwide queries slowed the most during the final game between the Netherlands and Spain, but the round-of-16 Germany v. England game had the second largest query decrease. Semi-finals and quarter-finals were all popular except for semi-final Uruguay v. Netherlands, during which queries actually increased.


    In Latin American countries, search volume dropped more steeply leading into and out of matches while, in Europe, searches ramped down and up more gradually. Of course, for games that went into extra time and penalty shootouts the drops deepened the longer the match went on, including Paraguay v. Japan, Netherlands v. Spain, and Uruguay v. Ghana as seen here:


    Finally, no blog post about the World Cup would be complete without a look at what did drive people to search—after the final match, of course. Although he won neither the Golden Boot (for the most World Cup goals) nor the Golden Ball (for best player) last weekend, Spain’s David Villa is winning in search compared to the recipients of those two honors—Germany’s Thomas Müller and Uruguay’s Diego Forlán—and Dutch midfielder Wesley Sneijder. All of these men competed for the Golden Boot with five goals apiece.

    Similar to when Carles Puyol headed in the single goal that put Spain in the final, people flocked to the web to search for information on Andres Iniesta, the “quiet man” who scored the one goal that won his country its first World Cup championship. They were also interested in Dani Jarque, a Spanish footballer who died last fall and whose name was emblazoned on Iniesta’s undershirt, which he displayed after his goal. And after the match, searches for keeper Iker Casillas skyrocketed to a higher peak than any other popular footballer—including household names like Ronaldo, Villa and Messi—reached during the Cup. Sometimes, it seems, goalies get the last word.

    We hope you enjoyed our series of posts on World Cup search trends and we’ll see you in Brazil in 2014!

    10+ ways to be productive when you're brain dead

    Ever have one of those days when you wake up but your brain doesn't? Come on; tell the truth. Hell, it happens to me all the time. There are dozens of causes: overwork, overstress, lack of sleep, too much fun the night before, temporary depression, being sick of a never-ending project, or just plain laziness, to name a few. Sometimes the old noggin just doesn't want to work. Do you blame it?
    On days like that, you essentially have four choices: stay home, try to have a normal day and probably screw it up, exercise, or adjust. Since I don't consider the first two choices real options, at least not for me or most executives, and when I don't feel like thinking I sure as hell don't feel like exercising, I decided long ago to find ways to adjust, to make the most of those days when my brain's on autopilot.
    As it turns out, there are certain types of tasks that most of us either have to do or should do, even managers and top-level executives, that don't require you to be at the top of your game. Of course, you may have to crank up your willpower to get started, but the point is that, once you do -- get started -- you'll cruise right through these tasks.
    1: Work on the graphics, special effects, or slide show timing of a PowerPoint presentation. It's creative work that doesn't require intense thought.
    2: Hold one-on-one meetings with your staff or peers, ask them how you can improve, and really listen to what they have to say.
    3: Let your mind wander and brainstorm. You see, when your conscious mind is tired, your subconscious sort of kicks in and takes up the slack. You'd be amazed at what you can tap into. I get some of my best ideas when I'm half asleep or not even thinking.
    4: Walk around, talk to people, let your guard down, be yourself.
    5: If you happen to be writing something, do an outline. The final product always turns out better that way, and outlining is methodical work that doesn't require a lot of brainpower.
    6: Check out what the competition is doing. Do a little digging. Call some contacts to get some competitive G2.
    7: Try a change of scenery, like working outside.
    8: Schmooze with some vendors or partners. No, I'm not saying completely waste people's time; I'm talking about checking in and asking open questions you usually don't ask.
    9: Take your administrative assistant or favorite employee out for a long lunch and really get to know them.
    10: Do your expense reports. Yes, they're my least favorite too, but they're more or less brainless work.
    11: Clean off your desk. Granted, this one really sucks, but you feel such a sense of accomplishment when you're done, it almost makes it worthwhile.

    Gather Exchange 2010 mailbox information with a PowerShell command

    When it comes to infrastructure reporting, different organizations require different levels of detail. For some, understanding their Exchange footprint is simply a matter of identifying how much space is used by the various Exchange databases. Others need much more granular information, such as the size of each individual mailbox and the number of items it contains. And for detecting abandoned mailboxes, Exchange administrators might want to learn the last time someone logged in to a particular mailbox. Fortunately, the PowerShell command get-mailboxstatistics can do all this and more. (There is a caveat to the Last Login Time; I address this later in the article.)

    The get-mailboxstatistics command syntax is simple and requires one of three parameters to be specified:

    • Identity. Specify the user for which mailbox statistics should be gathered.
    • Database. Specify the database for which mailbox statistics should be gathered. All mailboxes in the target database will be listed.
    • Server. Specify the mailbox server for which mailbox statistics should be gathered. All mailboxes in all databases on the target server will be listed.

    By default, get-mailboxstatistics returns the mailbox display name, the number of items present in the mailbox, the status of any storage limits that might be in place (e.g., is the mailbox meeting the administrator-imposed storage quotas?), and the time the mailbox was last used. Figure A shows the results from the get-mailboxstatistics command returning results at the database level.

    Figure A


    If you want to know exactly when you moved a mailbox from one server to another, you can use the -includemovehistory parameter, and you can have this information added to the output of the get-mailboxstatistics command. In Figure B, you’ll note that my mailbox was moved on 7/9/2010 at around 10:30 PM. The total move size was just under 5.6 GB and the process took almost one hour to complete. Also notice that this mailbox currently has no size limit; this is because we’re in the final pilot stages of our rollout to Exchange 2010, and a number of mailboxes have not yet been configured with new limits. (And, yes, my mailbox is huge, but it’s not as bad as it seems due to the way that I use my mailbox.)


    Figure B


    From a parameter perspective, the only other parameter of interest is -archive, which specifies that statistics regarding a related archive mailbox should be provided along with the main mailbox information.
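    Putting the pieces above together, a few representative invocations might look like the following sketch. The database name and user alias are placeholders, and the exact columns you select can be adjusted to taste:

```powershell
# List statistics for every mailbox in one database, largest first
Get-MailboxStatistics -Database "MBX-DB01" |
    Sort-Object TotalItemSize -Descending |
    Select-Object DisplayName, ItemCount, TotalItemSize, LastLogonTime

# Include the move history for a single user's mailbox
Get-MailboxStatistics -Identity "jdoe" -IncludeMoveHistory

# Statistics for a user's related archive mailbox
Get-MailboxStatistics -Identity "jdoe" -Archive
```

Because the cmdlet emits objects, you can pipe the results into Export-Csv or a formatting cmdlet to build whatever report your organization needs.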

    Last Login Time caveat

    As you saw in Figure A, all of the listed mailboxes appear to have been accessed in the same week; unfortunately, that may not actually be the case. The lastlogindate information stored with a mailbox reflects the last time the mailbox was accessed for any reason, including brick-level Exchange backups. If you're backing up Exchange in this way, the backup software logs in to every mailbox, so the last-login information becomes useless for spotting abandoned mailboxes.

    Summary

    Mailbox size information is important when it comes to space planning and other needs. As you move to Exchange 2010, this information can be invaluable. Bear in mind that Exchange 2010 completely does away with single instance storage, which can significantly increase overall capacity needs, although there's a tradeoff in lower IOPS requirements. Understanding true space needs is a critically important planning step.

    Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 Release Candidate

    The Virtual Machine Manager Self-Service Portal 2.0 (VMMSSP) is a fully supported, partner-extensible solution that can be used by customers to pool, allocate, and manage their compute, network and storage resources to deliver the foundation for a private cloud platform in their datacenter.

    VMMSSP (also referred to as the self-service portal) is built on top of Windows Server 2008 R2, Hyper-V, and System Center VMM. You can use it to pool, allocate, and manage resources to offer infrastructure as a service and to deliver the foundation for a private cloud platform inside your datacenter. VMMSSP includes a pre-built web-based user interface that has sections for both datacenter managers and business unit IT consumers, with role-based access control, as well as a dynamic provisioning engine. VMMSSP reduces the time needed to provision infrastructure and its components by offering business unit “on-boarding” plus infrastructure request and change management. The VMMSSP package also includes detailed guidance on how to implement VMMSSP in your environment.

    Important: VMMSSP is not an upgrade to the existing VMM 2008 R2 self-service portal. You can choose to deploy and use one or both self-service portals depending on your requirements.

    The self-service portal provides the following features that are exposed through a web-based user interface:

    * Configuration and allocation of datacenter resources: Store management and configuration information related to compute, network and storage resources as assets in the VMMSSP database.
    * Customization of virtual machine actions: Provide a simple web-based interface to extend the default virtual machine actions; for example, you can add scripts that interact with Storage Area Networks for rapid deployment of virtual machines.
    * Business unit on-boarding: Standardized forms and a simple workflow for registering and approving or rejecting business units to enroll in the portal.
    * Infrastructure request and change management: Standardized forms and human-driven workflow that results in reducing the time needed to provision infrastructures in your environment.
    * Self-service provisioning: Supports bulk creation of virtual machines on provisioned infrastructure through the web-based interface, and helps business units manage their virtual machines based on delegated roles.

    http://www.microsoft.com/downloads/details.aspx?FamilyID=fef38539-ae5a-462b-b1c9-9a02238bb8a7&displaylang=en