Monthly Archives: July 2013


E-authentication: What IT managers will be focusing on over the next 18 months


If you are a federal systems manager working in government today, chances are good that, over the next 18 months, you will be focusing heavily on identity management and authentication.

This boost in electronic authentication solutions is triggered by several factors, including the growth of new cloud solutions, the migration to enterprisewide authentication as storage and shared databases are centralized, and the growth of new types of integrated systems — such as those created in mash-up environments.

Today’s sprawling IT environments mean that it’s more important than ever for person-to-machine and machine-to-machine identities to be properly established for each connection. Across the federal government, more than $70 million will be spent on identity management solutions in fiscal 2013, and about the same will be spent in 2014, according to the Office of Management and Budget.

For IT managers embarking on such efforts, a good starting point is OMB Memo M-04-04, which focuses on e-authentication guidance for federal agencies. Even though the memo is nearly 10 years old, it’s still the most highly cited directive for authentication, and it provides a useful outline for the decision process agencies are likely to follow when evaluating e-authentication.

At a basic level, e-authentication is the process of establishing confidence in user identities electronically presented to an information system. A key message within the memo is that OMB encourages agencies to make decisions on the type of e-authentication posture they want to enforce. There are different levels of need, and it doesn’t make sense to pay for the most secure systems if they aren’t necessary.

Four levels are described:

  • Level 1:  Little or no confidence in the asserted identity’s validity.
  • Level 2:  Some confidence.
  • Level 3:  High confidence.
  • Level 4:  Very high confidence.
This is not a decision that should be made arbitrarily. Each agency needs to conduct a risk assessment of its systems, identify and map those risks, and make business decisions about which assurance level is acceptable for each system. For example, when allowing guest access to the Wi-Fi system, and then to some portions of a public network, Level 1 authentication may be perfectly acceptable. But when allowing access to databases and important business applications, Level 3 or Level 4 may be more appropriate.

Next, the agency must select appropriate authentication solutions based on technical guidance for e-authentication. This will vary significantly based on the types of resources in question. More details on this process can be found in the National Institute of Standards and Technology’s Electronic Authentication Guideline from 2011 (Special Publication 800-63-1, which has a follow-on draft publication from this year, Draft SP 800-63-2).
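The memo leaves the mapping from assessed risk to assurance level up to each agency. As a purely illustrative sketch, that decision logic might look like the following; the risk categories and thresholds here are invented for the example and are not taken from M-04-04 or SP 800-63:

```python
# Illustrative only: map per-category risk impact ratings to an OMB
# M-04-04 assurance level. The categories and thresholds are invented
# for this sketch; a real mapping comes from the agency's own risk
# assessment.

LEVEL_DESCRIPTIONS = {
    1: "Little or no confidence in the asserted identity",
    2: "Some confidence",
    3: "High confidence",
    4: "Very high confidence",
}

def required_assurance_level(impacts):
    """impacts: dict of risk category -> 'low' | 'moderate' | 'high'."""
    ratings = list(impacts.values())
    if ratings.count("high") >= 2:
        return 4
    if "high" in ratings:
        return 3
    if "moderate" in ratings:
        return 2
    return 1

# Guest Wi-Fi: low impact everywhere, so Level 1 is acceptable.
print(required_assurance_level({"financial_loss": "low", "privacy": "low"}))  # 1

# A line-of-business database with sensitive records rates higher.
print(required_assurance_level({"financial_loss": "high", "privacy": "high"}))  # 4
```

The point of the sketch is that the assurance level falls out of the risk assessment mechanically; the judgment calls live in how each risk category is rated.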

Once the system is in place, organizations should conduct periodic risk assessments of their solution and map how any newly discovered risks will be addressed. This may require new validation that a solution still meets its required assurance levels. If it does not, new configuration or programming may be needed, and new testing may be required after the work is done.

As spending plans associated with these types of investments are made, keep in mind that identity and authentication management is tracked by OMB as a “primary investment area.” So if you plan to install or improve a current identity management solution, OMB will want to know the associated system name and any associated ID number as part of your agency’s annual Exhibit 300 submission.

Also, an authentication solution usually requires compliance with the Federal Information Security Management Act (FISMA), which can vary depending on the type of systems being connected. It might be worth enlisting the services of a cloud provider that handles authentication as a service (AaaS); FISMA compliance is part of their approval process. The Homeland Security Department has been the most progressive agency in moving in this direction. Last year, DHS announced that more than 30 applications were using AaaS; this year that number reportedly has grown to 70.

For background information on how e-authentication works with ID cards and HSPD-12, visit idmanagement.gov.

There’s also the evolving space of mobile device authentication. Recently, Akamai Technologies, working with ID authentication provider Daon, announced the availability of Mobile Authentication as a Service (MAaaS). For the federal government, this solution is available as a cloud-based application through CGI Group.

The General Services Administration’s FedRAMP service has a process for providers that wish to offer AaaS. Check out this document, which is intended to be used by cloud service providers, Third Party Assessor Organizations (3PAOs) and contractors that want to coordinate the certification of cloud-based e-authentication requirements.

Cloud-based AaaS is poised to grow as demand for a quick authentication solution picks up — especially for connected resources that may be geographically dispersed. And if you haven’t considered it yet, the time to do so could be now.

Retrieved from GCN


Navigating the troubled waters of patch management

Patching is an effective way to mitigate security vulnerabilities in software and firmware, but patch management in an enterprise can be a daunting task because of the complexities of deployment across large, heterogeneous platforms.

“There are several challenges that complicate patch management,” the National Institute of Standards and Technology warns in its latest guidance on the practice. “Organizations that do not overcome these challenges will be unable to patch systems effectively and efficiently, leading to compromises that were easily preventable.”

NIST describes the challenges as well as the technology available for meeting them in the updated release of its Guide to Enterprise Patch Management Technologies, Special Publication 800-40, Revision 3.

Patch management is a basic part of federal security controls required under the Federal Information Security Management Act. Researchers and hackers aggressively search for software flaws that can be exploited in commercial products, and vendors have worked with the security community to develop processes for responding with patches in a timely manner. With the complexity and frequent upgrading of operating systems and applications, this process has resulted in a nearly continuous stream of patches, straining the resources of agencies’ IT staffs.

Among the challenges in managing the process are the variety of mechanisms for applying patches, the different schemes for managing hosts and the maintenance of an accurate inventory of software. But the biggest problems are prioritizing, testing and scheduling deployment of patches. Because of the volume of patches being issued and the need to test them to ensure that they don’t do more harm than good, getting the patches deployed in a timely manner can be difficult, if not impossible. Because of the mission-critical nature of some systems, administrators sometimes put system availability above security and are reluctant to update the software.

Tools are available to help automate at least part of the patch management process, particularly the discovery of unpatched vulnerabilities and outdated software that needs attention. The NIST guidelines describe three basic techniques for identifying missing patches, each with its own advantages and disadvantages: scanning by a host-based agent, agentless scanning and passive network monitoring. Choosing a technology depends on an enterprise’s needs, and some might want to use more than one type of tool.

  • Agent-based tools work best for hosts that are not always on the local network, such as mobile devices, because they enable regular scanning even while the host is disconnected. But some devices do not allow agents to run on them, and agents might not be available for all platforms.
  • Agentless scanning, which is done from a server, does not require installation or execution of an agent on each host. But hosts will not be scanned when they are not on the local network, and scanning can be blocked by firewalls or other technologies such as Network Address Translation.
  • Passive network monitoring examines network traffic to identify applications and operating systems that need attention. Passive monitoring does not require privileges on the hosts being monitored, and these tools can monitor devices that the enterprise does not control, such as those of visitors and contractors that log onto the network. But this technique depends on the ability to identify applications and software versions based only on network traffic, and it works only for hosts that are on the network.
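The trade-offs in the list above can be condensed into simple per-host selection logic. This is only a sketch: the host attributes below are assumptions for illustration, not a data model or recommendation from SP 800-40:

```python
# Sketch: pick a patch-assessment technique per host, following the
# trade-offs NIST describes. Host attribute names are assumptions.

def suggest_technique(managed, supports_agent, always_on_lan):
    if not managed:
        # Unmanaged devices (visitors, contractors): passive monitoring
        # is the only option that needs no privileges on the host.
        return "passive network monitoring"
    if supports_agent and not always_on_lan:
        # Mobile hosts: an agent keeps scanning while off the network.
        return "agent-based scanning"
    if always_on_lan:
        # Always-connected hosts can be reached from a central server
        # without installing anything on them.
        return "agentless scanning"
    # No agent support and intermittently connected: fall back to
    # passive monitoring whenever the host appears on the network.
    return "passive network monitoring"

print(suggest_technique(managed=True, supports_agent=True, always_on_lan=False))
# agent-based scanning
```

A real deployment would likely combine techniques, since no single one covers unmanaged devices, mobile hosts and servers equally well.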

Metrics are necessary for determining the effectiveness of any security program. The NIST guidelines also provide suggestions for measuring the implementation of the patch management program, its effectiveness and its impact.

Less mature programs should start by measuring implementation and by looking at the percentages of devices and services being addressed in the program. More mature programs can measure effectiveness of implementation by assessing the frequency of updates, the time required to patch assets and the percentage of hosts that are fully patched at any given time. Metrics for the impact of the program can include costs of and savings from the patch management process.
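Those three tiers of metrics might be sketched as follows; the field names and data shape are invented for the example, since the guideline does not prescribe a data model:

```python
# Sketch: implementation, effectiveness and impact metrics for a patch
# management program. Field names are assumptions for illustration.

def patch_metrics(hosts, cost=0.0, savings=0.0):
    """hosts: list of dicts with 'in_program' and 'fully_patched' flags,
    and 'days_to_patch' (None if the host is not yet fully patched)."""
    total = len(hosts)
    in_program = sum(1 for h in hosts if h["in_program"])
    patched = sum(1 for h in hosts if h["fully_patched"])
    times = [h["days_to_patch"] for h in hosts if h["days_to_patch"] is not None]
    return {
        # Implementation: how much of the fleet the program covers.
        "pct_in_program": 100.0 * in_program / total,
        # Effectiveness: patch state and speed of deployment.
        "pct_fully_patched": 100.0 * patched / total,
        "avg_days_to_patch": sum(times) / len(times) if times else None,
        # Impact: net cost of running the program.
        "net_cost": cost - savings,
    }

fleet = [
    {"in_program": True, "fully_patched": True, "days_to_patch": 7},
    {"in_program": True, "fully_patched": True, "days_to_patch": 21},
    {"in_program": True, "fully_patched": False, "days_to_patch": None},
    {"in_program": False, "fully_patched": False, "days_to_patch": None},
]
m = patch_metrics(fleet)
print(m["pct_in_program"], m["pct_fully_patched"], m["avg_days_to_patch"])
# 75.0 50.0 14.0
```

A less mature program would watch the first number grow; a more mature one would focus on driving the second up and the third down.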

Retrieved from GCN

Syrian hackers take over Thomson Reuters’ Twitter

HACKERS claiming to be the Syrian Electronic Army have taken control of a Thomson Reuters Twitter account and claimed mixed success in an assault on the US White House.

The hackers known as the SEA have made a name for themselves by carrying out sophisticated phishing attacks on media companies.

So far they have claimed a few scalps including the Guardian, the Associated Press and the BBC.

In this latest instance they apparently took over the @thomsonreuters Twitter feed. The account was out of its owner’s hands during the night, during which a number of cartoons were posted.

Those messages, which have been preserved by Buzzfeed, have since been removed.

The SEA announced the attack on its own Twitter feed when it linked to a report at the Ehacker News.

The Twitter feed also announced a successful assault on White House email accounts.

In a couple of tweeted messages the group claimed to have obtained and shared logins to White House systems. It admitted that the assault was not particularly successful and gathered only old @whitehouse emails. “You were lucky this time,” it said.

According to a report on the Nextgov website the hackers were able to get access to systems through an email that was designed to look like a BBC or CNN news story and took users to a fake Gmail or Twitter login page.

The attack serves as a reminder not to click on suspicious or untrusted links.

Retrieved from The Inquirer

Software experts attack cars, to release code as hackers meet

Charlie Miller and Chris Valasek say they will publish detailed blueprints of techniques for attacking critical systems in the Toyota Prius and Ford Escape in a 100-page white paper, following several months of research they conducted with a grant from the U.S. government.

The two “white hats” – hackers who try to uncover software vulnerabilities before criminals can exploit them – will also release the software they built for hacking the cars at the Def Con hacking convention in Las Vegas this week.

They said they devised ways to force a Toyota Prius to brake suddenly at 80 miles an hour, jerk its steering wheel, or accelerate the engine. They also say they can disable the brakes of a Ford Escape traveling at very slow speeds, so that the car keeps moving no matter how hard the driver presses the pedal.

“Imagine what would happen if you were near a crowd,” said Valasek, director of security intelligence at consulting firm IOActive, known for finding bugs in Microsoft Corp’s Windows software.

But it is not as scary as it may sound at first blush.

They were sitting inside the cars using laptops connected directly to the vehicles’ computer networks when they did their work. So they will not be providing information on how to hack remotely into a car network, which is what would typically be needed to launch a real-world attack.

The two say they hope the data they publish will encourage other white-hat hackers to uncover more security flaws in autos so they can be fixed.

“I trust the eyes of 100 security researchers more than the eyes that are in Ford and Toyota,” said Miller, a Twitter security engineer known for his research on hacking Apple Inc’s App Store.

Toyota Motor Corp spokesman John Hanson said the company was reviewing the work. He said the carmaker had invested heavily in electronic security, but that bugs remained – as they do in cars of other manufacturers.

“It’s entirely possible to do,” Hanson said, referring to the newly exposed hacks. “Absolutely we take it seriously.”

Ford Motor Co spokesman Craig Daitch said the company takes seriously the electronic security of its vehicles. He said the fact that Miller’s and Valasek’s hacking methods required them to be inside the vehicle they were trying to manipulate mitigated the risk.

“This particular attack was not performed remotely over the air, but as a highly aggressive direct physical manipulation of one vehicle over an elongated period of time, which would not be a risk to customers and any mass level,” Daitch said.

‘TIME TO SHORE UP DEFENSES’

Miller and Valasek said they did not research remote attacks because that had already been done.

A group of academics described ways to infect cars using Bluetooth systems and wireless networks in 2011. But unlike Miller and Valasek, the academics have kept the details of their work a closely guarded secret, refusing even to identify the make of the car they hacked. (reut.rs/NWOPjq)

Their work got the attention of the U.S. government. The National Highway Traffic Safety Administration has begun an auto cybersecurity research program.

“While increased use of electronic controls and connectivity is enhancing transportation safety and efficiency, it brings a new challenge of safeguarding against potential vulnerabilities,” the agency said in a statement. It said it knew of no consumer incident where a vehicle was hacked.

Still, some experts believe malicious hackers may already have the ability to launch attacks.

“It’s time to shore up the defenses,” said Tiffany Strauchs Rad, a researcher with Kaspersky Lab, who previously worked for an auto security research center.

A group of European computer scientists had been scheduled to present research on hacking the locks of luxury vehicles, including Porsches, Audis, Bentleys and Lamborghinis, at a conference in Washington in mid-August.

But Volkswagen AG obtained a restraining order from a British high court prohibiting discussion of the research by Flavio D. Garcia of the University of Birmingham, and Roel Verdult and Baris Ege of Radboud University Nijmegen in the Netherlands.

A spokeswoman for the three scientists said they would pull out of the prestigious Usenix conference because of the restraining order. Both universities said they would hold off on publishing the paper, pending the resolution of litigation.

Volkswagen declined to comment.

Retrieved from REUTERS


Department of Defense plans to share wireless spectrum after being blasted by Congress


The US Department of Defense has long held wireless spectrum for use in military operations, flight combat training, and even drone training programs, but now the agency has announced plans to give some of that spectrum to the wireless carriers. According to The Wall Street Journal, the DOD said that it would work to “relocate” out of the 1755 to 1780 MHz bands and make them available to carriers, though there are few details on exactly how that transition would work. The DOD said it would cost about $3.5 billion to move much of its operations to the 1780 to 1850 MHz bands, which it already operates in, with some additional operations moving into the 2025 to 2100 MHz bands currently used for broadcast TV. Still, there’s no timeline for when this move might happen or how the military would work with broadcasters to share spectrum.

This attempt at increased collaboration comes less than a month after Congress blasted the DOD for its unwillingness to share wireless spectrum. During the hearing, Congresswoman Anna Eshoo (D-CA) was particularly critical of the DOD, asking Teri Takai, the US Defense Department’s chief information officer, why the Defense Department had not yet given any estimates or timeframes for shifting its systems away from the highly desired spectrum it currently uses. “Why wouldn’t the two of you [the DOD and the US National Telecommunications and Information Administration] sit down and talk about it? Why am I even having to ask this question again?” Eshoo asked. Surprisingly, it seems that those talks might come sooner than Congress thought last month — though we’ll have to wait and see exactly what the DOD will do to make this a smooth transition.

Retrieved from The Verge


Apple Developer Center Was Hacked; Site Remains Down While Company Overhauls Security

Apple’s developer site was accessed by “an intruder” last Thursday, the company has disclosed, and Apple has not ruled out the possibility that developers’ names, mailing addresses, and/or email addresses were compromised.

The company just sent developers an email explanation, after pushing them off for the past three days with notices that the developer site was down for maintenance.

It appears that the potentially vulnerable names and addresses had not been encrypted. By contrast, Apple says developers’ “sensitive personal information” was encrypted, so it has not been accessed.

Before it reopens the developer site, Apple is “completely overhauling our developer systems, updating our server software, and rebuilding our entire database,” the email said.

Apple spokesman Tom Neumayr said he would not go into further detail about the weakness of the old system or the improvement of the new system, but he noted that no customer information was impacted.

“The website that was breached is not associated with any customer information,” Neumayr said. “Additionally, customer information is securely encrypted.”

The Apple developer site — which allots access to iOS 7, OS X Mavericks and other development kits, helps developers allocate apps to beta testers, and also includes popular developer-only forums — went down Thursday, and was first marked with a notice saying it was down for maintenance.

Later, it was updated with a notice saying, “We apologize that maintenance is taking longer than expected.” Developers were told that their memberships that would have expired during the downtime had been automatically extended.

Extended downtime is rare, and developers had wondered what was up, with some, including Marco Arment, theorizing that there had been some sort of security breach.

Here’s the full notice:

Apple Developer Website Update

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

In order to prevent a security threat like this from happening again, we’re completely overhauling our developer systems, updating our server software, and rebuilding our entire database. We apologize for the significant inconvenience that our downtime has caused you and we expect to have the developer website up again soon.

Retrieved from AllThingsD


Apple, Google, others want to disclose more details on data snooping

A host of tech players and other organizations have asked Washington to allow them to reveal more information about the requests for user data.


The tech industry wants to come cleaner about its role in providing user data to the government and is asking the Feds for permission.

In a letter sent Thursday to the White House and Congress, dozens of organizations involved in or concerned about the data-snooping controversy made a couple of requests, Reuters has reported. Companies want to be able to regularly provide statistics on the number and scope of user data records ordered by the government. They also want to be allowed to disclose the number of people, accounts, or devices targeted in those requests.

On Wednesday, AllThingsD obtained a copy of the letter with the actual request:

“Basic information about how the government uses its various law enforcement-related investigative authorities has been published for years without any apparent disruption to criminal investigations,” a copy of the letter reads. “We seek permission for the same information to be made available regarding the government’s national security-related authorities.”

Apple, Google, and Facebook are among the tech players that signed the letter. Other organizations that have joined the effort include Human Rights Watch, the Electronic Frontier Foundation, the American Civil Liberties Union, Americans for Tax Reform, and FreedomWorks.

The government, at least as voiced by National Security Agency chief Keith Alexander, seems open to the idea as long as it doesn’t jeopardize any investigations, Reuters added.

“We just want to make sure we do it right, that we don’t impact anything ongoing with the FBI,” Alexander told the Aspen Security Forum in Colorado. “I think that’s the reasonable approach.”

Alexander also stressed that the companies had no choice in handing over user data to the government as they were compelled by court order to do so. As such, they want to offer more specifics on the type of data they were forced to provide.

“From my perspective, what they want is the rest of the world to know that we’re not reading all of that email, so they want to give out the numbers,” Alexander said, according to Reuters. “I think there’s some logic in doing that.”

Retrieved from CNet


U.S. Government Can No Longer Be Trusted To Protect The Internet From International Power Grabs

Jeff Jarvis

Editor’s note: Jeff Jarvis is the author of “What Would Google Do?,” “Public Parts,” and the Kindle Single “Gutenberg the Geek” and is cohost of This Week in Google. He directs the Tow-Knight Center for Entrepreneurial Journalism at the City University of New York. Follow him on Twitter @jeffjarvis.

In the wake of Edward Snowden’s whistleblowing, the United States government can no longer be seen as a beneficent or even merely benign actor on the Internet. That could have disastrous consequences, first in reducing trust in the cloud and its American hosts and second in potentially upending Internet governance.

Many governments have been chomping at the bit to gain greater control of the net:

  • Two years ago at the eG8 meeting in Paris, I faced then-President Nicolas Sarkozy of France and urged him to take a Hippocratic oath for the net: First, do no harm. He mocked the question and visibly warmed to the idea of the net as an eighth continent onto which he could plant his flag.
  • After reports of U.S. surveillance of Brazilian companies and citizens, their government has asked the United Nations to step in to protect privacy on the net.
  • And at last year’s Internet Governance Forum and International Telecommunications Union meetings, such stalwarts of free speech as Russia, China, Saudi Arabia, the United Arab Emirates, Algeria, and Sudan tried to claim “equal rights to manage the internet.” They were blocked when the U.S. gathered other Western nations to walk out of treaty negotiations.

But now that Snowden and the Guardian have revealed the U.S. to be the Big Ear listening to more and more raw communication – to “collect it all,” in the words of NSA Director Gen. Keith Alexander — can America still be seen as both the mother and the protector of the Internet?

The revelations are “likely to be severely setting back the cause of Internet freedom in the international community,” wrote Zachary Keck in The Diplomat. “States and inter-governmental organizations are likely to gain even more control over what has long been thought of as a stateless entity.”

The net’s own sovereignty depends on no one having sovereignty over it.

In stronger words yet, John Naughton, a tech columnist for the Observer in London, warned that Snowden’s leaks demonstrate “that the US is an unsavoury regime too. And that it isn’t a power that can be trusted not to abuse its privileged position. They also undermine heady U.S. rhetoric about the importance of a free and open Internet. Nobody will ever again take seriously US Presidential or State Department posturing on Internet freedom. So, in the end, the NSA has made it more difficult to resist the clamour for different – and possibly even more sinister – arrangements for governing the Net.”

But the net’s own sovereignty depends on no one having sovereignty over it. I wrote that a few years ago when I came to the conclusion that no company and no government can protect the freedom of the net. So who will? We, the citizens of the net — and many of you, its builders — must engage in a discussion of the principles of a free net and open society that we wish to protect.

Those principles include the ideas that we have a right to privacy no matter the medium and a right to speak, assemble and act. We have a right to connect, and if that connection is cut or compromised, that must be seen as a violation of our human rights. All bits are created equal and if any bit is stopped or detoured — or spied upon — on its way to its destination, then no bit can be presumed to be free and secure. And the net must remain open and distributed under the thumb of no authority.

Let’s be clear that the net is enabling disruptive forces to organize and act against governments from Tunisia and Egypt to Turkey and Brazil — not to mention the United States. It is in the interests of these institutions to control the net and its redistribution of power.

Our net is in danger — not because of Edward Snowden, but because of what we now know about the actions of the U.S. government. The threat is bigger than SOPA or PIPA or ACTA. It is a threat to the nature of the net.

[Image: CUNY]

Retrieved from TechCrunch


Wi-Fi could come to CCTV, lamp posts in government plans

In a bid to make our city streets more connected, the government wants to put Wi-Fi in lamp posts, traffic lights and CCTV cameras.

The Department for Culture, Media and Sport (DCMS) has plans for the £150 million in the government’s Urban Broadband Fund, earmarked for creating super-connected cities with 80Mbps Internet connections or faster. Recombu highlights one of the options on the table: wiring up street furniture.

Street furniture is the stuff that’s happily standing around minding its own business, like lamp posts, benches, bins, traffic lights, and CCTV cameras. OK, maybe not minding its own business so much as minding everyone else’s.

Internet service providers — the companies that do your broadband, like BT, Virgin Media, Be and the like — would be allowed to install Wi-Fi gubbins on such items in public places. They would then collect money from users, whether from people paying to log in or from public money.

On top of street furniture, Wi-Fi could be added to libraries, museums and other local points of interest, as well as council offices and public transport. The government also has plans to spend some of the urban broadband fund on vouchers for small businesses.

In London, free Wi-Fi is provided in tube stations by Virgin Media, which can be used by anyone on Virgin and some other phone networks including Vodafone and O2.

Meanwhile, heading out of the city and into the lush green hills and rolling dales of our green and pleasant land, the government’s rural broadband plans were recently criticised for being a day late and a dollar short — or more accurately, two years late and £207m short.

What do you think of the government’s broadband plans? Is CCTV a reassuring safety measure or the intrusive eye of the Big Brother state? Tell me your thoughts in the comments or on our always-vigilant Facebook wall.

Retrieved from CNet


Homeland Security urged ISPs to block IP addresses of suspected Chinese hackers


The US Department of Homeland Security and FBI provided a list of IP addresses used by alleged Chinese military hackers to American internet service providers (ISPs) earlier in February, and not-so-subtly encouraged the ISPs to block them, The Wall Street Journal reported today. Based on The Journal’s report, the IP addresses on the list handed to ISPs were ones linked to the “Comment Crew,” an alleged Chinese military hacking outfit that was described in a widely publicized February report from cybersecurity firm Mandiant. As it turns out, Mandiant actually alerted the US government to its findings a week before it went public with them on February 18th. According to The Journal, the DHS and the FBI then released a memo listing the Comment Crew’s suspected IP addresses. DHS officials then sent a follow-up email to ISPs telling them to “institute actions” based on the memo.
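The actual list was never published. As a sketch only, screening traffic against such a blocklist might look like the following; the netblocks below are RFC 5737 documentation ranges standing in for the real, unpublished addresses:

```python
# Illustrative only: check traffic sources against a blocklist of
# suspect networks. The netblocks here are RFC 5737 documentation
# ranges, not the actual ones on the DHS/FBI memo.
import ipaddress

BLOCKLIST = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.23/32"),
]

def is_blocked(source_ip):
    """Return True if source_ip falls inside any blocklisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKLIST)

print(is_blocked("192.0.2.77"))   # True
print(is_blocked("203.0.113.9"))  # False
```

As the article goes on to note, a filter keyed to fixed addresses is brittle: attackers can simply move to new IPs, which is apparently what happened.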


 


The Journal cites US officials as saying the goal of giving the IP addresses to the ISPs was to let these companies know that traffic coming over their networks could actually be attacking other US companies. At least some ISPs appear to have followed the urging of DHS, because The Journal reports that shortly after the DHS / FBI memo was released, there was a noticeable drop in observed attacks and infiltrations by the Comment Crew. But that also appears to have been short-lived, as the number of attacks quickly rebounded, and The Journal’s sources in the US government say that it was because the Comment Crew wised up and changed their IPs.

The Journal doesn’t specify exactly which ISPs received the memo, nor which IP addresses were included on the original list, but says that one of the IP addresses was for the website of a “major oil company” that was compromised by the Comment Crew or other hackers. If it’s accurate, The Journal’s report suggests a previously unknown level of cooperation between the government and private industry when it comes to fighting hackers, one that calls into question the need for further expanding information-sharing efforts between the two sectors. Nonetheless, Congress has been pushing to pass new bills including the controversial CISPA that would do just that. At the same time, US officials told The Journal that US intelligence services were also running cyber espionage operations on Chinese targets, but that these were all military and government, and not private companies. While the particular series of incidents described by The Journal took place months ago, such cooperation and spying allegedly continues to this day.

Retrieved from The Verge