Monthly Archives: October 2013

Paul Ryan and Elizabeth Warren will now take your Change.org petitions

When a handful of Denver residents came together this year to demand a protected bike lane downtown, their most effective weapon wasn’t a visit to city hall or a coordinated telephone campaign. It was an online petition — 834 signatures that inspired two city council members and the Denver Public Works commission to publicly back their cause.

“Let me know what else I need to do to help this,” councilwoman Susan Shepherd wrote back, right on the petition’s Web site.

The experience encouraged Change.org, the platform behind the petition, to tie leaders more closely to those they represent. The organization seems serious about recruiting powerful people. On Wednesday, Change.org rolled out an upgrade called Decision Makers, which features profile pages for members of Congress that collect all of their public responses. Yep, that’s right: Your elected officials will now be responding directly to your Change.org petitions.

Rep. Paul Ryan (R-Wis.) was among the first to sign up.

“Change.org is going to be a big help,” said Ryan, chair of the House Budget Committee. “It will be a transparent, public forum where we can talk with our constituents.”

Others who’ve pledged to join the program include Sen. Elizabeth Warren (D-Mass.), Rep. Henry Waxman (D-Calif.) and Rep. Mike Honda (D-Calif.). The push to include lawmakers will soon lead to a broader focus on city officials, governors, business executives and others. One mayor, San Francisco’s Ed Lee, has indicated he’ll be responding to petitioners, too.

We the People, run by the people

Targets of a Change.org petition currently get notified when they receive a request, and when new signatures are added. But for the most part, it’s been a one-way relationship. Now the lines of communication will go both ways, with opportunities for a petition creator to keep rallying regardless of the response. In an interview, Change.org’s Jake Brewer said the system works much like the White House’s We the People petition site — but with some key differences. For one thing, while decision makers have access to the Change.org platform to log their responses, they don’t own it or set the rules. That’s a subtle dig at the Obama administration, which has on occasion decided to ignore petitions that crossed the signature threshold required for an official response.

That said, what also sets Change.org’s Decision Makers program apart from We the People is that it won’t include an explicit signature threshold. Brewer said Change.org simply makes recommendations for where to draw the line depending on a constituency’s population size and geography. In other words, it’s the civic process that matters — not the outcome.

“We need people’s voices to outweigh money in politics,” Brewer said.

Do online petitions really matter?

In its basic mechanics, the online petition of 2013 isn’t so far removed from its offline cousin. But in the way it’s socially regarded, the Internet petition has become a prominent symbol for 21st-century politics. Participating in a movement has never been easier, a fact that has led critics to dismiss the online petition as so much idle slacktivism.

Yes, it’s far easier to put your name on a petition than it is to stage a physical sit-in. At the same time, managing a petition — and doing it well — can be deceptively hard. At the national level, it takes 100,000 signatures to warrant a White House reply. (To be fair, the administration has had to raise the threshold over time because hitting the mark grew easier as the service became more popular. But think of it this way: Only a fraction of White House petitions have ever produced a substantial policy change.)

“A hundred signatures on a petition to a school board is going to have much greater impact than — I mean, you’re going to need to get 50,000 or 100,000 for a Paul Ryan or a Liz Warren before they even begin to take notice,” said Evan Sutton, a spokesperson for the New Organizing Institute, a left-leaning think tank for digital politics.

In short, there’s an inverse relationship between the importance of a decision maker and the work it’ll take to get them to listen, much less act.

The balance of power

If the bar for results is set so high, then who benefits more? Constituents, who through Change.org now enjoy greater access to elected officials? Or lawmakers, who by volunteering to respond can offer the promise of engagement without really committing themselves to anything?

In that respect, online petitions are really no different from constituent mail or phone calls, which can just as easily be written off. Where they might make a big difference, however, is in the way lawmakers talk to themselves and each other.

On the one hand, petitions can help officials make better choices. They’re not only an indication of what voters care about; they’re also a clue as to how strongly those voters feel. Dueling petitions give lawmakers insight into which position is more popular — or perhaps simply more organized. On the other hand, petitions also tend to activate the poles of the electorate. Lawmakers who are made to feel more accountable to them may wind up exacerbating partisanship. They might even marshal extreme petitions as evidence for their political agendas.

Still, the fact that petitions tend to be more effective in a local setting suggests that, on balance, when decision makers refer to a petition as a way to back themselves up, the effects will go toward producing results rather than talking points. Incidentally, that’s precisely what happened when Denver councilwoman Susan Shepherd signed onto the bike lane proposal. Within a week of Shepherd’s announcement, two other public officials had addressed the issue. The petition had become a collaborative policy initiative.

“I think that’s when it really starts to distinguish itself from a petition that goes at people,” said Brewer. “It actually starts to transition to, ‘Okay, the next step of this solution is that other people need to be involved. Let’s go involve them.’ ”

Retrieved from WashingtonPost

NSA spied on 35 world leaders, says leaked document

The U.S. monitored the phone conversations of 35 world leaders, according to a National Security Agency document provided by former contractor Edward Snowden to The Guardian newspaper.

The names of the world leaders are not disclosed in the 2006 document, and access to the 200 phone numbers of the leaders provided “little reportable intelligence,” as the phones were apparently not used for sensitive discussions. The numbers, however, provided leads to other phone numbers that were subsequently targeted, according to the document.

The document is likely to add to concerns about NSA surveillance, including its monitoring of phones of political leaders. German officials said this week that U.S. intelligence agencies may have spied on German Chancellor Angela Merkel’s mobile phone. There have also been reports that the U.S. hacked into the email server of Mexico’s former president Felipe Calderon while he was in office, and also spied on Brazil’s President Dilma Rousseff.

A report in French newspaper Le Monde alleged that the NSA recorded data relating to over 70 million phone calls involving French citizens over a period of 30 days. U.S. Director of National Intelligence James R. Clapper said the allegation that the NSA had collected recordings of French citizens’ telephone data was false.

Alarmed by the developments, some countries are considering countermeasures. Brazil, for example, has proposed in-country data storage requirements under an Internet bill before the country’s Parliament.

The phone numbers of leaders were handed over to the NSA as part of a policy encouraging people to provide the direct, residence, mobile phone and fax numbers of foreign political and military leaders.

The note asking for “targetable” phone numbers was addressed to “customer” departments, which the Guardian said include the White House and the Pentagon.

Retrieved from ComputerWorld

Adobe confirms Flash Player is sandboxed in Safari for OS X Mavericks

After years of fighting malware and exploits facilitated through Adobe’s Flash Player, the company is taking advantage of Apple’s new App Sandbox feature to restrict malicious code from running outside of Safari in OS X Mavericks.

As outlined in a post to the Adobe Secure Software Engineering Team (ASSET) blog, the App Sandbox feature in Mavericks lets Adobe limit the plugin’s ability to read and write files, as well as what assets Flash Player can access.

Adobe platform security specialist Peleus Uhley explained that in Mavericks, Flash Player ships with a profile file — specifically com.macromedia.Flash Player.plugin.sb — that defines the security permissions enforced by the OS X App Sandbox. The player’s capabilities are then restricted to only those operations required for normal operation.

In addition, Flash Player can no longer access local connections to device resources and inter-process communication (IPC) channels. Network privileges are also limited to within the App Sandbox parameters, preventing Flash-based malware from communicating with outside servers.

Uhley noted that the company has effectively deployed some method of sandboxing for Google’s Chrome, Microsoft’s Internet Explorer and Mozilla’s Firefox browsers. Apple’s Safari now joins that list for users running Mavericks.

“Safari users on OS X Mavericks can view Flash Player content while benefiting from these added security protections,” Uhley said. “We’d like to thank the Apple security team for working with us to deliver this solution.”

Retrieved from Apple Insider

China’s Alibaba to expand U.S. reach with new investment group

China’s Alibaba Group is poised to invest more in U.S. tech companies with the start of a new investment group that the e-commerce giant is setting up in San Francisco.

Alibaba is looking to back “innovative platforms, products, and ideas” that focus on e-commerce and new technologies with the investment group, the company said in an email Wednesday.

The company recently invested in three U.S. tech companies, the latest being ShopRunner, an online retailer that competes against Amazon.com. Alibaba led a recent investment round for ShopRunner that raised US$200 million.

Earlier in the year, the company also funded Quixey, a search engine for mobile apps, and Fanatics, a retailer of licensed sports merchandise.

The U.S. market and Silicon Valley have talent and expertise the Chinese e-commerce company wants to tap into, said Mark Natkin, managing director for Beijing-based Marbridge Consulting. At the same time, Alibaba has ambitions to become more international. Its investments in the U.S. could lay the groundwork for an eventual expansion into the country’s market, Natkin added.

“It’s often more effective, more cost efficient, to acquire a company that already has demonstrated success in the area you are trying to expand in,” he said.

While not as well known in the U.S., Alibaba reigns as the largest e-commerce company in its home market. The company established Tmall and Taobao, two of the country’s most popular online retail sites.

In the U.S., the company has a smaller presence with its wholesale supplier sites, Alibaba.com and AliExpress, which sell products to businesses and even consumers across the world.

Alibaba could also decide to list on a U.S. stock exchange, with an initial public offering that some reports have estimated could value the company at over $100 billion.

Retrieved from NetworkWorld

Keeping your endpoint data safe: some simple precautions

People are out to get you. Your business, your users, your systems and your data all have value to someone.

You could be targeted because you have something that someone specifically wants, or because attackers are hoping to find bank account details or email addresses to spam, or because they want your compute power for a botnet.

Few companies have the luxury of being able to dedicate one or more members of staff to security, but there are some easy layers of defence that everyone should have in place.

Security does not earn money, so it tends to be something companies attend to after an incident. But remember, you may very well be blamed for not having identified the risks.

Black magic

A unified threat management solution is one defence option. This is a gateway that has black wizardry to protect you from spam, intrusions and viruses, as well as controlling content or network traffic.

It is one of those balance calls: you won’t stop everything (impossible) but for a reasonably small outlay you will be ahead of many people out there and become a less easy target.

This sort of device should alert you to things going on that you would normally not be aware of. For example, I have seen laptops plugged into a corporate network whose users had administrator access and had clicked on a few dodgy websites at home, turning the machines into spam relay boxes.

Seeing an alert come up warning of large numbers of connection attempts on port 25 to an overseas address is an easy way to catch this.
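
That kind of check can be automated. Below is a minimal sketch, assuming the third-party psutil library and an invented alert threshold, that flags any process holding an unusual number of outbound SMTP connections:

```python
# Minimal sketch: flag processes with many outbound SMTP connections.
# Assumes the third-party psutil library; the threshold is illustrative.
import collections
import psutil

SMTP_PORT = 25
THRESHOLD = 20  # alert if a single process holds this many SMTP connections

def find_spam_relays():
    counts = collections.Counter()
    for conn in psutil.net_connections(kind="tcp"):
        # Count only established connections to a remote port 25.
        if conn.raddr and conn.raddr.port == SMTP_PORT \
                and conn.status == psutil.CONN_ESTABLISHED:
            counts[conn.pid] += 1
    for pid, n in counts.items():
        if n >= THRESHOLD:
            name = psutil.Process(pid).name() if pid else "unknown"
            print(f"ALERT: process {name} (pid {pid}) has {n} outbound SMTP connections")

if __name__ == "__main__":
    find_spam_relays()
```

Run from a scheduler every few minutes, a script like this could have caught the spam-relaying laptop described above.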

Ye of little faith

Endpoint security is another area where it might seem like you are dishing out cash for nothing.

Microsoft Windows 7 and below have this covered fairly well with Microsoft Security Essentials for your anti-virus needs and Windows Defender for spyware. Windows 8 has Windows Defender built in, covering both anti-virus and anti-spyware duties.

One of the most common methods of getting something unwanted is via an infected USB. Blocking USB devices is of course one line of defence, but if you are not in a highly secure environment you will just annoy your staff, who probably don’t want to see or believe the risks.

I have seen malware that launches via the autorun.inf file, which can mean users are running the malware on every PC they plug the device into.
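
As a hedged illustration of a countermeasure (again assuming psutil; drive-detection details vary by platform), this sketch flags removable drives that carry an autorun.inf so they can be inspected before use:

```python
# Hedged sketch: warn about removable drives carrying autorun.inf.
# Assumes the third-party psutil library; behaviour is Windows-oriented.
import os
import psutil

def scan_removable_drives():
    for part in psutil.disk_partitions(all=False):
        # On Windows, psutil includes "removable" in the options string
        # for USB sticks and similar media.
        if "removable" in part.opts.lower():
            candidate = os.path.join(part.mountpoint, "autorun.inf")
            if os.path.exists(candidate):
                print(f"WARNING: {part.device} carries {candidate} - inspect before use")

if __name__ == "__main__":
    scan_removable_drives()
```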

Fear of phones

The latest threat on the block is mobile malware. Android phones are still the worst, hands down, so if you can possibly avoid it, don’t provide them to staff. iPhones, Windows phones and BlackBerrys are much safer in that regard.

Enforcing a PIN or password on devices is the most basic level of protection and should be employed wherever possible.

It is worth having a look at a mobile device management platform. It can report on what apps are installed on your mobile fleet, allow you to remote-wipe when someone leaves their phone in the back of a taxi, and can help identify devices that are not running the latest operating system version.

Knowing which devices are jailbroken is also a good thing. Remember the RickRoll worm?

If you care about protecting your data when users are sharing it, don’t use open, free services such as Dropbox. The ideal solution is something that can be hosted on premises (so you know where your data is), has optional security mechanisms (so you can control who sees the data), and has killable time-bomb links (so you can pre-determine when data should no longer be available).

The rogue user is another danger area. I have seen a few in my time. One example: a staff member set all his emails to be forwarded externally, and a year after he left the company to work for a competitor, someone worked out that company-sensitive information was still being emailed to him.

At the other end of the scale is someone who left but knew another person’s password. Weeks after leaving the company he logged in via webmail and began abusing staff.

Flashing red lights and sirens should be going off in your brain about this. Policies prohibiting sharing passwords with other staff members, plus a regular forced password change, should prevent these situations.

Beware the mafia

Making sure that accounts are disabled as people walk out the door for the last time is a very small price to pay to avoid a potential high risk of damage.

It is also worth educating users with reminders and tips. It is obvious to us, but a random email asking for their login details will often have users happily clicking a link that goes to “http://yourcompany.russianmafia.com” and entering their company username and password.

An attacker who has targeted a staff member or company can do huge amounts of damage, and companies of all sizes are at risk.

These are just some of the basic approaches you should consider to protect everyone. You want to be thinking about them now rather than when it is too late.

Retrieved from TheRegister

Obamacare exchange contractors had past security lapses

Two of the contractors involved in developing the Affordable Care Act healthcare exchanges have had fairly serious data security issues, a Computerworld review of publicly available information has found.

The incidents involving Quality Software Services (QSS) and Serco are not related to the ongoing glitches in Healthcare.gov, the ACA’s troubled website.

Even so, the information is relevant in light of the ongoing scrutiny of the companies involved with the problem-plagued exchange.

Since going live on October 1, Obamacare’s Healthcare.gov site has been bedeviled by problems that are keeping people from shopping for and enrolling in ACA health insurance plans. So far, none of the problems appear security related.

However, critics say the exchanges and the underlying data hub connecting health insurers to federal eligibility verification systems could face security problems, given the complexity and the sheer volume of highly sensitive personal information flowing through the systems.

Systems integrator Quality Software Services developed the software code for the ACA data services hub and oversaw development of tools to connect the hub to databases at the Internal Revenue Service, the Social Security Administration and other federal agencies.

The company is also charged with helping the Centers for Medicare and Medicaid Services (CMS) maintain and administer the data hub.

In June, the company was the subject of an audit report by the U.S. Department of Health and Human Services Inspector General for failing to adhere to federal government security standards while delivering what appear to be unrelated IT testing services for CMS.

The 16-page report noted that the systems QSS used for testing purposes at CMS did not include controls for protecting against misuse of USB ports and devices as required by the CMS.

Specifically, QSS failed to disable USB ports or put other measures in place for preventing unauthorized use of USB devices and ports, the report said. The company had also not listed essential system services or ports in its security plan, it said.

“As a result of QSS’s insufficient controls over USB ports and devices, the [Personally Identifiable Information] of over 6 million Medicare beneficiaries was at greater risk from malware, inappropriate use, access or theft,” the report warned.

QSS officials did not respond to a request for comment on the report.

However, in a response to the Inspector General’s findings, the company said it revised corporate network access control policies to put restrictions on the use of USB ports and devices. It also said it planned to implement “Read Only” restrictions for USB ports in all laptops, along with controls to prevent USB devices from automatically executing code.
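
For illustration only, and not necessarily how QSS implemented it: on Windows, a machine-wide read-only policy for removable storage can be set through the StorageDevicePolicies registry key, as this sketch shows:

```python
# Illustrative sketch (not QSS's actual remediation): make USB mass
# storage mount read-only on Windows via the StorageDevicePolicies key.
# Requires administrative rights.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\StorageDevicePolicies"

def make_usb_storage_read_only():
    # Create the key if it does not exist, then set WriteProtect=1.
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
    winreg.SetValueEx(key, "WriteProtect", 0, winreg.REG_DWORD, 1)
    winreg.CloseKey(key)

if __name__ == "__main__":
    make_usb_storage_read_only()
```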

Testifying before the U.S. House Committee on Energy and Commerce Subcommittee on Health in September, a QSS executive said the design and development of the ACA Data Services Hub complies with federal security standards.

Services firm Serco in July won a five-year, $1.3 billion contract to process and verify paper applications submitted by individuals seeking health insurance via the online exchanges.

A Serco executive told lawmakers earlier this year that the company has taken many steps to ensure that the data it handles meets CMS and Federal Information Security Management Act security requirements.

Serco made the news in 2012 when it disclosed a data breach that exposed sensitive data of more than 123,000 members of the Thrift Savings Plan (TSP), a $313 billion retirement plan run by the U.S. Federal Retirement Thrift Investment Board.

The exposed data included full names, addresses, Social Security Numbers, financial account information and bank routing information.

The compromise resulted from an intrusion into a single desktop computer used by a Serco employee to support the TSP.

Though the breach occurred in July 2011, Serco did not discover it until April 2012, after being notified about it by the FBI. The incident, and Serco’s subsequent handling of the breach notification process, prompted some lawmakers to demand a clear timeline from the company on the initial intrusion, its subsequent discovery and the steps taken to prevent another breach.

In a lengthy e-mail to Computerworld Tuesday, Serco spokesman Alan Hill downplayed the significance of the breach and maintained that the company has since thoroughly reviewed its security program and infrastructure protection mechanisms. For instance, the company redesigned its network and data management infrastructure and revised security risk management policies, controls and procedures, Hill said.

Serco executives are working with the CMS to ensure that information security controls are built into the ACA paper application processing system, the spokesman said.

“We are committed to applying and enforcing a strong information security program and strict controls across all of our contracts and operations,” Hill said. “Protecting the privacy of consumers through the paper application process is a top priority for Serco and CMS.”

Richard Stiennon, principal at security consultancy IT-Harvest, predicts a lot of finger pointing at the contractors if there’s a breach of ACA systems.

“That said, often having made mistakes in the past will lead to improved coding and security practices in the future. Here’s hoping that is the case,” he said.

However, bringing in a slew of experts to fix the system “will probably lead to short cuts, which usually lead to bad security hygiene,” he said.

Retrieved from ComputerWorld

Controversial cyberthreat bill CISPA may return to Congress

After suffering defeat this spring, CISPA, the controversial legislation aimed at preventing cyberthreats, may be returning to the Senate. According to Mother Jones, two senators are now working on a new version of the bill that looks to address some of the concerns that kept it from initially passing. The goal of the bill will still be to make it easier for private companies to share information with the government regarding cyberthreats; however, the type of information that can be shared will reportedly be narrower in scope this time around.

The bill won’t target Americans’ communications

As the legislation is still being written, it’s not clear exactly how different its updated form will be. Mother Jones reports that Senators Dianne Feinstein (D-CA) and Saxby Chambliss (R-GA) are working together to draft the bill. “The goal is to allow and encourage the sharing only of information related to identifying and protecting against cyberthreats, and not the communications and commerce of Americans,” Feinstein’s office tells Mother Jones in a statement. Feinstein in particular has been a major proponent of facilitating this type of sharing, having also been in support of expanding FISA.

In light of the NSA leaks, Mother Jones suggests that so many companies may have initially stood in support of CISPA — the Cyber Intelligence Sharing and Protection Act — because it could have granted them protections for handing over information as part of PRISM. But those leaks should only make a reintroduction of CISPA, however limited, all the more disconcerting for privacy advocates. NSA director General Keith Alexander even called for the bill earlier this month, saying that legislation must be put in place before the US is hit with a cyberattack. But it has only become more evident since CISPA was defeated how widely the NSA is able to access American citizens’ information as it is, and a new bill would only expand those abilities.

Retrieved from TheVerge

From small to big: 5 tips for managing clouds at scale

The enterprise adoption of cloud computing resources has taken a cautious path. Many organizations have started by running small workloads in the public cloud, reluctant to use the platform for bigger mission-critical workloads.

But once they get comfortable with, say, a test-and-development use case in the cloud, or an outsourced e-mail platform, CIOs and CTOs warm up to the idea of using outsourced cloud resources for more jobs.

At a recent panel of cloud users, though, one thing became clear: Managing a public cloud deployment at small scale is relatively straightforward. The problem comes when that deployment has to scale up. “It gets very complex,” says IDC analyst Mary Turner, who advises companies on cloud management strategies. “In the early stages of cloud we had a lot of test and development, single-purpose, ad-hoc use case. We’re getting to the point where people realize the agility cloud can bring, and now they have to scale it.”

And doing so can be tough. The panelists at the recent Massachusetts Technology Leadership Cloud Summit had some tips and tricks for users, though. Here are five.

-Consolidate account management

Unfortunately, a common way that cloud usage starts in an enterprise is when various departments within an organization spin up public cloud resources behind the backs of their IT departments. Known as “shadow IT,” this can create a scenario where multiple departments each have their own accounts with a public cloud provider, like Amazon Web Services. When the IT department attempts to take control of these services, the IT manager is suddenly juggling multiple accounts.

Instead of managing each of these separately, Amazon Web Services allows users to consolidate them into a single administrative account. By doing this, usage statistics are aggregated into a single billing stream, and users can re-allocate resources among the various accounts, with some limitations. Jason Fuller, head of cloud service delivery at Pegasystems, says that’s an immensely helpful feature when managing multiple accounts within the same organization. It helps not only from a technical standpoint, giving oversight across all the accounts, but from a financial one too, because of the aggregated and streamlined billing.
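
For readers doing this today, here is a minimal boto3 sketch that enumerates linked accounts. Note that the AWS Organizations API used here postdates this article, which describes the older consolidated-billing arrangement:

```python
# Minimal sketch: list the accounts linked under one management account.
# Assumes boto3 and the AWS Organizations API (newer than this article).
import boto3

def list_linked_accounts():
    org = boto3.client("organizations")
    paginator = org.get_paginator("list_accounts")
    for page in paginator.paginate():
        for account in page["Accounts"]:
            print(account["Id"], account["Name"], account["Status"])

if __name__ == "__main__":
    list_linked_accounts()
```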

-Turn the lights off

“Sometimes when I wake up in the morning I go downstairs and my kids have left the lights on all night,” says John O’Keefe, senior director of operations at Acquia, a company that supports open source Drupal, and one of the panelists at the Mass TLC event. He worries about the same thing with his developers using Amazon’s cloud. The beauty of self-service public cloud resources is that they’re incredibly easy to spin up – customers just swipe a credit card and click a few buttons. The problem is those resources don’t get shut off when users are done with them. To prevent this situation, O’Keefe tries to do a daily inventory – if not a more frequent one – to ensure that only the resources that are actively used are “on.” De-provisioning resources is just as easy as spinning them up – someone just needs to remember to do it. AWS has a variety of tools to help customers monitor this, including CloudWatch.
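
A minimal sketch of such an inventory with boto3 (the region and the Owner tag are illustrative assumptions):

```python
# Minimal "lights off" inventory: print every running EC2 instance so
# unused ones can be spotted and stopped. Assumes boto3; the region and
# the Owner tag are illustrative.
import boto3

def report_running_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                print(inst["InstanceId"], inst["InstanceType"],
                      inst["LaunchTime"], tags.get("Owner", "untagged"))

if __name__ == "__main__":
    report_running_instances()
```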

-Right-size your resources

A common pattern for IaaS providers nowadays is to offer an a la carte menu of virtual machine instance sizes and storage platforms. Customers should take care when choosing exactly which resources to use, because careless choices can lead to significant waste.

Usually it’s pretty straightforward to decide between the three main flavors of storage from AWS: Elastic Block Storage (EBS), Simple Storage Service (S3) or Glacier. EBS provides block storage: volumes that attach directly to the compute instances that use them. S3, on the other hand, is a massive file store system that can be used for granular storage of small items and scaled way up to larger files too. Glacier is a long-term storage platform with extremely high durability and low costs, but very (comparatively) long wait times for retrieving data. Within EBS there are also tiers of storage. Customers should match the tier to their performance, reliability and scalability requirements. If you don’t, you may end up overpaying for services you don’t need.

Another key is ensuring virtual machines are right-sized for your workloads. AWS has a catalog of more than a dozen different types of virtual machines, from high input/output VMs to high-memory ones. Evaluate what your application is and what kind of resources it needs, and get the right size VM for it. A variety of third-party AWS monitoring tools can help users make the right decisions. Other companies, like ProfitBricks and CloudSigma, allow customers to define their own VM instances (and pay for them by the minute, instead of by the hour). These features allow customers to customize their VMs at granular levels, as opposed to choosing from a menu of options from AWS.
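
One hedged way to spot an oversized VM is to pull its recent CPU utilization from CloudWatch; in this sketch the instance ID and the 40% threshold are illustrative assumptions:

```python
# Hedged right-sizing sketch: average two weeks of CPU utilization for
# one instance. Assumes boto3; instance ID and threshold are invented.
import datetime
import boto3

def average_cpu(instance_id, days=14, region="us-east-1"):
    cw = boto3.client("cloudwatch", region_name=region)
    end = datetime.datetime.utcnow()
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - datetime.timedelta(days=days),
        EndTime=end,
        Period=3600,  # hourly data points
        Statistics=["Average"],
    )
    points = [p["Average"] for p in stats["Datapoints"]]
    return sum(points) / len(points) if points else None

if __name__ == "__main__":
    cpu = average_cpu("i-0123456789abcdef0")
    if cpu is not None and cpu < 40:
        print(f"Average CPU {cpu:.1f}% - consider a smaller instance type")
```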

-Beware of noisy neighbors

When using a public cloud, you’re typically going to be sharing infrastructure resources with a lot of other users. That’s in part why these IaaS clouds are so cheap: providers can pack many customers’ virtual machines onto the same high-density physical servers, so you may be sharing a server with other companies. For some applications and workloads that may not be a problem. But for others that are performance-sensitive, it can be an issue.

While AWS says that it takes steps to avoid this by hard partitioning resources, users still worry about it. Panelist Greg Arnette, CTO of cloud data archiving company Sonian, says this used to be a bigger issue a few years ago, but network volatility is less common nowadays. Still, some users may be concerned about it. For those who are, customers can pay extra for dedicated resources – isolated areas of the AWS cloud reserved for individual customers. There is also AWS Virtual Private Cloud, now the default setting in EC2, which uses a hardware VPN and allows customers to configure their own virtual networks. The best way to avoid the noisy neighbor, though, is to right-size the VMs to make sure they have enough capacity for the application that’s running. If the VMs don’t deliver their advertised performance, that can be a breach of the service-level agreement (SLA).

-Find efficiencies where you can

Living in the cloud makes techies think differently about how they run their shop. Let’s say, for example, you have hundreds or thousands of files that you’re looking to store in S3 on a daily or weekly basis. AWS makes it easy to upload and download those files one at a time. But in doing so, each one of those transfers is an API call, which AWS users are charged for. Instead, users should bundle their jobs and load files up in blocks through a single API call to reduce API surcharges, Arnette says. Steps like these can be the difference between having an efficient, right-sized cloud and being nickel-and-dimed by your cloud provider.
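
A minimal sketch of that bundling idea (bucket and key names are invented): pack the files into one archive locally, then upload it with a single PUT.

```python
# Minimal bundling sketch: many small files become one archive and one
# S3 PUT request. Assumes boto3; bucket and key names are illustrative.
import tarfile
import boto3

def bundle_and_upload(paths, bucket, key):
    archive = "batch.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in paths:
            tar.add(path)
    # One PUT for the whole batch, rather than one request per file.
    boto3.client("s3").upload_file(archive, bucket, key)

if __name__ == "__main__":
    bundle_and_upload(["logs/a.log", "logs/b.log"],
                      "my-archive-bucket", "batches/2013-10-30.tar.gz")
```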

Retrieved from Network World

Biometrics: Password Life-Slaps & The iPhone 5S

The media frenzy around the fingerprint scanner on the new iPhone 5S has cast a giant spotlight on the biometrics industry. Kathryn Cave investigates the potential and danger of biometrics… and what it all really means.

The email had me bouncing in my seat and ranting at anyone who’d listen: “your record has been suspended as we have received mail back from you marked as ‘no longer at this address’.” This was rubbish… pure unbridled rubbish. I’ve been at the same email (and physical) address for years. Besides, the message didn’t even help me retrieve my password… I still needed to prove who I was.

The following black-zone of filling out CAPTCHA codes, hitting refresh on my email, engaging in pointless correspondence with customer service – and still not getting into the system – left me a dribbling ruin. Anyone who’s experienced it for themselves knows that only a password ‘episode’ can send you on the rapid trajectory from irate barking through to broken defeat in the space of half an hour.

This is a large part of the case for biometrics – the system which uses facial recognition, fingerprints, iris scanning or more recently, palm vein or finger vein technology to verify your real identity. This is a system which (theoretically at least) actually knows who you are, and because of it, could spell the end of a million half-remembered passwords. But this is precisely the problem for many… it all feels a bit Big Brother.

Retrieved from IDG Connect

Using NFC as a secure smart card reader

IBM scientists have developed a new mobile authentication security technology based on the radio standard known as near-field communication (NFC). The technology provides an extra layer of security when using an NFC-enabled device and a contactless smart card to conduct mobile transactions.

A recent report by ABI Research predicts the number of NFC devices in use will exceed 500 million in 2014. In addition, it is expected that 1 billion mobile phone users will use their devices for banking purposes by 2017, making them tantalizing targets for hackers.

IBM scientists in Zurich, also known for inventing an operating system used to power and secure hundreds of millions of smart cards, have developed an additional layer, a so-called two-factor authentication, for securing mobile transactions.

A typical consumer may use two-factor authentication on a computer when asked for both a password and a verification code sent by SMS. IBM scientists are applying the same concept using a PIN and a contactless smart card. The card could be an ATM card or a corporate ID badge.

“Our two-factor authentication technology based on the Advanced Encryption Standard provides a robust security solution with no learning curve,” said Diego Ortiz-Yepes, a mobile security scientist at IBM Research.

The technology works by allowing a user to simply hold his or her card near the NFC reader of the mobile device; after the user keys in a PIN, a one-time code is generated by the card and sent to the server by the mobile device.

The technology is based on end-to-end encryption using the National Institute of Standards & Technology (NIST) AES (Advanced Encryption Standard) scheme. Current technologies on the market require users to carry an additional device, such as a random password generator, which is less convenient and in some instances less secure.
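
To make the flow concrete, here is a conceptual sketch, not IBM’s published protocol, of how a card-held AES key could turn a server challenge and a PIN into a short one-time code (it assumes the third-party cryptography package; all names are illustrative):

```python
# Conceptual sketch of an AES-based one-time code (NOT IBM's actual
# protocol). A key provisioned on the card MACs a fresh server challenge
# together with the user's PIN to yield a short code.
import os
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

def one_time_code(card_key: bytes, challenge: bytes, pin: str) -> str:
    # AES-CMAC binds the code to the key, the challenge and the PIN.
    mac = CMAC(algorithms.AES(card_key))
    mac.update(challenge + pin.encode())
    tag = mac.finalize()
    # Truncate to a six-digit code, as common OTP schemes do.
    return str(int.from_bytes(tag[:4], "big") % 1_000_000).zfill(6)

if __name__ == "__main__":
    key = os.urandom(16)        # provisioned on the smart card
    challenge = os.urandom(8)   # sent by the server per transaction
    print(one_time_code(key, challenge, "1234"))
```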
The technology, which is available today for any NFC-enabled Android 4.0 device, is based on IBM Worklight, a mobile application platform that is part of the IBM MobileFirst portfolio. Future updates will include support for additional NFC-enabled devices based on market trends.
Retrieved from NetworkWorld