Monthly Archives: June 2013


CIA, NSA see benefits in double-barreled approach to the cloud

CIA agent standing between two clouds

The intelligence community is looking to avoid a “tyranny of one” in its strategy for contracting cloud computing services, according to Gus Hunt, CTO of the Central Intelligence Agency. Instead, a multi-vendor approach will help speed up access to computing resources and avoid vendor lock-in, he said.

“A long time ago, we learned [that] if we have one provider for everything, people tend not to act in our best interest,” Hunt said, describing the intelligence community’s desire to have freedom to move from one cloud provider to another as the situation warrants.

Having that flexibility will require deeper interoperability between vendors, a goal that can be best achieved by industry adherence to open data standards, Hunt said June 20 during a session at an AFCEA Emerging Technologies Symposium in Washington, D.C.

The intelligence community has a two-cloud strategy in which the National Security Agency is building an OpenStack secure cloud computing system for the entire intelligence community, while the CIA is looking to tap the resources of a commercial provider to give analysts access to compute resources and the ability to process large data sets.

The CIA’s goal is to give the intelligence analysts access to resources as quickly and easily as if they were swiping a credit card or making an online purchase. To meet that goal, the CIA is looking to work with a commercial cloud provider for the rapid provisioning of compute resources and processing of large data sets.

The commercial cloud will provide infrastructure as a service, offering users access to virtual or physical servers and other computing resources such as storage. “We are not going to tell anybody what to bring or what software to run,” Hunt said.

Eventually, the commercial cloud will provide software-as-a-service, which typically includes customer relationship management applications, e-mail, collaboration and virtual desktops. “Assuming we can get to a commercial [contract] award,” users can use the commercial cloud as a development and test environment, he said.

(Hunt would not discuss the CIA’s 10-year, $600 million cloud computing contract with Amazon Web Services, first reported by FCW, and the resulting IBM bid protest.)

Hunt pointed out that most of the work the agency does is unclassified. But when analysts need to, they should be able to forklift their workload and drop it on the classified side, he said.

The need for more flexibility between classified and unclassified work could be accommodated by the ability to switch back and forth dynamically between the CIA commercial cloud and the OpenStack cloud infrastructure being developed under the auspices of the NSA.

The NSA’s intelligence cloud will provide a powerful channel for platform-as-a-service, which typically involves the hosting of databases, Web servers and development tools. Ultimately, the objective is for users to not know (or care) which clouds their workloads are running on.

The aim behind the two-cloud strategy is to speed up innovation and lower costs, Hunt said, noting that the commercial cloud would comply with Federal Risk and Authorization Management Program security controls, but that the agency needs security that goes above and beyond FedRAMP.

A single cloud broker would then handle both architectures, and if a user wants to add capacity or analyze data, he won’t have to worry about where that happens.
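To make that broker idea concrete, here is a purely illustrative sketch of the sort of routing decision such a broker might make. The cloud names and workload fields are hypothetical, not the intelligence community's actual design.

```python
# Purely illustrative: a toy "cloud broker" that decides where a workload runs
# so the analyst never has to. All names below are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    classified: bool  # classified work stays on the intelligence-community OpenStack cloud

def route(workload: Workload) -> str:
    """Return the target cloud; callers add capacity or analyze data without caring where."""
    return "ic-openstack-cloud" if workload.classified else "commercial-iaas-cloud"

for w in (Workload("open-source-analysis", classified=False),
          Workload("classified-imagery", classified=True)):
    print(f"{w.name} -> {route(w)}")
```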

But speed does matter. “Latency breeds contempt,” Hunt said; nothing makes intelligence analysts angrier than workloads taking too long to execute. The world is moving toward more index and memory-type systems to get that speed and performance, he said.

Retrieved from GCN


North and South Korea websites shut amid hacking alert

 


South Korea’s Information Technology Research Institute in Seoul. The centre launched an internet-security training scheme amid growing concern about the country’s vulnerability to cyber-attack. Photograph: AFP/Getty

Several government and media websites in South and North Korea were shut down for several hours on the 63rd anniversary of the start of the Korean war. Seoul said its sites had been hacked and alerted people to take security measures against cyber-attacks.

It was not immediately clear whether the shutdown of North Korean websites, including those belonging to Air Koryo and the Rodong Sinmun newspaper, was triggered by hacking. The Rodong Sinmun, Uriminzokkiri and Naenara websites were operational again a few hours later.

South Korean national intelligence service officials were investigating the cause of the shutdown of the North Korean websites. Pyongyang did not make any immediate comment.

Seoul said it was also investigating attacks on the websites of the presidential Blue House and the prime minister’s office as well as some media servers.

The attacks in South Korea did not appear to be as serious as a cyber-attack in March, which shut down tens of thousands of computers and servers at broadcasters and banks. There were no initial reports that banks had been hit or that sensitive military or other key infrastructure had been compromised.

It was not immediately clear who was responsible, and the neighbours have long traded accusations over cyber-attacks.

Several Twitter users who purported to be part of a global hackers’ collective claimed they attacked North Korean websites. Shin Hong-soon, an official at South Korea’s science ministry in charge of online security, said the government was not able to confirm whether these hackers were linked to the attack on South Korean websites.

Officials in Seoul blamed Pyongyang for the attacks in March and said an initial investigation pointed to a North Korean military-run spy agency as the culprit.

In recent weeks the North has pushed for talks with Washington, following a period of soaring tensions on the Korean peninsula during which Pyongyang made threats over UN sanctions and US-South Korean military drills.

Investigators detected similarities between the cyber-attack in March and previous hacking attributed to the North Korean spy agency, including the recycling of 30 of 76 malware programs used in the attack, South Korea’s internet security agency said.

The cyber-attack on 20 March struck 48,000 computers and servers, hampering banks for two to five days. Officials said no bank records or personal data were compromised. Staff at the TV broadcasters KBS, MBC and YTN were unable to log on to news systems for several days, although coverage continued. No government, military or infrastructure targets had been affected.

South Korea’s national intelligence service said the North was behind a denial of service attack in 2009 that affected dozens of websites, including that of the presidential office. Seoul also believes Pyongyang was responsible for attacks on servers of Nonghyup bank in 2011 and Joongang Ilbo, a national daily newspaper, in 2012.

Pyongyang blamed its neighbour and the US for cyber-attacks in March that temporarily disabled internet access and websites in North Korea.

Experts believe North Korea trains large teams of “cyber-warriors”, and say the South and its allies should be braced for attacks on infrastructure and military systems. If the inter-Korean conflict were to move into cyberspace, South Korea’s deeply wired society would be more widely affected than North Korea’s, which largely remains offline.

Retrieved from Guardian


EU court says Google doesn’t have to delete sensitive data from search results (update)

Google New York Chelsea Office (STOCK)

The Advocate General at the European Court of Justice has said that Google should not have to delete sensitive information from its search results. In a statement published today, EU adviser Niilo Jääskinen said that while Google must comply with local data protection laws, it doesn’t need to remove sensitive information that is lawfully produced by a third party. The opinion suggests that in some cases, protecting freedom of speech might overrule rights to privacy. Google is already facing fines in both France and Spain if it doesn’t amend its privacy policy.



“Search engine service providers are not responsible, on the basis of the Data Protection Directive, for personal data appearing on web pages they process,” the court said on behalf of Jääskinen. The opinion comes after a Spanish man lodged a complaint against Google and Google Spain in 2010, requesting that they take down details of an auction notice that was placed on his home. A Spanish newspaper published the story in print and later submitted a digital version of the article, which was then indexed by Google.

The opinion states that Google doesn’t have any control over the content included on third-party web pages, meaning it “cannot in law or in fact fulfil the obligations” that it is lawfully asked to. The company may be forced to block access to websites with illegal content — which might infringe copyright or contain libellous information — but in this case the information it displayed had already entered the public domain. While the Advocate General’s statement isn’t binding, the proposal is generally followed by European judges in cases such as this. A final judgement is expected before the end of the year.

Update: In a statement, Bill Echikson, Head of Free Expression at Google EMEA, said:

“This is a good opinion for free expression. We’re glad to see it supports our long-held view that requiring search engines to suppress ‘legitimate and legal information’ would amount to censorship.”

Retrieved from The Verge


33Mail now helps you beat spammers by responding anonymously via email aliases


It’s been almost two years since we last caught up with 33Mail, the online service that helps you beat email spam by giving you disposable email addresses.

Just to recap, 33Mail provides ‘alias’ email addresses for users to include in online forms or anywhere they feel giving out a real address may attract unsolicited emails further down the line. While 33Mail does redirect all ‘alias’ emails to your real address, one of the ‘flaws’ thus far has been that if you chose to respond to an email you received, there was no way to continue concealing your true email address. This has now been remedied.

33Mail: Hide from spammers

33Mail recently rolled out a much-needed update to its interface (33Mail “puts practicality way ahead of beauty” we noted in our previous coverage), but it’s the anonymous email replies feature that’s perhaps the most notable development.

In a nutshell, it means users can now communicate back-and-forth with anyone – it could be related to an advert they placed for a new room-mate, an old bike they’re trying to sell through the classifieds or, indeed, any company they believe could place them on a marketing list.

Once you’ve signed up for a 33Mail account, you can drag a bookmarklet to your browser and it will automatically create an alias for you specific to the website you’re on. The format defaults to: spamsite.com@username.33Mail.com.


But the beauty of 33Mail is you can decide on any alias you want, and the first time someone responds to it by email, the address is created. So, say you’re looking for someone to fill your spare room, you can just pluck something like ‘staywithme@username.33Mail.com’ out of thin air and place it in your ad – you don’t have to physically create anything.
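As a rough illustration of that catch-all behaviour, the sketch below (with hypothetical names, not 33Mail's actual code) shows how any address under a user's subdomain can be accepted and recorded the first time mail arrives for it.

```python
# Illustrative sketch: a catch-all forwarder accepts mail for any address under
# the user's subdomain and records the alias the first time a message arrives,
# so nothing has to be created in advance. Not 33Mail's actual implementation.

REAL_ADDRESS = "me@example.com"   # hypothetical destination address
known_aliases = {}                # alias local-part -> real address

def handle_incoming(to_address, body):
    local_part, domain = to_address.lower().split("@", 1)
    if domain != "username.33mail.com":
        raise ValueError("not our domain")
    known_aliases.setdefault(local_part, REAL_ADDRESS)   # first use creates the alias
    return f"forwarding to {known_aliases[local_part]}: {body}"

print(handle_incoming("staywithme@username.33Mail.com", "About the spare room..."))
```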

For the time being, anonymous replies are still technically in beta and free for all users, though it seems likely that the feature will eventually be available only to premium users.

Free users get a 10MB monthly bandwidth limit, which equates to around 500 emails a month. Premium users, however, pay $12 a year and get a 50MB monthly bandwidth limit and the option to buy customized domain names.
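Running the numbers, the free tier's 500-email figure implies an average message of roughly 20KB, which also suggests what the premium allowance buys:

```python
# The quoted limits imply an average email of roughly 20KB, which would put the
# premium tier at around 2,500 forwarded emails a month at the same average size.

free_limit_mb, free_emails = 10, 500
premium_limit_mb = 50

avg_email_kb = free_limit_mb * 1024 / free_emails          # ~20 KB per email
premium_emails = premium_limit_mb * 1024 / avg_email_kb    # ~2,500 emails
print(f"average email: {avg_email_kb:.0f} KB, premium allowance: {premium_emails:.0f} emails")
```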

For now, anyone can enable the anonymous reply setting via ‘Account Info’, where they can also set any name for the recipient to see when they respond – this could be their real name, a made-up name or whatever they want.


The more sites you sign up to using a 33Mail alias, the more aliases will be displayed in your dashboard, and you can block any address at any point simply by hitting the ‘block’ button.


When you receive an email to an alias, 33Mail relays it through to your real address, and when you reply it looks as though it’s being sent from your real address too, but 33Mail works its magic to hide it from the recipient.
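Conceptually, that “magic” amounts to rewriting the sender headers on the way out. The snippet below is a hypothetical sketch of the idea, not 33Mail's implementation:

```python
# Hypothetical sketch of the reply trick: before relaying the user's reply, the
# service swaps the real From address back to the alias, so the other party only
# ever sees the alias.

def rewrite_reply(headers, alias):
    rewritten = dict(headers)
    rewritten["From"] = alias        # hide the real sender behind the alias
    rewritten["Reply-To"] = alias    # keep any further replies on the alias too
    return rewritten

reply = {"From": "me@example.com", "To": "buyer@example.org", "Subject": "Re: old bike"}
print(rewrite_reply(reply, "bikesale@username.33mail.com"))
```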

It’s a great idea for sure, one that’s similar in concept to SquadMail, which is a little bit like Dropbox for email, in that it lets you create and share temporary (or permanent) email folders, each with their own unique SquadMail-branded email handle.

33Mail says it now forwards almost 250,000 emails per month to its users, though of course this gives no indication as to the popularity of the service on the whole. That said, we’re told that it’s adding around 1,000 new users each month.

The new 33Mail is available to use on the Web now.

Retrieved from The Next Web


Tiny 3D printed battery could power devices of the future

A lithium ion battery the size of a grain of sand printed out of electrochemical inks

A team of university researchers has taken 3D printing to the nanoscale, printing a lithium ion battery the size of a grain of sand and opening up new possibilities for tiny medical, communications and other devices.

Based at Harvard University and the University of Illinois at Urbana-Champaign, the researchers were able to print interlocked stacks of hair-thin “inks” with the chemical and electrical properties needed for the batteries, according to a report by the Harvard School of Engineering and Applied Sciences.

It’s a breakthrough in the development of microbatteries, which to date have used thin films of solid materials that lacked the juice to power devices such as miniaturized medical implants, insect-sized robots and minuscule cameras, the researchers said. Small, 3D-printed batteries also could help propel development of wearable technology, decreasing the weight of products like Google Glass or smartphone wristwatches.

The research team tackled the power problem by using a custom 3D printer to produce precise, tight stacks of ultrathin battery electrodes. The key was developing the printable electrochemical inks — one for the anode and one for the cathode, each made with nanoparticles of lithium ion metal oxide compounds, the researchers said.

After printing, they put the stacks into a container, added an electrolyte solution, then measured the power of the finished product. “The electrochemical performance is comparable to commercial batteries in terms of charge and discharge rate, cycle life and energy densities,” said Shen Dillon, a collaborator on the project led by Jennifer Lewis. “We’re just able to achieve this on a much smaller scale.”

Lewis, currently a professor at Harvard, led the project while at the University of Illinois at Urbana-Champaign, in collaboration with Dillon. They have published their results in the journal Advanced Materials. 3D printing (or additive manufacturing), once used mainly for prototyping circuit boards and other electronics, has exploded in the last few years, being used for everything from aircraft to flexible displays and, famously, guns. The Army 3D prints gear for troops on the spot in Afghanistan. NASA wants to 3D print food on long space missions. And the Obama administration has touted it as the future of manufacturing.

Its possibilities are only likely to grow. Prices for 3D printers are coming down, making them available to innovators with any kind of budget. One engineer at the Massachusetts Institute of Technology is even pushing into 4D, researching how to print objects that change over time.

While many 3D printing projects are going big — even to the point of planning to print an entire house  — taking it to the nanoscale could have an even bigger impact. With the Harvard/UI team’s tiny batteries on board, the possibilities for microscopic implants, sensors, cameras, wearable computers and other gear just grew.

Retrieved from GCN


Millions exposed by Facebook data glitch


Facebook boss Mark Zuckerberg
Facebook said the impact of the data disclosure was “minimal”

Personal details of about six million people have been inadvertently exposed by a bug in Facebook’s data archive.

The bug meant email addresses and telephone numbers were accidentally shared with people who would not otherwise have had access to the information.

Facebook said that, so far, there was no evidence the exposed data had been exploited for malicious ends.

It said it was “upset and embarrassed” by the bug, which was found by a programmer outside the company.

Bug bounty

The data exposure came about because of the way that Facebook handled contact lists and address books uploaded to the social network, it said in a security advisory.

Typically, it said, it analysed the names and contact details on those lists so it could make friend recommendations and put people in touch with those they knew.

The bug meant some of the information Facebook generated during that checking process was stored alongside the uploaded contact lists and address books.

That meant, said Facebook, that when someone had downloaded their profile this extra data had travelled with it, letting people see contact details that had not been explicitly shared with them.
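In other words, data derived during matching was stored next to the upload and then swept up by the export. The toy sketch below illustrates that failure pattern with made-up names; it is not Facebook's code.

```python
# Toy illustration of the failure pattern: details derived while matching an
# uploaded address book are stored next to the upload, so a later account export
# carries contact data the uploader was never explicitly given.

uploaded_contacts = [{"name": "Alice", "email": "alice@example.com"}]

def match_contacts(contacts):
    enriched = []
    for contact in contacts:
        record = dict(contact)
        # Bug: extra details discovered during friend-matching are saved with the upload.
        record["matched_phone"] = "+1-555-0100"   # a number Alice never shared with this user
        enriched.append(record)
    return enriched

stored_with_account = match_contacts(uploaded_contacts)

def export_profile():
    # The export naively includes everything stored with the account, so the
    # derived phone number travels with the download.
    return {"contact_lists": stored_with_account}

print(export_profile())
```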

An investigation into the bug showed that contact details for about six million people were inadvertently shared in this way. Despite this, Facebook said the “practical impact” had been small because information was most likely to have been shared with people who already knew the affected individuals.

The bug had now been fixed, it added.

Facebook was alerted to the bug by a member of its “White Hat” program who checks the site’s code for glitches and other loopholes. A bounty for the bug has been paid to the programmer who found it.

Retrieved from BBC news

 


Apple Store suffers partial outages due to server issues


Worried about the safety of your Google Docs? Don’t, SafeGDocs for Firefox has you covered


If you’re ever concerned about the safety of using Google Docs as your go-to file storage service for work or personal documents, SafeGDocs might be just what you’re looking for if you’re a Firefox user.

Researchers from Gradiant (the Galician Research and Development Center in Advanced Telecommunications) have been working with Isis Innovation, part of the University of Oxford in the UK, to develop a free add-on for Firefox that automatically encrypts (and decrypts) your Google Docs files, so you don’t have to worry about ‘man-in-the-middle’ attacks or anyone accessing your account, at least not where your Docs are concerned.

The system works by first installing the Firefox add-on. When we tested this, the .xpi file simply downloaded rather than installing directly, despite our using the Firefox browser; it then had to be installed by double-clicking it and choosing Firefox to open the file.

Once that process has been navigated, you simply need to authorize SafeGDocs’ access to your Google account and away you go. I should note that as part of the sign-up process you must enter a valid personal Gmail address; Google Apps accounts are not supported.
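Under the hood, the appeal of an add-on like this is ordinary client-side encryption: the file is encrypted before it leaves your machine and decrypted when it comes back, so the storage provider only ever sees ciphertext. Here is a minimal sketch of that general concept using Python's cryptography package; SafeGDocs' actual key handling and cipher choices may well differ.

```python
# Minimal sketch of client-side encryption with the `cryptography` package's
# Fernet recipe. Illustrates the general concept only, not SafeGDocs itself.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the user, not the provider, keeps this
cipher = Fernet(key)

document = b"Draft contract - do not share"
stored = cipher.encrypt(document)    # what the storage service would actually see
print(stored[:32], b"...")

print(cipher.decrypt(stored))        # recovered locally after download
```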

 


As Big Data Explodes, Are You Ready For Yottabytes?

The inescapable truth about big data, the thing you must plan for, is that it just keeps getting bigger. As transactions, electronic records, and images flow in by the millions, terabytes grow into petabytes, which swell into exabytes. Next come zettabytes and, beyond those, yottabytes.

A yottabyte is a billion petabytes. Most calculators can’t even display a number of that size, yet the federal government’s most ambitious research efforts are already moving in that direction. In April, the White House announced a new scientific program, called the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, to “map” the human brain. Francis Collins, the director of the National Institutes of Health, said the project, which was launched with $100 million in initial funding, could eventually entail yottabytes of data.

And earlier this year, the US Department of Defense solicited bids for up to 4 exabytes of storage, to be used for image files generated by satellites and drones. That’s right—4 exabytes! The contract award has been put on hold temporarily as the Pentagon weighs its options, but the request for proposals is a sign of where things are heading.

Businesses also are racing to capitalize on the vast amounts of data they’re generating from internal operations, customer interactions, and many other sources that, when analyzed, provide actionable insights. An important first step in scoping out these big data projects is to calculate how much data you’ve got—then multiply by a thousand.

If you think I’m exaggerating, I’m not. It’s easy to underestimate just how much data is really pouring into your company. Businesses are collecting more data, new types of data, and bulkier data, and it’s coming from new and unforeseen sources. Before you know it, your company’s all-encompassing data store isn’t just two or three times what it had been; it’s a hundred times more, then a thousand.

Not that long ago, the benchmark for databases was a terabyte, or a trillion bytes. Say you had a 1 terabyte database and it doubled in size every year—a robust growth rate, but not unheard of these days. That system would exceed a petabyte (a thousand terabytes) in 10 years.
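The arithmetic behind that claim is easy to check, and the same doubling keeps going: past an exabyte around year 20 and a zettabyte around year 30. A few lines of Python make the point:

```python
# A 1 TB database doubling yearly crosses a petabyte (1,000 TB) in year 10, an
# exabyte around year 20, and a zettabyte around year 30.

size_tb = 1.0
thresholds = {"petabyte": 1e3, "exabyte": 1e6, "zettabyte": 1e9, "yottabyte": 1e12}

for year in range(1, 41):
    size_tb *= 2
    for name, tb in list(thresholds.items()):
        if size_tb >= tb:
            print(f"year {year}: past a {name} ({size_tb:,.0f} TB)")
            del thresholds[name]
```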

And many businesses are accumulating data even faster. For example, data is doubling every six months at Novation, a healthcare supply contracting company, according to Alex Latham, the company’s vice president of e-business and systems development. Novation has deployed Oracle Exadata Database Machine and Oracle’s Sun ZFS Storage appliance products to scale linearly—in other words, without any slowdown in performance—as data volumes keep growing. (In this short video interview, Latham explains the business strategy behind Novation’s tech investment.)

Terabytes are still the norm in most places, but a growing number of data-intensive businesses and government agencies are pushing into the petabyte realm. In the latest survey of the Independent Oracle Users Group, 5 percent of respondents said their organizations were managing 1 to 10 petabytes of data, and 6 percent had more than 10 petabytes. You can find the full results of the survey, titled “Big Data, Big Challenges, Big Opportunities,” here.

These burgeoning databases are forcing CIOs to rethink their IT infrastructures. Turkcell, the leading mobile communications and technology company in Turkey, has also turned to Oracle Exadata Database Machine, which combines advanced compression, flash memory, and other performance-boosting features, to condense 1.2 petabytes of data into 100 terabytes for speedier analysis and reporting.

Envisioning a Yottabyte

Some of these big data projects involve public-private partnerships, making best practices of utmost importance as petabytes of information are stored and shared. On the new federal brain-mapping initiative, the National Institutes of Health is collaborating with other government agencies, businesses, foundations, and neuroscience researchers, including the Allen Institute, the Howard Hughes Medical Institute, the Kavli Foundation, and the Salk Institute for Biological Studies.

Space exploration and national intelligence are other government missions soon to generate yottabytes of data. The National Security Agency’s new 1-million-square-foot data center in Utah will reportedly be capable of storing a yottabyte.

That brings up a fascinating question: Just how much storage media and real-world physical space are necessary to house so much data that a trillion bytes are considered teensy-weensy? By one estimate, a zettabyte (that’s 10 to the twenty-first power) of data is the equivalent of all of the grains of sand on all of Earth’s beaches.

Of course, IT pros in business and government manage data centers, not beachfront, so the real question is how can they possibly cram so much raw information into their data centers, and do so when budget pressures are forcing them to find ways to consolidate, not expand, those facilities?

The answer is to optimize big data systems to do more with less—actually much, much more with far less. I mentioned earlier that mobile communications company Turkcell is churning out analysis and reports nearly 10 times faster than before. What I didn’t say was that, in the process, the company also shrank its floor space requirements by 90 percent and energy consumption by 80 percent through its investment in Oracle Exadata Database Machine, which is tuned for these workloads.

Businesses will find that there are a growing number of IT platforms designed for petabyte and even exabyte workloads. A case in point is Oracle’s StorageTek SL8500 modular library system, the world’s first exabyte storage system. And if one isn’t enough, 32 of those systems can be connected to create 33.8 exabytes of storage managed through a single interface.

So, as your organization generates, collects, and manages terabytes upon terabytes of data, and pursues an analytics strategy to take advantage of all of that pent-up business value, don’t underestimate how quickly it adds up. Think about all of the grains of sand on all of Earth’s beaches, and remember: The goal is to build sand castles, not get buried by the sand.

Retrieved from Forbes

 


Take That, SCOTUS: Appeals Court Reinstates Patent On Video-Ad Technology

Back at ya, SCOTUS. (Photo credit: Wikipedia)

The specialized court in Washington that handles patent appeals has reversed, for a second time, a ruling invalidating a patent on embedded Internet video ads,  setting up a conflict with a U.S. Supreme Court that seems bent on reining in overly broad patents on business methods.

Coming just days after the Supreme Court’s decision invalidating gene patents, in a case the high court had already kicked back to the Federal Circuit once for getting it wrong, the decision in Ultramercial v. Hulu could be seen as a show of defiance by the Washington appeals court.

The case, which no longer involves Hulu, revolves around a patent on a method for inserting ads in free online videos so that viewers must watch them in order to proceed with their entertainment. Leaving aside the obnoxiousness of the technology in question, a lower court held that the concept wasn’t eligible to be patented in the first place because it merely represented a set of abstract ideas assembled into a process.

The Federal Circuit disagreed, saying Patent No. 7,346,545, when it was filed in 2001, was a significant advance on banner ads and other methods of making money off of Internet content. The lower court erred by requiring Ultramercial to prove patentability, the appeals court said, since “that is presumed” under the law.

The patent required an “intricate and complex computer program,” the court said, and wasn’t overbroad. The limitations within the patent prevent it from covering all forms of making money from Internet videos.

“It does not say ‘sell advertising using a computer,’ and so there is no risk of preempting all forms of advertising, let alone advertising on the Internet,” the appeals court said.

By reversing the court for a second time and remanding the case for review, the Federal Circuit reasserted its view that rejecting patents as ineligible before a more intensive legal inquiry is a mistake. Technology is constantly evolving in new and unexpected ways, the court said, so it is a bad idea for courts to set up rigid rules covering what can and cannot be patented.

The Supreme Court has attempted to do just that, of course, with its rulings on gene patents and last year’s Mayo v. Prometheus decision involving a method for treating chronic diseases. In both cases, the high court reversed the Federal Circuit. In Mayo, as with the Ultramercial case, the appeals court reconsidered the Supreme Court’s decision but came to the same conclusion in favor of patentability.

In its decision, the court attacked the practice of breaking patents down into their component parts to see if they consist of a set of old or unpatentable ideas. Courts must look at the patent as a whole, the Federal Circuit said, even if each individual step relies on old technology or an abstract idea.

“Indeed, the abstract idea may be of central importance to the invention—the question for patent eligibility is whether the claim contains limitations that meaningfully tie that abstract idea to an actual application of that idea through meaningful limitations,” the court said. “This analysis is not easy, but potentially wrought with the risk of subjectivity and hindsight evaluations.”

Perhaps the Supreme Court will find it easier to decide whether a patent on inserting ads you can’t fast-forward through in online videos is a breakthrough deserving of patent protection.

Retrieved from Forbes