Category Archives: Home User

Apple to FBI: Please Hack Us Again

Now that the FBI has figured out a way to hack into a locked iPhone, Apple wants the feds to pull the same trick a second time, in a different case.

The FBI revealed this week that it no longer needs Apple’s help to extract information from the iPhone used by one of the dead San Bernardino killers. A mysterious third party, whom officials have not yet identified, came forward recently with a method that proved successful in retrieving information on the phone without destroying it.

But that hardly settles the fight between Apple and the feds. The tech giant now wants to know how the FBI cracked its seemingly secure device, and Apple is using another active court case—this one in New York—to do it.

Retrieved from The Daily Beast

Banking malware infections rise to highest level since 2002

Malicious software aimed at stealing online banking credentials surged in the third quarter of this year to a level not seen since 2002, according to a new report from Trend Micro.

The security vendor said it counted more than 200,000 new infections from July through September, the highest number it has recorded in a three-month period in 11 years. Between April and June, Trend counted 146,000 infections.

The infections were less concentrated in Europe and the Americas and were more distributed throughout the globe, indicating that cybercriminals are diversifying the banking customers they target.

The most affected country was the U.S., which accounted for 23 percent of the new infections, followed by Brazil at 16 percent and Japan at 12 percent.

Other top countries affected included India, Australia, France, Germany, Vietnam, Taiwan and Mexico, Trend Micro’s report said.

The malware found was usually ZeuS, also known as Zbot, which dates back to 2006.

Cybercriminals plant ZeuS on websites, which then attack visitors and install the malware if the visiting computer has a software vulnerability. It can then steal online banking credentials and send the details to a remote server, among many other malicious functions.

Trend Micro noted that it also saw KINS, a malicious software program modeled after ZeuS, along with Citadel, a banking credential stealer widely seen in Japan and elsewhere.

Retrieved from ComputerWorld

5 disturbing BYOD lessons from the FAA’s in-flight electronics announcement

Last week, the Federal Aviation Administration (FAA) moved to let passengers use their smartphones and tablets during airline take-offs and landings. At first glance, this seems like a victory for reasonableness, productivity, and looking out for the rights of technology end users. Even The New York Times said “the agency won unusually broad praise from pilots, flight attendants and members of Congress, along with passengers.”

But a closer look reveals that the FAA has in fact unwittingly written a guide of what NOT to do when creating a Bring Your Own Device (BYOD) policy – which is essentially what this is. The FAA’s policy may be a step in the right direction for fliers, but it remains plagued with vague instructions, unsupported reasoning, and painfully convoluted processes. Smart IT departments can learn some useful lessons as they wrestle with managing how their users are supposed to work with various devices and access corporate networks.

1. It’s confusing. Any techie worth his pocket protector knows that workable policies have to be clear to everyone. How can users do what they’re supposed to do if they can’t even figure out what they’re supposed to do? And the FAA’s policy has so many caveats, exceptions, and implementation variables that even flight attendants don’t have a clue about what’s acceptable – much less passengers.

2. It’s unenforceable. Say what you will about the old rules, at least you could tell from the aisle whether someone was using a banned device. But you can’t tell if a device is in airplane mode without making the user hand it over – which isn’t supposed to be part of the new rules. A good thing, too: Can you imagine a corporate BYOD policy that let IT folks demand execs hand over their smartphone like policemen asking for your papers?

3. Its logical underpinnings are suspect. Banning the use of portable electronics during take-off and landing was based on the possibility that they could cause interference with airplane navigation systems. It turned out there was little evidence of that. But the FAA still says cellphone calls could disrupt radio communications, and the new approach actually makes it more likely for that to happen. If these things really are dangerous – and all the exceptions seem to indicate that they could be in certain cases – then why are we so eager to let people do this?

4. It throws ultimate responsibility onto the users. Instead of setting a single policy, the FAA is now requiring every airline to get an individual safety certification for each type of airplane it flies. Planes that may be more vulnerable to radio interference may have different rules. Oh, and if there’s bad weather and low visibility (estimated to be about 1% of the time), the airlines may be required to make passengers shut down their devices. That’s right: the rules can vary by airline, by aircraft and even by the weather, and it’s ultimately up to passengers to keep track.

5. It undermines respect for other rules. This whole thing is a complete debacle, from the patently ridiculous old rules to the confusing, illogical, and unenforceable new ones that airlines are required to interpret on the fly. It all adds up to making the FAA and the airlines look stupid and out of touch, and erodes passengers’ willingness to follow other – presumably more important – regulations. And that could have truly disastrous consequences.


Retrieved from NetworkWorld

Social Media Policy Offers Dos and Don’ts for Employees

Is social media part of your job? Many employees, not just those in marketing, are being asked to use their personal social networking accounts on behalf of their companies.

Social media works best when companies target a social network — such as Facebook, Twitter, Tumblr, Instagram and Pinterest — with their marketing message in hopes of reaching and piquing the interest of social media influencers, which, in turn, can lead to a viral buzz with massive exposure. Nearly every employee needs to participate in order to pull it off.

Echoing this sentiment, Xerox’s social media policy succinctly states the following: “Individual interactions represent a new model, not mass communications, but masses of communicators.”

Social Media Can Be Risky Business

For companies, there’s an element of danger in asking employees to spout off on social networks. After all, the public corporate image is at risk. Employees also risk offending the company and losing their jobs. Social media in the enterprise is littered with tales of employees getting sacked.

There needs to be clear communication between employer and employee on how employees should behave on social networks, in the form of a written policy, not just for employees’ safety but also to make them more effective. We’re still in the heady days of the social revolution, where missteps happen all the time.

Xerox, for instance, has a social media policy for employees with social media as part of their formal job description, but it apparently didn’t save a call center employee who says she was fired for an Instagram posting. DeMetra “Meech” Christopher claims she never saw the social media policy because social media wasn’t officially part of her job.

Nevertheless, Xerox’s social media policy, which supplements a general Code of Business Conduct policy, provides a starting point for better communication between employer and employee in the social revolution. It’s also worth a closer look, because it helps employees become better social networkers.

The 10-page social media policy opens with general ethical guidelines and goes on to cover best practices in blogging, microblogging (e.g., Twitter), message boards, social networking and video-audio sharing.

Among the general guidelines, Xerox employees are urged to get training in search optimization principles from a local Web expert. When discussing Xerox-related matters that might encourage someone to buy Xerox products or services, employees are required by the Federal Trade Commission to clearly identify themselves.

If employees are publishing content outside of Xerox, they should use a disclaimer such as, “The postings on this site are my own and don’t necessarily represent Xerox’s position, strategies or opinions.”

Employees need to write in the first person to give a sense of individual accountability. They shouldn’t become embroiled in public disputes or use sarcasm, ethnic slurs, personal insults, obscenity, “or engage in any conduct that would not be acceptable in Xerox’s workplace,” states the policy. “You should also show proper consideration for others’ privacy and for topics that may be considered objectionable or very sensitive — such as politics and religion.”

Xerox serves up helpful tips for employees to become better bloggers, social networkers and contributors on message boards. The writing tips read like an English 101 composition class: have an objective before tapping the keyboard, use your natural voice and always tell the truth. Employees should act professionally when confronted with inaccurate information or negative comments. Also, don’t write when you’re unhappy, the policy advises.

Tips for Twitter, Facebook and YouTube

Micro-blogging tips are a little more straightforward, such as understanding that tweets can become part of your permanent record, and not commenting on every single post lest followers see you as some sort of Big Brother.

Employees should give credit to people who retweet their messages, while avoiding too much marketing hype, which will turn off followers. “Don’t make a professional account too personal, but don’t lack personal touch either,” the policy says.

On Facebook, employees should visit other Xerox pages regularly and engage with the content. “By commenting or clicking ‘like’ on postings, your friends see your activity in their newsfeeds and, as a result, may become a fan of other Xerox-related pages,” the policy says.

When shooting video for YouTube, employees shouldn’t post personal information about themselves or others. The videos should have the same tone of voice and look-and-feel as other Xerox videos. Titles should have searchable keywords, and videos need to be placed in similar categories (probably next to competitors’ videos), so that videos can be found. Videos should have catchy descriptions, as well as a link back to the Xerox website.

Lastly, keep them short. “Be mindful of appropriate video length,” the policy says. “Effective videos can be as short as 30 seconds. The longer a video, the tougher it is to keep viewers engaged.”

What If Your Job Doesn’t Involve Social Media?

Employees who work with social media as part of their jobs can learn the basic rules from policies such as Xerox’s, but policies need to go further both in depth and breadth. Perhaps a social media policy needs to be created for all employees regardless of job function.

As the line between work life and social life, physical world and digital world increasingly blurs, employers and employees need to know what they can and cannot do with social media — and, of course, how to use social media effectively.

Retrieved from NetworkWorld

New malware variant suggests cybercriminals targeting SAP users

A new variant of a Trojan program that targets online banking accounts also contains code that checks whether infected computers have SAP client applications installed, suggesting that attackers might target SAP systems in the future.

The malware was discovered a few weeks ago by Russian antivirus company Doctor Web, which shared it with researchers from ERPScan, a developer of security monitoring products for SAP systems.

“We’ve analyzed the malware and all it does right now is to check which systems have SAP applications installed,” said Alexander Polyakov, chief technology officer at ERPScan. “However, this might be the beginning for future attacks.”

When malware does this type of reconnaissance to see if particular software is installed, the attackers either plan to sell access to those infected computers to other cybercriminals interested in exploiting that software or they intend to exploit it themselves at a later time, the researcher said.

Polyakov presented the risks of such attacks and others against SAP systems at the RSA Europe security conference in Amsterdam on Thursday.

To his knowledge, this is the first piece of malware targeting SAP client software that wasn’t created as a proof-of-concept by researchers, but by real cybercriminals.

SAP client applications running on workstations have configuration files that can be easily read and contain the IP addresses of the SAP servers they connect to. Attackers can also hook into the application processes and sniff SAP user passwords, or read them from configuration files and GUI automation scripts, Polyakov said.
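
To make the risk concrete: anything that can read those client-side files can map out a company’s SAP landscape. Below is a minimal defensive sketch in Python of the kind of check an administrator (or, in mirror image, the malware) might run. The saplogon.ini locations and the loose IP matching are illustrative assumptions, not details from Doctor Web’s or ERPScan’s analysis.

    import os
    import re

    # Places where SAP GUI commonly keeps its connection file on Windows
    # clients. (Assumption: exact paths vary by SAP GUI version/options.)
    CANDIDATE_PATHS = [
        os.path.expandvars(r"%APPDATA%\SAP\Common\saplogon.ini"),
        r"C:\Windows\saplogon.ini",
    ]

    # Loose pattern for IPv4 addresses that may identify SAP servers.
    IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def audit_sap_configs():
        """Report readable SAP GUI config files and the server addresses they expose."""
        for path in CANDIDATE_PATHS:
            if not os.path.isfile(path):
                continue
            print("Readable SAP GUI configuration:", path)
            with open(path, errors="ignore") as fh:
                for line in fh:
                    for address in IP_PATTERN.findall(line):
                        print("  exposed server address:", address)

    if __name__ == "__main__":
        audit_sap_configs()

If an ordinary user account can run this successfully, so can a banking Trojan running under that account, which is exactly the reconnaissance Polyakov describes.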

There’s a lot that attackers can do with access to SAP servers. Depending on what permissions the stolen credentials have, they can steal customer information and trade secrets or they can steal money from the company by setting up and approving rogue payments or changing the bank account of existing customers to redirect future payments to their account, he added.

There are efforts in some enterprise environments to limit permissions for SAP users based on their duties, but those are big and complex projects. In practice most companies allow their SAP users to do almost everything or more than what they’re supposed to, Polyakov said.

Even if some stolen user credentials don’t give attackers the access they want, many companies never change default administrative credentials, or forget to change them on instances of their development systems that hold snapshots of the company data, the researcher said.

With access to SAP client software, attackers could steal sensitive data like financial information, corporate secrets, customer lists or human resources information and sell it to competitors. They could also launch denial-of-service attacks against a company’s SAP servers to disrupt its business operations and cause financial damage, Polyakov said.

SAP customers are usually very large enterprises. There are almost 250,000 companies using SAP products in the world, including over 80 percent of those on the Forbes 500 list, according to Polyakov.

If timed correctly, some attacks could even influence the company’s stock and would allow the attackers to profit on the stock market, according to Polyakov.

Dr. Web detects the new malware variant as part of the Trojan.Ibank family, but this is likely a generic alias, he said. “My colleagues said that this is a new modification of a known banking Trojan, but it’s not one of the very popular ones like ZeuS or SpyEye.”

However, malware is not the only threat to SAP customers. ERPScan discovered a critical unauthenticated remote code execution vulnerability in SAProuter, an application that acts as a proxy between internal SAP systems and the Internet.

A patch for this vulnerability was released six months ago, but ERPScan found that out of 5,000 SAProuters accessible from the Internet, only 15 percent currently have the patch, Polyakov said. If you get access to a company’s SAProuter, you’re inside the network and can do the same things you can do when you have access to a SAP workstation, he said.

Retrieved from ComputerWorld

What’s holding back the cloud industry?

While cloud enthusiasts roaming the halls of the McCormick Place convention hall in Chicago last week at Cloud Connect may be high on the market, the reality is that many enterprise IT shops are still reluctant to fully embrace public cloud computing.

Network World asked some of the best and brightest minds in the industry who were at the event about what’s holding the cloud industry back. Here’s what they said:

The organization: Eric Hanselman, Chief Analyst, 451 Research Group

Cloud sounds like a great idea, but how will it really work when it’s adopted? Hanselman says one of the biggest barriers is an organizational one. Typically IT organizations are split into groups focusing on compute, network and storage. When applications run from the cloud, those are all managed from one provider. That means the jobs of each of those groups within IT may change. How can organizations evolve? “You’ve got to converge,” Hanselman says. That could be easier said than done with people’s jobs at stake.


Cloud industry thought leaders (from left): Eric Hanselman, Krishnan Subramanian, Randy Bias, Bernard Golden, Andy Knosp

Security and application integration: Krishnan Subramanian, director of OpenShift Strategy at Red Hat; founder of Rishidot Research

Security is still the biggest concern that enterprises point to with the cloud. Is that justified? Cloud providers spend a lot of money and resources to keep their services secure, but Subramanian says it’s almost an instinctual reaction for IT pros to be concerned about cloud security. “Part of that is lack of education,” he says. Vendors could be more forthcoming with the architecture of their cloud platforms and the security around them. But doing so isn’t an easy decision for IaaS providers: Vendors don’t want to give away the trade secrets of how their cloud is run, yet they need to provide enough detail to assuage enterprise concerns.

Once IT shops get beyond the perceived security risks, integrating the cloud with legacy systems is their biggest technical challenge, Subramanian says. It’s still just not worth it for organizations to completely rewrite their applications to run them in the cloud. Companies have on-premises options for managing their IT resources, and there just isn’t a compelling enough reason yet to migrate them to the cloud. Perhaps new applications and initiatives will be born in the cloud, but that presents challenges around the connections between the premises and the cloud, and related latency issues.

New apps for a new computing model: Randy Bias, CTO of OpenStack company Cloudscaling

If you’re using cloud computing to deliver legacy enterprise applications, you’re doing it wrong, Bias says. Cloud computing is fundamentally a paradigm shift, similar to the progression from mainframes to client-server computing. Organizations shouldn’t run their traditional client-server apps in this cloud world. “Cloud is about net new apps that deliver new business value,” he says. “That’s what Amazon has driven, and that’s the power of the cloud.” Organizations need to be forward-thinking enough and willing to embrace these new applications, which are fueled by big data and distributed systems, to produce analytics-based decision making and agile computing environments.

It’s more than just technology: Bernard Golden, VP of Enterprise Solutions for Enstratius, a Dell company

The biggest inhibitor to more prevalent cloud computing adoption is that organizations are still holding on to their legacy processes, says Golden, who recently authored the Amazon Web Services for Dummies book. It’s not just about being willing to use new big data apps and spin up virtual machines quickly. It’s the new skill sets for employees, technical challenges around integrating an outsourced environment with the current platform, and building a relationship with a new vendor. “For people to go beyond just a small tweak, there needs to be a significant transformation in many areas of the organization,” he says. “Each time there is a platform shift, established mechanisms are forced to evolve.”

Regulatory compliance: Andy Knosp, VP of Product for open source private cloud platform Eucalyptus

One of the biggest hurdles for broader adoption of public cloud computing continues to be the regulatory and compliance issues that customers need to overcome, Knosp says. Even if providers are accredited to handle sensitive financial, health or other types of information, there is “still enough doubt” among executives in many of these industries about using public cloud resources. Many organizations, therefore, have started by deploying low-risk, less mission-critical workloads to the public cloud. Knosp says the comfort level for using cloud resources for more mission-critical workloads will grow. It will just take time.

Retrieved from NetworkWorld

Is that hotspot safe? Wi-Fi Alliance wants to help with its Passpoint program

Security-savvy mobile-device users are increasingly casting a skeptical eye on public Wi-Fi, and now the vendor consortium behind the wireless standard wants to make logging in via that coffee shop network a bit safer.

The Wi-Fi Alliance’s Passpoint program, based on the Hotspot 2.0 specification, will make public hotspots both safer and easier to use, according to CEO Edgar Figueroa.

“Today, for the most part, when we go on a public hotspot we are sending data without protection. With Passpoint the connections are secure and the communication is encrypted,” Figueroa said.

Also, users should no longer have to search for and choose a network, request the connection to the access point each time and then in many cases re-enter their password. All that can be handled by Passpoint-compatible devices, according to Figueroa.

“The beauty of Passpoint is that the whole industry has agreed to do it this way. More than 70 [devices] have been certified,” he said.

Mobile operators have come to see public Wi-Fi as an important part of their networks as they face growing data usage volumes, and between 60 and 70 big operators are members of the Alliance, Figueroa said.

But he had little to say about the uptake of Passpoint among operators, and pointed to only two examples: the first Passpoint-compatible hotspot from Boingo Wireless at Chicago’s O’Hare International Airport, and 30 operators taking part in a trial conducted by the Wireless Broadband Alliance.

Work on an updated Passpoint program has also started. It includes new features for standardized user provisioning. Today everybody signs up users differently, and that makes it harder to implement roaming, according to Figueroa.

The plan is to have the new specification ready next year.

The most obvious problem for current Wi-Fi networks is performance in crowded environments, and Figueroa said the Alliance has addressed that issue with the 802.11ac certification program, which “offers a robust solution that takes you onto 5GHz. So far that band hasn’t been widely used, but ac makes it more compelling,” he said.

The Alliance has also added the new WiGig program, which approves products that operate in the 60 GHz frequency band and offer gigabit speeds over short distances. The technology will be used to connect PCs and portable devices to monitors, projectors and HDTVs wirelessly. Other applications include “instant backups,” according to Figueroa.

“This will be our attempt at making this market go … We have a critical mass of vendors who are investing,” Figueroa said.

Behind the scenes, other areas are also being explored.

“There is a lot of stuff going on around the connected home. There are also a lot of things we are working on for smart grids and there are interest groups looking at health care,” Figueroa said.

Retrieved from ComputerWorld

Adobe confirms Flash Player is sandboxed in Safari for OS X Mavericks

After years of fighting malware and exploits facilitated through Adobe’s Flash Player, the company is taking advantage of Apple’s new App Sandbox feature to restrict malicious code from running outside of Safari in OS X Mavericks.

As outlined in a post to the Adobe Secure Software Engineering Team (ASSET) blog, the App Sandbox feature in Mavericks lets Adobe limit the plugin’s ability to read and write files, as well as what assets Flash Player can access.

Adobe platform security specialist Peleus Uhley explained that in Mavericks, Flash Player calls on a plugin file — specifically com.macromedia.Flash Player.plugin.sb — that defines the security permissions enforced by the OS X App Sandbox. The player’s capabilities are then restricted to only those operations it requires to operate normally.


In addition, Flash Player can no longer access local connections to device resources and inter-process communications (IPC) channels. Network privileges are also limited to within OS X App Sandbox parameters, preventing Flash-based malware from communicating with outside servers.

Uhley noted that the company has effectively deployed some method of sandboxing with Google’s Chrome, Microsoft’s Internet Explorer and Mozilla’s Firefox browsers. Safari now joins that list, as long as users are running it on OS X Mavericks.

“Safari users on OS X Mavericks can view Flash Player content while benefiting from these added security protections,” Uhley said. “We’d like to thank the Apple security team for working with us to deliver this solution.”

Retrieved from Apple Insider


What to do when data’s too big to just transfer to the cloud


As government agencies consider moving their enterprise data to the cloud, their first question might be: How does it get to the cloud? In most cases, data can be transmitted via FTP or HTTP, but for some applications — like life sciences, sensor and video surveillance applications — the data is just too big to fit through the pipe. What’s the best option?

Pack it up and ship it out.

Some major cloud vendors now offer a service whereby clients can ship physical media to the data center, where it can be uploaded, eliminating overly long data transfer times. Bulk imports are especially useful when data is first ported to the cloud or for backup and offsite storage. The fees for this service vary, and some cloud providers will also download data from the cloud and ship it via physical media.
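
The arithmetic behind “pack it up and ship it out” is easy to check. Here is a short, illustrative Python sketch comparing a sustained network transfer against the roughly fixed delay of an overnight shipment; the dataset size and link speeds are made-up example numbers, not figures from any provider.

    def transfer_days(terabytes, megabits_per_sec):
        """Days needed to move the data over a sustained network link."""
        bits = terabytes * 1e12 * 8                  # decimal TB -> bits
        seconds = bits / (megabits_per_sec * 1e6)
        return seconds / 86400

    # Illustrative comparison for a 50 TB dataset.
    for label, mbps in [("T-3 leased line (~45 Mbps)", 45),
                        ("Fast uplink (500 Mbps)", 500)]:
        print("%s: %.1f days" % (label, transfer_days(50, mbps)))

    # An overnight shipment is roughly a one-day delay regardless of
    # capacity -- which is why bulk import wins as data volumes grow.
    print("Overnight shipment: ~1 day")

At 50 TB, the T-3 line works out to more than three months of continuous transfer; the shipped drive arrives the next morning.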

AWS Import/Export accelerates the transfer of large amounts of data between the AWS cloud and portable storage devices that clients ship to Amazon. It uses the company’s multimodal content delivery network, which can move terabytes of data faster than a T-3 leased line, to transfer data from physical media to Amazon S3, Amazon EBS or Amazon Glacier. Amazon charges $80 for each device handled; other costs depend on which Amazon cloud is used as well as the time it takes Amazon to upload the data or decrypt the device. For more information, see the AWS Import/Export documentation.

Google Cloud Storage Offline Disk Import is an experimental feature that is currently available in the United States only. The service gives clients the option to load data into Google Cloud Storage by sending Google physical hard drives that it loads into an empty Cloud Storage bucket. Google requires that the data be encrypted. Because the data is loaded directly into Google’s network, this approach might be faster or less expensive than transferring data over the Internet. According to Google, import pricing is based on a flat fee of $80 per HDD irrespective of the drive capacity or data size. After that, standard Google Cloud Storage pricing fees apply for requests, bandwidth and storage related to the import and subsequent usage of the data, according to the company.

HP Bulk Import Service is still in private beta, but it allows users to load their data into HP Cloud Block Storage or HP Cloud Object Storage. The new service, which is expected to be released in fall 2013, will let users send hard drives directly to HP’s data centers, where data can be rapidly uploaded and transferred.

Rackspace’s Bulk Import to Cloud Files is a service that lets clients send Rackspace physical media to be uploaded directly at the data centers, where “migration specialists” connect the device to a workstation that has a direct link to Rackspace’s Cloud Files infrastructure. Rackspace will not decrypt data, though the company plans to offer that option in the future. Rackspace charges $90 per drive for bulk imports.

For cases where the data is consistently too large to transmit and access demands won’t allow the latency inherent in shipping data, Aspera offers its Fast Adaptive Secure Protocol (FASP) data transfer technology, which eliminates the shortcomings of TCP-based file transfer technologies such as FTP and HTTP, the company’s website explains. On a gigabit WAN, FASP can achieve 700-800 megabits/sec transfers with high-end PCs and 400-500 megabits/sec with commodity PCs, the company said.

Aspera said its software is in use and accredited for SIPRnet, JWICS and FIPS 140-2, and it has been vetted by the intelligence community for large data transfers over military networks. It is also used in the 1000 Genomes Project that exchanges data between the National Center for Biotechnology Information and the European Bioinformatics Institute.

Retrieved from GCN


Upgrade your cable modem for faster speeds

I’ll be honest: Cable modems aren’t glamorous technology. In fact, you probably haven’t thought about yours since it was installed.

It’s easy to forget how important, and impressive, your cable modem actually is. It handles your Internet traffic 24/7 for years, usually without a hiccup. Some modems even pull double duty as your wireless router.

Beyond that, there is another good reason to think about your cable modem. If it’s more than a few years old, you might not get the Internet speeds you pay for.

Don’t guess at your Internet speed. Find out how fast it is in seconds with this free service at speedtest.net.
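
If you would rather script the check than visit the site, the third-party speedtest-cli package wraps the same speedtest.net service. A minimal Python sketch, assuming the package has been installed with pip install speedtest-cli:

    import speedtest  # third-party package: pip install speedtest-cli

    st = speedtest.Speedtest()
    st.get_best_server()                 # pick the nearest test server
    down_mbps = st.download() / 1e6      # results come back in bits/sec
    up_mbps = st.upload() / 1e6
    print("Download: %.1f Mbps, Upload: %.1f Mbps" % (down_mbps, up_mbps))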

Cable companies are busy upgrading their networks for faster speeds. Naturally, accessing these faster speeds requires a newer cable modem.

The newest standard for cable modems is DOCSIS 3 — although DOCSIS 3.1 is coming soon. DOCSIS 3 can have data download rates of 160 megabits per second, or better — four times faster than DOCSIS 2. Sounds great!
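
The speed jump comes from channel bonding: a DOCSIS 3 modem combines several downstream channels where DOCSIS 2 used one. A back-of-the-envelope Python sketch, assuming roughly 40 megabits per second of usable throughput per bonded downstream channel, reproduces the figure above:

    # Assumption: ~40 Mbps usable per 6 MHz, 256-QAM downstream channel.
    MBPS_PER_CHANNEL = 40

    for standard, channels in [("DOCSIS 2 (single channel)", 1),
                               ("DOCSIS 3 (4 bonded channels)", 4),
                               ("DOCSIS 3 (8 bonded channels)", 8)]:
        print("%s: ~%d Mbps downstream" % (standard, channels * MBPS_PER_CHANNEL))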

But don’t run out and grab a new modem just yet. Check with your cable provider to see if your connection uses DOCSIS 3. If your neighborhood network isn’t up-to-date, a new cable modem can wait.

There’s also no rush to upgrade if you have a basic low-speed Internet plan. You won’t get anywhere near your modem’s capacity.

The exception is if you have a really old DOCSIS 1.1 modem. That really needs to be replaced.

In addition to boosting your transfer rates, a newer modem could clear up any connection issues you’ve been experiencing. Many cable companies are phasing out DOCSIS 1.1 modems anyway.

To find out what kind of modem you have, check the model number on the bottom or back of the unit, then look it up on the manufacturer’s website.

Even if you’ve determined that you should upgrade, you aren’t done yet. The big question is should you buy a new modem or lease one from your cable company?

Both strategies have their pros and cons.

The major cable providers tack on a monthly fee of $3 or more for renting a modem. If anything goes wrong with it, the company will usually fix it or replace it for no charge.

Most cable companies keep the modem up-to-date with the latest firmware automatically. This can enhance the modem’s performance.

Call your provider and see if it will upgrade you to a DOCSIS 3 modem for free. It might do so to keep you a happy customer. Some people have even gotten the monthly rental fee waived.

So, what about buying your own modem? There’s a case for that as well.

Let’s say you lease a modem for $4 a month. After 4 years, you’ve shelled out $192.

You can buy an excellent DOCSIS 3 modem for $85-$100. It’s more expensive up front. But the longer you keep the modem, the more you save. I know people who have had the same modem for 6 years or more.
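
The break-even math is worth writing down. Using the figures above ($4 a month to rent, roughly $100 to buy), a quick Python sketch:

    RENTAL_PER_MONTH = 4.0    # monthly lease fee from the example above
    PURCHASE_PRICE = 100.0    # high end of the $85-$100 range

    break_even = PURCHASE_PRICE / RENTAL_PER_MONTH
    print("Buying pays for itself after %.0f months (%.1f years)"
          % (break_even, break_even / 12))
    print("Renting for 4 years costs $%.0f" % (RENTAL_PER_MONTH * 48))

Anything past the 25-month mark is savings, which is why the six-year modem owners come out well ahead.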

The downside is that you’re on the hook if something goes wrong with it. Plus, you’re responsible for staying current on firmware updates. However, those don’t happen too often.

If you decide to buy, check out your service provider’s support pages for recommended DOCSIS 3 modems. If you stick with top brands such as Motorola, Zoom, Linksys and D-Link, you’ll get reliability and a good warranty. You should also be able to use it with another provider if you move.

When shopping, you’ll notice some modems with built-in wireless routers. These are often called gateways. While a gateway is convenient space-wise, it has disadvantages.

Standalone wireless routers are more powerful and have more features. A gateway is tethered to the wall by a short coaxial cable. A standalone router is easier to place where you want, which means a better signal.

Don’t forget what happens if one part of the gateway goes kaput. You’ll have to buy a new gateway or a standalone unit anyway.

Additionally, with 802.11ac Wi-Fi gaining popularity, you’ll probably upgrade your router in a few years. That’s well before you want to upgrade your cable modem again.

Retrieved from USA Today