How to Defend Your Boardroom Against "Videoconferencing Hackers" and Other Mythical Creatures

January 24, 2012 | David S. Maldow, Esq.
The usual channels are buzzing about a certain New York Times article regarding videoconferencing security. The story is based upon an interview with representatives from Rapid7, a company that provides products and services for identifying security issues and managing risk. I've read good things about Rapid7 and always support efforts to improve security, but in fairness it should be noted that projecting an atmosphere of security risk around videoconferencing is clearly in their interest.

After reviewing the NYT article with the other analysts at the Human Productivity Lab, we have decided that a response is necessary. Although I take issue with many of the conclusions of the article, I think it may be doing the industry a backhanded service, as it presents an opportunity to openly discuss the nature of security in videoconferencing. The NYT article is titled "Cameras May Open Up the Board Room to Hackers," which, while somewhat misleading, is not as bad as the subtitle, which speaks for itself.

[Image: the NYT article's title and subtitle]

This accusation has a particular sting as the videoconferencing industry has long catered to the boardroom set, which has obvious reasons to be concerned about security. Let's see how it all holds up.

Videoconferencing Hackers
The article's title claims that the boardroom will be opened up to "Hackers." However, from the rest of the article it was clear that there was no real hacking involved. Videoconferencing signals use AES encryption (http://en.wikipedia.org/wiki/Advanced_Encryption_Standard). This isn't a new or rare development. AES has been standard on almost all major endpoints (at all price ranges) for a long time. The use of AES means that even if videoconferencing data signals are intercepted as they traverse the internet, the encryption would have to be hacked before anyone could watch the video or listen to the audio. No one is suggesting that this type of hacking is occurring. Rather than hacking into the boardrooms, Rapid7 was simply calling them. These systems apparently answered some of their calls, as they were designed to do.
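
To make the AES point concrete, here is a toy sketch (in Python, using the third-party cryptography package) of what an eavesdropper is up against. It only illustrates the principle; it is not the key negotiation or media encryption (H.235/SRTP) that real endpoints actually perform, and the key, nonce, and payload below are made up for the example.

```python
# Toy illustration: an intercepted packet encrypted with AES is opaque
# without the session key. Real endpoints negotiate keys per call and
# encrypt media per H.235/SRTP; this only demonstrates the basic property.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # in practice, negotiated per call
nonce = b"\x00" * 12                        # per-packet nonce (illustrative only)
frame = b"compressed video frame bytes"     # stand-in for a media payload

ciphertext = AESGCM(key).encrypt(nonce, frame, None)
print(ciphertext.hex())                     # what an eavesdropper would capture

# Recovering the video requires the same 128-bit key; brute forcing it means
# searching roughly 3.4e38 possibilities, which is not a practical attack.
recovered = AESGCM(key).decrypt(nonce, ciphertext, None)
assert recovered == frame
```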

Rapid7 did create a program to scan the internet for videoconferencing systems. From this they were able to compile a list of IP addresses, which are like phone numbers for VC systems. However, they had no idea where these systems were located or who they belonged to. It was a phone book with no names, only numbers. Not a great tool for an effective targeted hacking attack. With this list in hand, they started random dialing and peeked around some empty rooms. It should be noted that this list only included systems that were not deployed behind firewalls. Any environment with real security requirements will have its VC systems behind firewalls.
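
For context, this kind of "scan" presumably boils down to checking which addresses answer on the standard H.323 call-signaling port (TCP 1720). The sketch below, with a placeholder address, is the benign version of that check: something an administrator could run from outside their own network against their own endpoint to confirm the firewall is doing its job.

```python
# Minimal reachability check: does this address accept connections on the
# standard H.323 call-signaling port (TCP 1720)? Run it from outside your
# network, against your own endpoint's public address only.
import socket

ENDPOINT_IP = "203.0.113.10"   # placeholder: your endpoint's public address
H323_PORT = 1720

def port_is_reachable(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if port_is_reachable(ENDPOINT_IP, H323_PORT):
    print("Endpoint answers on TCP 1720 -- it is visible to this kind of sweep.")
else:
    print("No answer on TCP 1720 -- the endpoint is not reachable from here.")
```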

Real hackers are scary. If someone does find a way to isolate VC traffic over the internet and hack encrypted VC signals from specific locations, I will be the first to raise the alarm. But I simply don't see a massive threat in the fact that it is possible to get lucky and randomly dial into an anonymous empty meeting room.

Flaws In Videoconferencing Systems
In the next section I will explain why it is not easy to stealthily dial into a meeting room while it is in use. But even if it is possible, is it fair to characterize that as a flaw with videoconferencing systems? I used to work at a law firm in a large office building in New Orleans. There were several businesses on every floor with fancy meeting rooms. Many of them (like the one in the NYT pic, below) had full glass walls making them completely visible from the waiting room. If someone had left a classified doc lying open on one of the tables, I could have read it. Does this demonstrate a flaw in the design of these rooms? Or are some meeting rooms simply in low security environments where spying isn't a real concern? I think it is acceptable for low security rooms to have glass walls and video systems set to auto answer.

[Image from the NYT article]
The NYT Provides Proof of the Ability to See Into Low Security Empty Meeting Rooms

Videoconferencing systems are designed to be used behind a firewall when security is a concern, just like any other IP device. In a meeting room that requires security, an unprotected videoconferencing system is about as likely as an unprotected desktop computer. IT admins for sensitive environments are generally knowledgeable about firewalls and internet security. They are not likely to allow any IP devices to exist outside the firewall on their watch. When properly deployed in this manner, video systems are safe from Rapid7's number finder.

Rapid7 suggests a significant number of systems in otherwise secure environments are being deployed outside the firewall. This was disputed in the article by Ira M. Weinstein, senior analyst at Wainhouse Research, who stated, "The companies that really have to worry about breaches -- the Department of Defense, banks -- put their systems behind the firewall." Mr. Weinstein's words carry some weight, considering his years covering this industry.

The article does include one exception to its calls into open rooms. After browsing into one unsecured system, they allegedly found a directory entry for the Goldman Sachs boardroom. However, they didn't make the call because they didn't "want to cross a line." I would like to think Goldman Sachs has basic precautions in place and the call would have failed, or at best they would have been able to dial into a system with a locked camera pointed at the wall and a muted microphone. Unlike a small company with one system in its one meeting room, Goldman Sachs is likely using a managed service provider to ensure that all of its systems are properly, and securely, provisioned.

The NYT seems to be simply unaware of the fact that the videoconferencing industry has had skin in the security game since its inception.

[Quote from the NYT article]

This comment is problematic for several reasons. Referring to IP videoconferencing as a souped-up version of Skype is a little off, but understandable. NYT readers do not want a primer on the history of IP videoconferencing, so the author has to simplify. But to say that today's business class videoconferencing solutions were not designed with security in mind is simply unfair and inaccurate. Military and other government agencies have historically been early adopters of videoconferencing technology, with all the expected security requirements. Furthermore, many vendors have invested significant dollars and development cycles to meet additional levels of military security certification. The requirements for those certifications are not trivial.

In addition, the systems are often installed by experts with security as a priority. In a blog on this subject, IMCCA Director David Danto describes such an installation.

[Quote from David Danto's blog]

Clearly, installation can be customized to accommodate different levels of security based upon customer needs. If users without security needs choose a less secure deployment, this is a valid choice, not a technology flaw.

The NYT article also notes that the US Chamber of Commerce has been hacked via its office printer and thermostat. Why is there no NYT article declaring that "Flaws in Printers and Thermostats Put Offices at Risk"? Because the technology isn't to blame; the implementation is the issue. In order to create a secure environment you must deploy your IP equipment behind firewalls and take basic precautions appropriate to the type of equipment at issue. This applies equally to videoconferencing, computers, printers, and apparently thermostats. If you fail to secure your environment, your process is flawed, not the technology itself.

Semantics Aside - Is There A Security Risk?
Putting aside the hyperbole of hackers and system flaws, the NYT article, and the subsequent blog entries in response, ask a fair question. How easy is it to use a boardroom videoconferencing system to spy on a meeting? The answer is that it may be possible, but it wouldn't be very easy.

[Quote from the NYT article]

Having spent countless hours testing videoconferencing systems and video network infrastructure, with up to 18 videoconferencing systems set to auto answer at one time, I can assure you that it is not a silent process. The systems generally ring loudly when called, even if they then auto answer. Some add a few extra beeps and boops to let the user know the call has connected. Most systems have indicator lights, and it is pretty noticeable when the cameras "wake up" and swing around. In addition, videoconferencing systems tend to be connected to rather large monitors. When called, the monitors will often "wake up" and display the system's logo if there is no incoming video. This is not likely to go unnoticed, particularly if it happened during a Goldman Sachs board meeting.

[Image: Hall & Oates album cover]
They See Your Every Move

Videoconferencing systems weren't made with stealthy activation as a goal. They are communications devices, and they were very specifically designed to do a good job of alerting users to incoming calls. This makes them particularly ill-suited as spy cams. The NYT article claimed that Rapid7 called into a venture capital pitch meeting. The article does not say whether the meeting participants immediately noticed the videoconferencing system waking up and stopped talking. In my experience, that is exactly what would happen, and what does happen.

If you did want to spy, your best bet would be to call into the room before the meeting starts and hope the monitor goes back to sleep before anyone enters the room. You would also have to hope that the participants were not planning on using the system, as they would immediately see that it was in a call as soon as they picked up the remote. The list of "ifs" necessary to make this work is getting rather lengthy...

  • IF - you can get the number in the first place
  • IF - the number you get happens to be anyone worth spying on
  • IF - it is an unsecured system and the call isn't blocked by a firewall
  • IF - the network doesn't redirect your call to a meet-me bridge
  • IF - the solution is set to auto answer
  • IF - the camera is pointed in the right place or controllable
  • IF - they don't have a lens cap on the camera
  • IF - the audio isn't muted (many systems answer with audio muted)
  • IF - you know when the meeting you want to spy on is going to happen
  • IF - no one notices you calling in
  • IF - the system isn't being used during the meeting
  • IF - no one notices the "in use" light on the system, camera, or mic
  • THEN - you might be able to spy on your random target

With that many "ifs" I think this may not be the top security concern for today's videoconferencing users. At this point VC spying is starting to look like Mission Impossible. Yes, if you leave a sensitive document open on a table in a room with a videoconferencing system and no firewall, someone theoretically might be able to see it. But, if you are leaving sensitive documents lying around meeting rooms, then videoconferencing is the least of your security concerns.

Securing Your Video Environment
I expect to see a lot of articles with VC security tips in the wake of this brouhaha. While professional security assessments are available for truly sensitive environments, the rest of us can get by with a few basic security measures. Whether you are an end-user or a managed service provider, here are a few simple tasks to beef up VC security.

  1. Firewalls - Like any IP device, videoconferencing systems should be behind a firewall. While that can make outside calling trickier, it is manageable. If you have an extremely small deployment (one system) and insist on staying outside the firewall, just be aware that your system is open to random calls if you leave auto answer on. You have a choice: you can consider that meeting room to be an open public room, or you can secure your video system. Neither answer is wrong.
  2. Meet-Me Rooms - A videoconferencing network can be configured to direct all incoming calls to a meet-me room on a video bridge. Rather than dialing into a physical boardroom, the "hackers" would dial into a videoconference, where their presence would be very apparent.
  3. Auto Answer - I have no problem with leaving auto answer enabled. However, most systems have an option to answer with audio muted. This is how I set up my systems. If I do get an unexpected call in the middle of a private conversation, the caller will not hear anything. If the NYT article made your co-workers nervous you can just turn off auto answer until everyone relaxes, but auto answer with audio muted is a perfectly acceptable secure setting.
  4. Camera Presets and Far End Camera Control - Cameras can be set to focus on a painting or even an empty wall when calls are initiated. This ensures that random callers will not be looking at your meeting area. Far end camera control should be disabled to keep random callers from peeking around at your empty meeting room.
  5. Physical Lens Covers - Many videoconferencing systems come with some sort of lens cap. Pop it on when the system isn't in use.
  6. Directory Protection - Do not publish your directories. If you do publish any numbers (for example, to take part in a B2B exchange) be aware of how they are being distributed and who can access them. If possible publish your meet-me bridge number, rather than the direct numbers to your endpoints.
  7. Passwords - VC systems can be password protected. This will prevent non-authorized users from browsing your directories.
  8. Vulnerability Assessment - If you are really security minded, you should undergo a professional vulnerability assessment every 90 days. A lightweight self-check you can run between assessments is sketched below.
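
As a modest example of the kind of self-check referenced in item 8 (and a quick way to verify item 1), the sketch below extends the earlier single-port test to a small endpoint inventory. The names, addresses, and port choices are placeholders for your own environment, and a reachability test like this is only a sanity check, not a substitute for a professional assessment.

```python
# Lightweight between-assessments check: which endpoints in our inventory
# still answer on common call-signaling ports from an outside vantage point?
# The inventory below is a placeholder; substitute your own systems.
import socket

ENDPOINTS = {
    "Boardroom codec": "203.0.113.10",
    "Training room codec": "203.0.113.11",
}

SIGNALING_PORTS = {
    1720: "H.323 call signaling",
    5060: "SIP (TCP)",
}

def reachable(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds from this machine."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, ip in ENDPOINTS.items():
    exposed = [desc for port, desc in SIGNALING_PORTS.items() if reachable(ip, port)]
    if exposed:
        print(f"{name} ({ip}): reachable on {', '.join(exposed)} -- review firewall rules.")
    else:
        print(f"{name} ({ip}): no signaling ports reachable from here.")
```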
 
Suggested Vendor Response to NYT
Despite the lack of a real problem, the NYT article will have to be addressed. Fair or not, vendors will have to respond. How do you respond to a non-problem? By addressing the sense of insecurity that NYT readers will now feel. Default software settings can be adjusted, but that won't help much, because uneasy users never look at software settings. I would suggest that all vendors implement a simple motorized lens cover on the next iteration of all their VC cameras. Whenever the system is not in a call, the cover should automatically move into place, and it should be made clear that when that cover is in position, the system is incapable of capturing any video or audio. This would give users a clear visual indicator that they are secure. When they enter a meeting room and see that the lens cover is in place (perhaps make it colorful or otherwise eye-catching), they can be sure that the system is completely asleep, incapable of spying, and that their meeting is secure from any random callers.
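
To illustrate the proposal, here is a hypothetical sketch of the control logic such a cover could follow: closed whenever the system is idle, open only while a call is connected. The LensCoverMotor class and its open()/close() methods are invented for the example; no existing vendor API is implied.

```python
# Hypothetical control logic for the motorized lens cover proposed above:
# the cover stays closed whenever the system is not in a call, so a glance
# at the camera tells users the room cannot be seen or heard remotely.
# LensCoverMotor and its methods are placeholders, not a real vendor API.

class LensCoverMotor:
    def open(self) -> None:
        print("lens cover: OPEN (in a call)")

    def close(self) -> None:
        print("lens cover: CLOSED (system idle, nothing can be captured)")

class EndpointCallState:
    def __init__(self, motor: LensCoverMotor):
        self.motor = motor
        self.motor.close()          # default state: covered and visibly secure

    def on_call_connected(self) -> None:
        self.motor.open()           # expose the lens only while a call is up

    def on_call_ended(self) -> None:
        self.motor.close()          # re-cover the moment the call drops

# Example: the cover opens for the call and closes again when it ends.
endpoint = EndpointCallState(LensCoverMotor())
endpoint.on_call_connected()
endpoint.on_call_ended()
```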

Vendors might also want to take this opportunity to standardize the meaning of the lighted indicators on microphones. Currently, some mics show a red light when muted and no light when live, others are lit when live and unlit when muted, and so on. While users of any particular system will soon learn to read their mic indicators, it would be a great move if the industry came together and coordinated on this minor product attribute in response to this discussion.

Conclusion
The public internet can be a rough neighborhood. As a result, security minded IT staff have long demanded that all IP devices live inside the firewall. A videoconferencing endpoint is just another IP device. If you choose to deploy it outside of the firewall, then it will be open to accepting calls. Even if a system is deployed outside of a firewall, it still isn't at serious risk of being hacked. It is at risk of accepting calls (which is what it was designed to do). A few common sense precautions can eliminate the risk of your system being in a call without you knowing about it.

When I am testing videoconferencing endpoints, I often put systems on the public net outside of the firewall for testing purposes. I make a conscious choice to set them to auto answer (it makes placing multiple test calls easier). The assumption in my lab is that I could be on camera at any time, and I act accordingly (no leaving confidential docs uncovered on tables, etc.). If I want visual privacy (clients in the office), I adjust the systems accordingly. Security is a two-part puzzle: part one is the room itself and its required use at the time, and part two is setting the VC systems to meet the needs of the room.

As David Danto suggested in his blog on this subject, people can get into your house if you leave the door wide open, but that doesn't mean your house is flawed. Similarly, people can misuse your VC systems if you don't take a few easy precautions, but that doesn't mean VC is a flawed or inherently insecure technology. Don't forget, these systems are in use in the Pentagon and have been for years. If the technology is secure enough for the Pentagon, it is secure enough for your boardroom.

About the Author
David Maldow is a visual collaboration technologist and analyst with the Human Productivity Lab and an associate editor at Telepresence Options. David has extensive expertise in testing, evaluating, and explaining telepresence and other visual collaboration technologies. David is focused on providing third-party independent testing of telepresence and visual collaboration endpoints and infrastructure and helps end users better secure their telepresence, videoconferencing, and visual collaboration environments.
