Securing the Internet of Things

 

Last Friday’s attack was apparently caused by the Mirai botnet, which targeted unprotected IoT devices, including Internet-ready cameras. In its wake, the inevitable has happened. There have been calls for more government regulation:

A U.S. Senator has joined security officials calling for stiffer cybersecurity for Internet of Things (IoT) devices following a major attack last Friday.

In a letter to three federal agencies, Sen. Mark Warner (D-Va.) on Tuesday called for “improved tools to better protect American consumers, manufacturers, retailers, internet sites and service providers.”

People (including Ricochet members) have been warning about the risks of the IoT for ages, but this hasn’t stopped manufacturers from flooding the market with cheap, unsecured devices — nor has it stopped consumers from purchasing them. The consensus of most of the experts I’ve read is that this is indeed a classic tragedy of the commons problem, as Senator Warner suggests, and that the only solution is for the government to step in to solve the problem.

It’s certainly true that no industry could have been warned more often that it had a problem. I read the warnings, and I sure wasn’t keen to buy any of those devices. Frankly, everything I read about the IoT creeps me out and reminds me of this:

But I seem to be an outlier in my instinctive aversion. And it seems to be true that neither manufacturers nor consumers paid those warnings much mind, either out of greed, laziness, or incomprehension. It’s also true that the cost of their error was borne by everyone, not just the specific manufacturers and consumers.

Bruce Schneier, who’s always interesting to read, thinks there’s no conceivable market solution to the problem:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

So is this genuinely a situation where government must step in? And if so, is it reasonable to expect the government to be any good at regulating this industry?

Also, a question for the lawyers: Why do we need the government to “impose liabilities” on the manufacturers? That’s to say, what’s preventing Brian Krebs from suing them right now? What prevents the people who were inconvenienced by last Friday’s attack from joining a class action suit against the companies in question?

Published in General, Science & Technology

There are 172 comments.

  1. Terry Mott Member
    Terry Mott
    @TerryMott

    Damocles:

    Eric Hines:

    Damocles:

    Eric Hines: The latter solution–holding end-users accountable–is absolutely correct. Change the incentive of the actors by holding them accountable for the outcomes of their behaviors; don’t punish someone else over an end user’s misbehavior.

    What are you going to do? Start suing Granny coz she had a bad password rotation scheme or didn’t keep her firmware patched?

    You want to let Granny off the hook coz she didn’t know how to drive her car well, but she drove it anyway and caused an accident?

    Eric Hines

    That’s funny, but I’m trying to get a straight answer out of you. Are you really thinking of lawsuits if someone doesn’t keep their equipment patched?

    Seems to me that the idea @cirby and @Phil Turmel had earlier in comments #4 and #5, about allowing people to strike back and brick insecure equipment, would accomplish all this more simply. If the device was deemed to be defective, the owner can take it up with the manufacturer, individually. Manufacturers would have a huge incentive to build devices that not only sport the latest whiz-bang feature that the marketing guys dreamed up, but also are secure against being hacked / bricked. And the compromised devices could be taken out so as not to cause ongoing trouble.

    A free market solution.

    • #121
  2. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    Terry Mott:

     

    Seems to me that the idea @cirby and @Phil Turmel had earlier in comments #4 and #5, about allowing people to strike back and brick insecure equipment, would accomplish all this more simply. If the device was deemed to be defective, the owner can take it up with the manufacturer, individually. Manufacturers would have a huge incentive to build devices that not only sport the latest whiz-bang feature that the marketing guys dreamed up, but also are secure against being hacked / bricked. And the compromised devices could be taken out so as not to cause ongoing trouble.

    A free market solution.

    Why didn’t we think of this sooner? If we just allow vigilantism, we don’t need police either. Finally, something conservatives and BLM activists can agree on.

    • #122
  3. Claire Berlinski, Ed. Member
    Claire Berlinski, Ed.
    @Claire

    One of the points Schneier makes often, and intuitively it seems to me a good one (this isn’t my field, so “intuitively” is as good as it gets with me, since I can’t speak from experience) is that the solution can’t involve trying to fix the user, as he puts it:

    The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem? …

    We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without—as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it—”stress of mind, or knowledge of a long series of rules.”

    Ultimately, if we’re relying on Granny to remember to rotate her password, we’re in trouble — we do know from long experience that Granny “isn’t good with computers,” and her vision and memory aren’t good enough to type in complex passwords, so Granny’s going to screw this up more often than not. The system has to be designed to be Granny-proof, because Granny’s the weak link — and the norm.

     

    • #123
  4. The Reticulator Member
    The Reticulator
    @TheReticulator

    By the way, securing our property is a different thing than securing our property rights. If it’s the government’s job to secure our property, I want it to put a new roof on our house.

    • #124
  5. Phil Turmel Inactive
    Phil Turmel
    @PhilTurmel

    Eric Hines:

    Matt Bartle: I would tell you a joke about UDP, but you probably wouldn’t get it.

    And you wouldn’t care.

    Eric Hines

    I giggled over Matt’s comment last night and got the usual query from my girls “whatcha’ laughing at?”  I read it to them.  They didn’t get it.  I’m sure Matt didn’t care. (-:

    My daughter, who fears my educational lectures, ventured to ask “how long would it take to explain the joke?”

    • #125
  6. cirby Inactive
    cirby
    @cirby

    Chuck Enfield:

    Terry Mott:

    Seems to me that the idea @cirby and @Phil Turmel had earlier in comments #4 and #5, about allowing people to strike back and brick insecure equipment, would accomplish all this more simply. If the device was deemed to be defective, the owner can take it up with the manufacturer, individually. Manufacturers would have a huge incentive to build devices that not only sport the latest whiz-bang feature that the marketing guys dreamed up, but also are secure against being hacked / bricked. And the compromised devices could be taken out so as not to cause ongoing trouble.

    A free market solution.

    Why didn’t we think of this sooner? If we just allow vigilantism, we don’t need police either. Finally, something conservatives and BLM activists can agree on.

    The “police” have already admitted that they have absolutely zero way of dealing with this sort of thing in an effective or timely manner. It’s not vigilantism to stop a device that’s causing damage – it’s self defense.

    Note that the “vigilante” solutions, in real life, end up being slightly more aggressive than hitting an off switch – almost all bricked devices could be recovered with a firmware reset. As it stands now, if you put a brand new device of this sort on the net, it’s got somewhere between five and eight minutes before being zombified.
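    For context on how quickly these devices fall: Mirai propagated by trying a short, hard-coded list of factory-default credentials against exposed devices. A toy simulation of why that list is so effective (the device table here is invented; the credential pairs are among those reported in public analyses of the leaked Mirai source):

```python
# Toy illustration of the Mirai attack vector: a short list of
# factory-default credentials is tried against each device.
# DEVICES is a made-up inventory; the credential pairs are among
# those reported in public write-ups of the Mirai source code.
DEFAULT_CREDS = [("root", "xc3511"), ("admin", "admin"),
                 ("root", "12345"), ("admin", "password")]

DEVICES = {
    "cam-lobby":   ("root", "xc3511"),     # shipped default, never changed
    "dvr-garage":  ("admin", "admin"),     # shipped default, never changed
    "router-home": ("admin", "7kQ!p9zW"),  # owner set a unique password
}

def compromised(devices, cred_list):
    """Return the names of devices whose login still matches a known default."""
    return [name for name, cred in devices.items() if cred in cred_list]

print(compromised(DEVICES, DEFAULT_CREDS))   # ['cam-lobby', 'dvr-garage']
```

    The point of the sketch: the attacker never "cracks" anything. Any device whose owner changed the default password simply isn't on the list.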

     

    • #126
  7. Spin Inactive
    Spin
    @Spin

    Claire Berlinski, Ed.: One of the points Schneier makes often, and intuitively it seems to me a good one (this isn’t my field, so “intuitively” is as good as it gets with me, since I can’t speak from experience) is that the solution can’t involve trying to fix the user, as he puts it:

    The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem? …

    We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without—as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it—”stress of mind, or knowledge of a long series of rules.”

    Ultimately, if we’re relying on Granny to remember to rotate her password, we’re in trouble — we do know from long experience that Granny “isn’t good with computers,” and her vision and memory aren’t good enough to type in complex passwords, so Granny’s going to screw this up more often than not. The system has to be designed to be Granny-proof, because Granny’s the weak link — and the norm.

    This is actually incorrect. We can fix the user. Maybe not Granny. But in general, user training works. Most companies that conduct PhishMe-type campaigns find that hit rates go way down as a result. Now, this may be like safety training or sexual harassment training: you have to do it regularly. But it does work.

    • #127
  8. Spin Inactive
    Spin
    @Spin

    Claire Berlinski, Ed.: why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses?

    The answer here is simple:  the user refuses to have any restrictions applied to them in terms of what they can do with their computer.  It’s their computer, by golly, and they better never be hassled by something asking them for their explicit permission to do something.  This is a fight I fight literally every day.

    They say “I’m smart enough not to click on stuff.”  Then you learn that they’ve infected their computer with a virus because their friend from another company gave them a thumb drive with something cool on it.

    • #128
  9. Guruforhire Inactive
    Guruforhire
    @Guruforhire

    Terry Mott:

    Damocles:

    Eric Hines:

    Damocles:

    Eric Hines: The latter solution–holding end-users accountable–is absolutely correct. Change the incentive of the actors by holding them accountable for the outcomes of their behaviors; don’t punish someone else over an end user’s misbehavior.

    What are you going to do? Start suing Granny coz she had a bad password rotation scheme or didn’t keep her firmware patched?

    You want to let Granny off the hook coz she didn’t know how to drive her car well, but she drove it anyway and caused an accident?

    Eric Hines

    That’s funny, but I’m trying to get a straight answer out of you. Are you really thinking of lawsuits if someone doesn’t keep their equipment patched?

    Seems to me that the idea @cirby and @Phil Turmel had earlier in comments #4 and #5, about allowing people to strike back and brick insecure equipment, would accomplish all this more simply. If the device was deemed to be defective, the owner can take it up with the manufacturer, individually. Manufacturers would have a huge incentive to build devices that not only sport the latest whiz-bang feature that the marketing guys dreamed up, but also are secure against being hacked / bricked. And the compromised devices could be taken out so as not to cause ongoing trouble.

    A free market solution.

    It’s also a free license for malicious individuals to brick devices.

    • #129
  10. Terry Mott Member
    Terry Mott
    @TerryMott

    Guruforhire:

    Terry Mott:

    Seems to me that the idea @cirby and @Phil Turmel had earlier in comments #4 and #5, about allowing people to strike back and brick insecure equipment, would accomplish all this more simply. If the device was deemed to be defective, the owner can take it up with the manufacturer, individually. Manufacturers would have a huge incentive to build devices that not only sport the latest whiz-bang feature that the marketing guys dreamed up, but also are secure against being hacked / bricked. And the compromised devices could be taken out so as not to cause ongoing trouble.

    A free market solution.

    It’s also a free license for malicious individuals to brick devices.

    Malicious individuals are unlikely to worry about whether they have a “free license”.

    If the device is so insecure that it can be remotely bricked, it’s a safe bet that it’s already compromised and taking it offline would be a good thing, regardless of the motivations of the person doing the bricking.

    • #130
  11. genferei Member
    genferei
    @genferei

    If the problem is that ‘insecure’ devices on a network provide bad actors with the means to affect third parties, why blame the owners of the devices rather than the architects of the network?

    Sue ARPA! It’s all their fault!

    See – it’s really the government that was the problem all along.

    • #131
  12. Guruforhire Inactive
    Guruforhire
    @Guruforhire

    Terry Mott:

    Guruforhire:

    Terry Mott:

    Seems to me that the idea @cirby and @Phil Turmel had earlier in comments #4 and #5, about allowing people to strike back and brick insecure equipment, would accomplish all this more simply. If the device was deemed to be defective, the owner can take it up with the manufacturer, individually. Manufacturers would have a huge incentive to build devices that not only sport the latest whiz-bang feature that the marketing guys dreamed up, but also are secure against being hacked / bricked. And the compromised devices could be taken out so as not to cause ongoing trouble.

    A free market solution.

    It’s also a free license for malicious individuals to brick devices.

    Malicious individuals are unlikely to worry about whether they have a “free license”.

    If the device is so insecure that it can be remotely bricked, it’s a safe bet that it’s already compromised and taking it offline would be a good thing, regardless of the motivations of the person doing the bricking.

     

    You’re joking, right?

    • #132
  13. Joe P Member
    Joe P
    @JoeP

    You know, after reflecting on this for a day, I realized that I’ve heard all of this before.

    I am young, but I still remember that people said all the same things about PCs ten years ago. Specifically, Windows PCs, whose default settings at the time were not necessarily the most secure. I stopped hearing about it when either Windows Vista or 7 came out and Microsoft made changes to how their security worked. Coincidentally, I stopped paying attention around that time, because I stopped using Windows on my machines at home. So it’s possible that I might be missing something.

    Where did that moral panic go? I’m not sure if it’s still around or not, but I’m 99% certain that government regulation did not address it in any way whatsoever.

    • #133
  14. Eric Hines Inactive
    Eric Hines
    @EricHines

    Spin:

    Claire Berlinski, Ed.: One of the points Schneier makes often, and intuitively it seems to me a good one (this isn’t my field, so “intuitively” is as good as it gets with me, since I can’t speak from experience) is that the solution can’t involve trying to fix the user, as he puts it:

    The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem? …

    We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without—as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it—”stress of mind, or knowledge of a long series of rules.”

    Ultimately, if we’re relying on Granny to remember to rotate her password, we’re in trouble — we do know from long experience that Granny “isn’t good with computers,” and her vision and memory aren’t good enough to type in complex passwords, so Granny’s going to screw this up more often than not. The system has to be designed to be Granny-proof, because Granny’s the weak link — and the norm.

    This is actually incorrect. We can fix the user. Maybe not Granny. But in general, user training works. Most companies that conduct PhishMe-type campaigns find that hit rates go way down as a result. Now, this may be like safety training or sexual harassment training: you have to do it regularly. But it does work.

    Schneier also didn’t propose any actual, viable alternatives: just a generic “automate the thing from the producer’s end,” with all the erroneously blocked stuff because some programmer’s algorithm didn’t recognize it and with no manual override, or “use a Google facility to open a doc,” complete with Google’s attitude toward privacy.

    Automated updates?  Microsoft does that, but they also provide both a manual override and an option to see and reject the update before it goes in.  Which is good, because one of its updates crashed my laptop, and I had to use the manual override (simple to execute, too, just time consuming in the recovery) to back out the update and then use the option to block that particular update from trying again.

    Eric Hines

    • #134
  15. Terry Mott Member
    Terry Mott
    @TerryMott

    Guruforhire:

    Terry Mott:

    Malicious individuals are unlikely to worry about whether they have a “free license”.

    If the device is so insecure that it can be remotely bricked, it’s a safe bet that it’s already compromised and taking it offline would be a good thing, regardless of the motivations of the person doing the bricking.

    You’re joking, right?

    Nope.  It might sound harsh, but these sorts of attacks will continue as long as these insecure devices are hooked up to the Internet.

    Government can’t fix it, at least out in the open. Government, rather than private, hackers could secretly brick these devices, but that raises its own set of ethical problems.

    Here’s what I envision:  Someone buys and installs a new web-enabled DVR.  The following day, it’s compromised and becomes part of a botnet.  It’s then used in a DDoS attack, and the affected party counter-attacks and bricks the device.  The owner discovers his DVR stopped working and returns it, hopefully under warranty.  The manufacturer now has incentive to secure the blasted things before selling them.

    • #135
  16. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    anonymous: Ricochet does use Amazon Route 53 as its DNS provider…

    The registrar for their domain name (and hence WHOIS server) is Dyn, but that shouldn’t affect DNS resolution.

    Yes, it does.  The authoritative server plays a role, and if that server isn’t available things will eventually stop working.  In fact, one of the strategies in DoS attacks against DNS is to query information that isn’t cached on any of the local DNS servers, which then requires a response from authoritative servers.  This magnifies the effect of the attack on the modest number of key servers.

    This would explain why the Ricochet outages were briefer, spotty, and didn’t occur at the same time for all users. It’s reasonable to expect the data required to respond to common queries to be cached on local servers. As that cached data expires, however, the local servers must search up the hierarchy for the info they need. That cached data will expire at different times on different servers, but eventually it will age out everywhere. Once all the cached data has expired, the authoritative server must respond or the query will fail, even if your local servers aren’t being attacked.
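    The TTL mechanics described here can be sketched in a few lines. This is a toy, hypothetical resolver cache (the address, TTL value, and hostname are all invented), not real DNS code, but it shows why cached names keep resolving for a while even when the authoritative server is down:

```python
class DnsCache:
    """Toy resolver cache: answers from cache while the TTL is valid,
    otherwise falls through to a (possibly unreachable) authoritative server."""

    def __init__(self, authoritative_up=True):
        self.records = {}                  # name -> (address, expiry timestamp)
        self.authoritative_up = authoritative_up

    def resolve(self, name, now):
        entry = self.records.get(name)
        if entry and now < entry[1]:
            return entry[0]                # cache hit: no authoritative query needed
        if not self.authoritative_up:
            raise LookupError(f"{name}: cache expired, authoritative server unreachable")
        address, ttl = "203.0.113.7", 300  # pretend authoritative answer, 5-minute TTL
        self.records[name] = (address, now + ttl)
        return address

cache = DnsCache()
cache.resolve("www.example.com", now=0)        # populates the cache
cache.authoritative_up = False                 # simulate the DDoS on the authoritative server
print(cache.resolve("www.example.com", now=100))   # prints 203.0.113.7: still cached
try:
    cache.resolve("www.example.com", now=400)      # TTL expired -> the query now fails
except LookupError as e:
    print("query failed:", e)
```

    Different caches populated at different times expire at different times, which matches the spotty, staggered outages described above.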

    • #136
  17. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    Spin:

    This is actually incorrect. We can fix the user. Maybe not Granny. But in general, user training works. Most companies that conduct PhishMe-type campaigns find that hit rates go way down as a result. Now, this may be like safety training or sexual harassment training: you have to do it regularly. But it does work.

    I’m somewhere in the middle on this. I do think it’s reasonable to expect some basic knowledge from consumers. The kind of stuff that’s in nearly every network product’s owner’s manual, the FAQ on your ISP’s website, etc. That would cover the basics like passwords, Wi-Fi security, antivirus, SSL, and software updates. It requires some effort, but it’s not that hard. Instead, many people just start doing stuff without concern for best practices. These people should be liable for their negligence – granny or not.

    On the other hand, if somebody installs a webcam in accordance with the manufacturer’s instructions for the purpose of monitoring their home when traveling, they shouldn’t be liable if that device is vulnerable to a SQL injection attack, gets compromised, and ends up part of a botnet. If you want to stifle technological advancement, hold users accountable for things they can’t possibly understand. Their only recourse will be to avoid new technologies altogether.

    Does that mean the manufacturer should be liable in this example?  Maybe.  Best practice applies to them too.  Network device manufacturers should be aware of common attack vectors and bugs, and their devices should be secured against those things.  If they neglect to do that they should be liable for that neglect.  If they perform due diligence and their devices get compromised anyway, that’s just bad luck and the manufacturer shouldn’t be liable.

    • #137
  18. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    Joe P: Where did that moral panic go? I’m not sure if it’s still around or not, but I’m 99% certain that government regulation did not address it in any way whatsoever.

    This is just speculation and shouldn’t be interpreted as an argument for government intervention, but Brussels imposed a bunch of requirements on Microsoft over the years. It’s possible that EU mandates brought features to our market that MS might not otherwise have offered.

    • #138
  19. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    Sorry, anonymous. I should have looked more closely at your attached output. Dyn is just the registrar; AWS is authoritative.

    • #139
  20. Dave S. Member
    Dave S.
    @DaveS

    anonymous:

    Chuck Enfield:

    anonymous: Ricochet does use Amazon Route 53 as its DNS provider…

    The registrar for their domain name (and hence WHOIS server) is Dyn, but that shouldn’t affect DNS resolution.

    Yes, it does. The authoritative server plays a role, and if that server isn’t available things will eventually stop working. In fact, one of the strategies in DoS attacks against DNS is to query information that isn’t cached on any of the local DNS servers, which then requires a response from authoritative servers.

    The following is a dig +trace ricochet.com. It started from the root servers and never hit a Dyn.dyndns server.

     

    I agree it doesn’t look like Dyn is authoritative for ricochet.com.  Perhaps some of the elements on the site had a dependency on Dyn.  I would still expect the site to partially load, but maybe this is why some folks faced issues around the same time.


    • #140
  21. Damocles Inactive
    Damocles
    @Damocles

    anonymous: Ricochet does use Amazon Route 53 as its DNS provider:

    Whoops, my mistake… sorry for the misinformation!

    • #141
  22. Spin Inactive
    Spin
    @Spin

    Chuck Enfield: Does that mean the manufacturer should be liable in this example?

    If I make a car that has a fundamental flaw, say the steering mechanism fails for some reason related to poor design, am I liable for the deaths that flaw causes? I do not expect the driver of the car to inspect the steering mechanism for manufacturing defects.

    • #142
  23. Fake John/Jane Galt Coolidge
    Fake John/Jane Galt
    @FakeJohnJaneGalt

    DNS attacks are not new.  It is one of the reasons that I tend to split my DNS clients search over two or more services.  For my main I use the local ISP and for the backup I use google and others.

    • #143
  24. James Gawron Inactive
    James Gawron
    @JamesGawron

    Spin:

    Chuck Enfield: Does that mean the manufacturer should be liable in this example?

    If I make a car that has a fundamental flaw, say the steering mechanism fails for some reason related to poor design, am I liable for the deaths that flaw causes? I do not expect the driver of the car to inspect the steering mechanism for manufacturing defects.

    Spin,

    Aside from the legal liability debate, isn’t the systematic approach more appropriate? If you own a Ferrari capable of 150 mph and the speed limits are set at 65 mph, you can’t complain. If the net sets standards for the proper security protocol for the “Internet of Things,” and the existing devices that don’t meet standards are locked out because of it, the user can’t complain to the net. They might be very angry with the manufacturer who sold them the cheap product. The smart manufacturer who doesn’t want to lose business could give a trade-in allowance on the old component.

    That’s life on the information super-highway.

    Regards,

    Jim

    • #144
  25. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    Spin:

    Chuck Enfield: Does that mean the manufacturer should be liable in this example?

    If I make a car that has a fundamental flaw, say the steering mechanism fails for some reason related to poor design, am I liable for the deaths that flaw causes? I do not expect the driver of the car to inspect the steering mechanism for manufacturing defects.

    That’s just one possible analogy.  The distinction is that a car clearly needs to be steered to be useful, and if routine use renders it incapable of that, then yes, you should be liable.  If, on the other hand, I drive your car like a stuntman in a Dukes of Hazzard episode and the steering fails, then unless you advertise your car as being suitable for those conditions, you should not be liable.

    I’m not a lawyer, but I’m sure there are many law books focused on product liability, so I’m not going to attempt to cover every possible base here.  I don’t think I’m suggesting any new law.  My understanding is that manufacturers may be liable if their products are unfit for purpose or unsafe when used as directed.  This would include the ability to withstand the environmental conditions in which the device is intended to operate.  Internet-accessible electronic devices operate in an environment that includes frequent port scans and automated exploration for common defects.  If a device can’t withstand these conditions, it’s unfit for purpose. That said, if a device can withstand routine conditions, but fails under expert abuse or emergent conditions that couldn’t be anticipated by the manufacturer, then the manufacturer isn’t negligent and shouldn’t be liable.

    Consider the case of locks.  If you sell a bike lock that opens when shaken vigorously by hand, you should expect to be sued when a bunch of people have their bikes stolen.  If you sell a lock that takes an experienced locksmith 30 minutes to pick, it’s a pretty good lock and you shouldn’t be liable if somebody happens to pick it.  Perfection is not, and should not be, the standard.  Similarly, the ability to compromise a device is not inherently a defect, but a failure to protect a device from well-known and easily-defensible attack vectors is a defect.

    If I ran Dyn, I’d look hard at suing some of the larger manufacturers involved. That said, I understand it may not make sense to do so. It may be an easy case to win, but nearly impossible to collect any damages.

    • #145
  26. Chuck Enfield Inactive
    Chuck Enfield
    @ChuckEnfield

    James Gawron: If the net sets standards for the proper security protocol for the “Internet of Things” and the existing devices that don’t meet standards are locked out because of it, the user can’t complain to the net.

    Is this just an idea, or can you explain how this locking out might work?

    • #146
  27. James Gawron Inactive
    James Gawron
    @JamesGawron

    Chuck Enfield:

    James Gawron: If the net sets standards for the proper security protocol for the “Internet of Things” and the existing devices that don’t meet standards are locked out because of it, the user can’t complain to the net.

    Is this just an idea, or can you explain how this locking out might work?

    You create a protocol for such devices. If the devices don’t show it, they don’t get on-line. New devices will have it. Old devices will not. For instance, when you go through an operating system upgrade, some of your drivers don’t work. You will be forced to download new drivers (which may happen automatically). In the case of the Internet of Things, the protocol is not in software but in firmware. The net would check for the protocol before allowing access. The device would require a firmware upgrade or complete replacement before it could get back on the net. The cost and irritation would be ameliorated by each manufacturer. The date when the protocol will be implemented would be well known, so people can be alerted in advance that they will have a problem and what their particular solution will be. The Y2K event was such a situation.

    Regards,

    Jim
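    A purely hypothetical sketch of the gating idea Jim describes: the network edge checks each device for a signed compliance token at the current protocol version before granting access. Everything here is invented for illustration: the `COMPLIANCE_VERSION` constant, the HMAC token scheme, and the shared key are stand-ins for whatever real certificate-based protocol would actually be used; this is not an existing standard.

```python
# Hypothetical "lock out non-compliant devices" gate. All names and the
# token scheme are illustrative assumptions, not an existing protocol.
import hashlib
import hmac

REGISTRY_KEY = b"shared-secret-held-by-certifying-body"  # stand-in for real PKI
COMPLIANCE_VERSION = 2                                   # current required protocol version

def issue_token(device_id: str, version: int) -> str:
    """Certifying body signs (device_id, firmware protocol version)."""
    msg = f"{device_id}:{version}".encode()
    return hmac.new(REGISTRY_KEY, msg, hashlib.sha256).hexdigest()

def admit(device_id: str, version: int, token: str) -> bool:
    """Edge check: signature must verify AND the version must be current."""
    expected = issue_token(device_id, version)
    return hmac.compare_digest(expected, token) and version >= COMPLIANCE_VERSION

new_cam = issue_token("cam-123", 2)
old_dvr = issue_token("dvr-456", 1)      # firmware predates the protocol
print(admit("cam-123", 2, new_cam))      # True: compliant device gets online
print(admit("dvr-456", 1, old_dvr))      # False: locked out until a firmware upgrade
```

    The sketch also shows the practical objection raised later in the thread: the gate only works if every access point enforces it, since the device itself is otherwise indistinguishable from any other IP host.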

    • #147
  28. Spin Inactive
    Spin
    @Spin

    James Gawron: If the devices don’t show it, they don’t get on-line. New devices will have it.

    So how would the network determine whether something was an “IoT” device versus some other type of device? Let us remember that “IoT” is just a buzz phrase, and that most of these devices are capable of communicating on an IP network the exact same way your computer does.

    • #148
  29. James Gawron Inactive
    James Gawron
    @JamesGawron

    Spin:

    James Gawron: If the devices don’t show it, they don’t get on-line. New devices will have it.

    So how would the network determine whether something was an “IoT” device versus some other type of device? Let us remember that “IoT” is just a buzz phrase, and that most of these devices are capable of communicating on an IP network the exact same way your computer does.

    Spin,

    Because the net will now be asking for the new protocol before it allows access.

    Regards,

    Jim

    • #149
  30. Eric Hines Inactive
    Eric Hines
    @EricHines

    anonymous: A firmware upgrade will normally install a new certificate.

    How would they get the upgrade without access to the Internet?

    Eric Hines

    • #150