Anyone using HTTPS in a CN?

We’re interested in HTTPS more for the encryption, and less for the identity verification.

They can’t fully be decoupled, though – as it doesn’t really matter if you’re sending your data over an encrypted channel if the recipient is a bad actor.

But maybe the problem is knowing what domain to put in the certificate – something wildcard-y might work?

Yeah, I see DNS as the major challenge. A CA like Let’s Encrypt wouldn’t be willing to sign a .local/mDNS domain (as they’ll only sign something that you can prove ownership of, and falling under a global TLD). However, we could set up a custom domain (kolibrilocal.com) or something, and issue partner-specific subdomain certificates (with Let’s Encrypt on the backend) such as yourproject.kolibrilocal.com. The central (online) DNS records could then be set up to point to the static IP address of the offline server, in case someone connects to the LAN with a device (e.g. a cellphone) that has its own parallel Internet connection. And then the LAN gateway would need to have its own DNS server running with a hardcoded entry for that subdomain to map to the appropriate IP as well, for full offline support. This would probably only work for certain network topologies, and would require a few steps to set up, but would at least not require any self-signed certs or client-side CA root cert list modifications (so we’d be able to support BYOD).
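As a sketch of what that dual DNS setup could look like (the 192.168.1.10 address here is purely illustrative – it stands in for the offline server's static LAN IP):

```shell
# On the public DNS (e.g. at the registrar for kolibrilocal.com): point the
# partner subdomain at the offline server's static LAN address, so devices
# with a parallel Internet connection still resolve it:
#
#   yourproject.kolibrilocal.com.  A  192.168.1.10
#
# On the LAN gateway's own DNS server (dnsmasq shown), hardcode the same
# mapping for fully offline clients, in /etc/dnsmasq.conf or a conf.d snippet:
address=/yourproject.kolibrilocal.com/192.168.1.10
```

With both records in place, clients resolve the same name to the same local IP whether or not they can reach the Internet, and the Let's Encrypt cert for the subdomain validates either way.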

Good point about Let’s Encrypt – and it might be worth trying to reach out to them; I think we have contacts (there are good contacts between the EFF and the Archive).

I don’t think that complicated network topology is going to work in most setups, mostly because of the complexity, and because of how it has to fit in with all the other complexities. After all, most platforms can’t even make it work with people connected to a server both directly via its hotspot and via a router!

The place I disagree, though, is on the decoupling between identity and encryption. For a bad actor to capture encrypted communications, they have to be able to get their server set up to pretend to be the destination (e.g. as a person-in-the-middle attack), but AFAIK this is going to be an order of magnitude harder in most CNs than just grabbing a sniffer and looking at unencrypted packets.

Could be interesting to see what Let’s Encrypt would say, yeah – but I don’t think they’re the ones blocking any of this (DNS is the main challenge). If there were some trick for them to be able to issue certs for private IP addresses or .local domains, that would be worth exploring, but it seems unlikely (as no trusted CA should be allowed to do this, since they can’t verify ownership).

I agree that asking end-users to set up the network pieces for the “general case” would be hard, but the specific case I’m thinking of is preloaded hotspot devices like a Raspberry Pi or RACHEL (Intel CAP), which serve as the access point and also the web server. These devices already act as a DNS server (via dnsmasq), have their own fixed IP, and could be preloaded with a unique cert that does what we described above. No special configuration would then be needed by the end user. If a router were being used instead of something integrated into the server, then one additional step would be necessary: logging into the router and setting the DNS server to point to the local server’s IP. And if it’s a semi-connected scenario (with a low-bandwidth upstream Internet connection), then even without local DNS set up, the central DNS would kick in and point clients to the correct local IP for the server.

In terms of identity/MITM, an attack scenario could be:

  • Set up a spoofed access point with the same SSID as the one listed on instructions on the wall.
  • People accidentally connect to the spoofed access point instead of the legit one (no way to tell the difference).
  • They would go to the Kolibri web UI. It might have HTTPS set up as well, and look “trusted”. They enter their credentials, which are then logged by the attacker.

This is where identity comes in – in principle, the poster on the wall could also list the expected subdomain. If we only issued a single certificate for each subdomain (or only reissued to someone from the same org/email), then there’s no way the attacker could provide HTTPS over that same subdomain, so the certificate is serving its role as an identity provider. This of course assumes that the human would be paying attention, which is frequently unlikely (just as it is on the Internet with phishing sites – most users don’t know how to interpret URLs). However, in Kolibri we also do a lot of P2P backend communication over the local network – e.g. syncing user data from one device to another – and computers are far better at paying attention to identity, so this could be a huge security boost for the backend communications.
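Having the backend pay attention to identity can be as simple as pinning each known peer’s certificate fingerprint. A minimal sketch of the idea (this is not Kolibri’s actual implementation – the table of pinned peers and the hostnames are made up for illustration):

```python
import hashlib
import ssl


def cert_fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a certificate's DER bytes, as lowercase hex."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()


def peer_is_known(hostname: str, pem_cert: str, pinned: dict) -> bool:
    """Accept a sync peer only if its cert matches the fingerprint we
    recorded for that subdomain when the peer was first registered."""
    return pinned.get(hostname) == cert_fingerprint(pem_cert)
```

In a real sync client you might fetch `pem_cert` with `ssl.get_server_certificate((host, 443))` before syncing, and refuse to talk to any peer whose fingerprint doesn’t match the pinned value – no human attention required.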

Yes – your identity/MITM attack is possible, but it also runs the risk of being discovered (two APs with the same name). I’m comparing that to downloading a simple sniffing app and looking at packets, which is effectively blocked by HTTPS (even without identity verification) and, AFAIK, is not really blocked by wireless encryption.

If you are right, and it can’t be solved with encryption alone, then it would take a long-term lobbying effort (aimed at browser vendors) to support encryption even when no certs are available – not impossible, but unlikely.

Oh, for sure – sorry, to clarify: I wasn’t saying that encryption itself doesn’t help with privacy and raise the bar for an attacker (especially as many of our users have router encryption turned off, which often allows more client devices to connect over the same access point, a common bottleneck). I was just saying that for our use case (especially on the backend, where we can pin particular peers as being trusted/known, based on their subdomain), we do also get extra protections from paying attention to certificate identity.

Wow… this is a really interesting thread… thanks for contributing so much!

My two cents on this:

  • if you purchase SSL certs, you can get certs valid for up to two years… you could use the same cert for all your devices, but anyone who could access one of them could then MITM any of the others
  • you could have a secondary system where devices get registered with a server of yours, and that server would request certs on their behalf (as it will have a public IP address) and hand them over to your devices. You will need an exclusive DNS record that is accessible from the web, like *.my.rachelpi.org, with each device having a unique address under it… you could redirect from a general URL (like my.rachelpi.org) to the one with the TLS cert. You will need to deal with the DNS on your network (some networks like libremesh.org / librerouter.org would do it for you).

I have been analyzing this issue in the context of a similar one: HTTPS is required if JavaScript code wants to access the full potential of the browser – GPS, sensors, storage, … all of it is only possible over HTTPS. We need it in order to use it in the Progressive Web App of the LibreRouter. Here is what I have thought about it: https://hackmd.io/@nicopace/PWA-LibreRouter

TL;DR: for our use case, though encryption is important, it is harder to obtain, so… we could relinquish security in favour of having access to the added features of PWAs. So the way I saw it, you could download the app.thisnode.info TLS cert from app.thisnode.info/cert (or something like that). We have not explored it yet.

The biggest issue is that web browsers have been made to exist within the web, which exists within the Internet, so they assume things that are certainly nonexistent in offline networks, like TLS certs (instead of pep, for example), or things that might complicate matters further for offline or mostly-disconnected networks, like DNS over HTTPS.

Oh, and another approach is to distribute your own certs on Apps… you can pin any cert that you want on your own custom apps…

Shall we write a wiki page about all this?

Hi @nicopace, thanks for the info -

In addition to the privacy aspects, we too would love to run Kolibri as a PWA as this would greatly improve the UX for learners on flaky networks or wanting to borrow tablets from school to do homework.

The tradeoffs you’re describing make sense as a way to enable PWAs, if that is the primary goal. It also offers at least a modicum of protection against casual network snooping compared to plain http.

On the other hand, it might lead people to trust the security more than they should, and make best practices (e.g. “always use HTTPS”) harder to teach.

Shall we write a wiki page about all this?

That would be great!

I asked Peter Eckersley, who started Let’s Encrypt – he suggested just using a long-lived certificate, which could be for a top-level server, or more likely just a wildcard, since we aren’t worried about impersonation; we are worried about making sure the connection is encrypted over the air.

He suggested that Let’s Encrypt was designed to do one thing well, in an automated way, but he thought existing top-level domain registrars would be open to issuing a special-case long-lived certificate if approached.

No access from the CN to the internet would be required except to transfer the initial certificate.

He’d be happy to go into this in more detail after the current coronavirus crisis (we are collaborating on privacy/contact-tracing so a little busy at the moment).

Thanks @mitra for this.
The long lived certificates were one of the options we thought could be explored.
Though registrars could be willing to help, for the general case – anyone who might not have a relationship with them, or the time to build one – it is enough to know that a valid option could be to get one of these certs every 2–3 years.
I still think that an automated LetsEncrypt cert every three months is also a possible approach, if the fetching of the cert and the deployment is also automated.
I haven’t come across a community so disconnected that no member of it gets a connection to the outside world within three months.
If that were the case, you could stream the TLS certificate over satellite with something like Othernet/Toosheh to beam down the certificate.

Sure, Nico – theoretically possible, but you’ve got to have all of that work seamlessly for non-technical users, and if they fail to renew in time, then the system goes down until they get a new cert.

Indeed…

So then the options would be:

  1. Paid long-lived certs: simple and (maybe) expensive.
  2. Paid shared long-lived cert: simple and cheap, but no security among the people who share it.
  3. Let’s Encrypt cert: requires regular renewal, and if that doesn’t happen the system breaks until the cert gets updated.
  4. Self-signed cert: you need to accept the dreaded browser warnings.

The other option, if you have the chance, is not using the browser at all, and using your own app, where you can do SSL pinning, which has its own pros and cons, and introduces the complexity of how you distribute your own app (and mobile/desktop support).

Any other option to consider?

Good summary @nicopace

The most important issue is what @devon mentioned: use a fully qualified domain name, such as myofflinedomain.com. But go even further than that – also set up a host on the Internet with some information about the site, to let users know about the content if they access it through another connection by accident, or that it is zero-rated or free, or whatever the case may be, when accessed from their network. (If they’re accessing it on the offline network, a simple script on your page can tell them, by just checking whether the IP address of the request is in the network’s range.)
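That “which network am I on?” check is cheap to do server-side. A minimal sketch in Python (the 10.0.0.0/24 range is just an assumption – substitute your CN’s actual subnet):

```python
import ipaddress

# Assumed CN address range; replace with your own network's subnet.
LOCAL_NET = ipaddress.ip_network("10.0.0.0/24")


def request_is_local(client_ip: str) -> bool:
    """True if the request came from inside the offline network,
    so the page can adapt its messaging accordingly."""
    return ipaddress.ip_address(client_ip) in LOCAL_NET
```

A web app would feed this the client address from the request (e.g. the remote address or an X-Forwarded-For header, depending on the proxy setup) and branch on the result.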

Then on your main router you can redirect the traffic to the local site via NAT, or you can use split DNS to point, e.g., myofflinedomain.com to the local IP address of the server with the content.

The fact that technicians have to click past certificate warnings when configuring wireless access points sets a very bad precedent. I want to STRONGLY URGE every community network to help their communities understand how computers are different from the real world, and how anybody can steal information that you might not think is valuable, but that is valuable to, for example, the financial world or the police. Never teach people to click past a certificate warning, to use self-signed certificates, or to install a CA in their browser – for the simple reason that this makes it look like an acceptable thing to do, when it is not. Then, when a criminal prompts someone to do it in order to steal their identity and take out loans in their name or with their banking details (which they, not the criminal, will be held responsible for), it won’t be something familiar they’ve done before – it will be something they’ve been warned never to do, and they will be more resilient against exploitation.

I think the best option is automated, periodic cert renewal via Let’s Encrypt. It requires one (or more, if you like) trusted server in the CN that is connected to the Internet, with the responsibility of getting the updated certs regularly and pushing them out to the local servers.

For example, you may have a CN node at 10.0.0.5 hosting a chat server that you want any device in the CN to access, even if those devices don’t have Internet access. Let’s say you own the domain example.com and have a local DNS server in the CN that points chat.example.com to 10.0.0.5. In this case the chat server at 10.0.0.5 doesn’t even need Internet access; it only needs to be able to connect to the cert server that does.

To do this, you’d set up the cert server to use Let’s Encrypt’s DNS-01 challenge, which basically means publishing a TXT record as proof that you own example.com. Once Let’s Encrypt sees your challenge response via the TXT record, it will issue you a cert, which you then push out to 10.0.0.5 (via the CN, or by running a USB key over – it doesn’t matter). Note that the CA doesn’t care what IP address ends up serving the cert, as the verification is completely decoupled from the web server.
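For reference, the manual flavour of that flow looks roughly like this with certbot (dehydrated with a DNS hook automates the same dance; chat.example.com and 10.0.0.5 are the example names from above):

```shell
# On the Internet-connected cert server: request a cert for the offline
# host's name using the DNS-01 challenge (manual mode shown for clarity).
certbot certonly --manual --preferred-challenges dns -d chat.example.com

# certbot prints a validation token; publish it at your DNS provider as:
#
#   _acme-challenge.chat.example.com.  TXT  "<token>"
#
# then continue, and copy the issued fullchain.pem + privkey.pem over to
# the offline server at 10.0.0.5 (over the CN, or by USB key).
```

With a DNS provider API (as in the dehydrated + Digital Ocean setup mentioned below), the TXT publishing step is scripted and the whole cycle runs unattended.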

When someone in the CN asks a DNS server – whether the local one or an Internet DNS server – you’d want to direct them to 10.0.0.5. The client will then receive from 10.0.0.5 the valid cert issued by the Let’s Encrypt CA and be happy with it.

This setup is common in home networks (I do this for my own home devices with local IPs), and in companies that run a lot of web servers and delegate a single node to handle certs for everyone. Of course, these are all trusted environments – in the CN case the cert server basically has the capability to act as anyone who depends on it, but it also handles all the complexity on behalf of the local servers.

I have some code here on how to set up the cert server to get certs for a list of domains; it uses dehydrated to do Let’s Encrypt’s DNS-01 challenge, and Digital Ocean APIs to automate the TXT publishing. Everything should be in the [ crontab, dehydrated, nginx ] folders and should only take a couple of hours to set up, assuming you already have a domain name to use and are OK with pointing NS records to Digital Ocean. The part I don’t have here is how to push the fetched certs to 10.0.0.5, but that distribution strategy is up to the CN, and should be straightforward.

Lastly, this may be a useful read – it’s essentially the same problem. I think I am months late to this thread, but I hope this helps.

Thanks @benhylau for joining the conversation.

The biggest challenge lies in networks that don’t have reliable access to the Internet, or no access to the Internet at all.
I like the DNS TXT challenge approach, as it is very simple to set up, and then you can take the cert wherever you want.

And thanks everyone for working together to find ways around a tricky issue that appears because CNs (and offline systems, among others) are not being taken into consideration when these structures are designed.
I hope other encryption and trust chains can be developed in the future, where these discussions can inform the processes that shape them.

There is a very interesting discussion thread on wicg.io about exploring new protocols for trust in local networks, without relying on Internet infrastructure.

I encourage you all to jump in and contribute.

Just implemented SSL for the local server in my community network.

  1. Get a domain. I got moinho.app because it’s short and cheap.
  2. On a cloud server, generate a wildcard certificate using certbot. Instructions here.
  3. Copy the certificates and generate the crt and key files:
mv fullchain.pem moinho.app.crt
openssl pkey -in privkey.pem -out moinho.app.key

Change moinho.app to your domain.

  4. Copy the crt and key files to the certs directory of the server (in my case nginx-proxy).
  5. Done!

All my local services have SSL, and when you access moinho.app online you see a slightly different version of the app, also with SSL.

Next step would be to automate this, seems like @benhylau has done some work in that direction.
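For the automation step, a cron-driven renewal on the cloud server could be as small as this (the push-certs.sh deploy hook is hypothetical – it would be a script that scp/rsync’s the renewed files out to the local server):

```shell
# Hypothetical crontab entry on the cloud server: attempt renewal daily
# at 03:00. certbot only renews certs that are close to expiry, and the
# deploy hook runs only when a renewal actually happened.
0 3 * * * certbot renew --deploy-hook /usr/local/bin/push-certs.sh
```

The daily schedule is deliberate: Let’s Encrypt certs last 90 days, so a frequent, mostly no-op renewal check leaves plenty of slack if a run fails.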

Interesting thread on HackerNews

Ask HN: What’s your solution for SSL on internal servers?

FWIW, I’ve been using the howto at Lets Encrypt for internal hostnames | jsavoie.github.io for local https services and it works pretty cleanly.

And now there is https://www.getlocalcert.net/ with an accompanying debate on HackerNews
