Anyone using HTTPS in a CN?

Is anyone using HTTPS in a CN when disconnected from the internet?

Devon at Learning Equality (an offline version of Khan Academy) was asking me recently about HTTPS. If I understood the question correctly, the problem is that the browser doesn’t like the certificates because it can’t verify them without an active internet connection.

Has anyone else hit this problem, and if so did you come up with a solution?

  • Mitra
1 Like

Hi Mitra, good question!

Here we already use HTTPS on the offline network.
Actually we had to do a whole trick to install the certificate with openssl. At the time we had to connect the server momentarily to the internet, via ZeroTier to an AlterMundi server that offered us a static IP, in order to install the certificate. I remember that we needed port 80 open to install the certificate.
But the certificate only lasted 3 months, and after that we couldn’t renew it, because renewal depended on redoing the whole configuration on the AlterMundi server, to which we no longer had access.
Regarding DNS, it was simple to solve internally with LibreMesh; externally we acquired a public domain.
Currently we are in the process of acquiring a static IP just to renew the certificates every 3 months.
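For reference, the flow described above maps onto certbot’s standalone mode (the domain and email below are hypothetical placeholders, not the actual AlterMundi setup). This is a sketch, not their exact commands:

```shell
# certbot's standalone mode runs its own temporary web server on
# port 80 to answer the HTTP-01 challenge -- which is why that port
# had to be reachable while the server was briefly online.
certbot certonly --standalone --preferred-challenges http \
    -d cn.example.org --agree-tos -m admin@example.org

# Let's Encrypt certs expire after 90 days, hence the renewal every
# ~3 months; whenever the server can get online, this renews any
# cert that is close to expiry:
certbot renew
```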

1 Like

No, I don’t think the problem has anything to do with being offline.

CA (Certificate Authority) certificates are stored in the client’s OS. So a valid certificate will be checked against that.

The problem is either with the client OS and/or browser being outdated, or they are issuing self-signed certificates.

Self-signed certificates will always pose a one-off issue, in that the ‘visitor’ needs to accept the certificate and add it to their stored exceptions.
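For concreteness, a self-signed cert of the kind described here can be generated with openssl (the hostname is a made-up example). Since it is self-signed, the subject and issuer are identical, and no CA in the client’s trust store vouches for it, which is exactly why browsers prompt for an exception:

```shell
# Generate a self-signed certificate and key (hypothetical hostname).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout kolibri.key -out kolibri.crt \
    -subj "/CN=kolibri.local"

# Self-signed means subject == issuer:
openssl x509 -in kolibri.crt -noout -subject -issuer
```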

I suspect the problem is with outdated CA certs on the clients, though.

1 Like

I understand they are using Let’s Encrypt certs, so I would have thought the CA would be there?

Hi all, and thanks for posting Mitra -

To clarify our situation: users set up instances of the Kolibri server on their local networks, often just at IP addresses, or sometimes at locally-defined hosts (e.g. http://kolibri). Sometimes there are multiple instances of Kolibri on the same network.

We would like to be able to make the connections secure, and allow people to set up a server on their network at e.g. https://kolibri.

A self-signed certificate does not seem like a viable solution in situations where users bring their own devices.

We’re interested in HTTPS more for the encryption, and less for the identity verification. I believe this problem might not be solvable but maybe there’s a clever strategy out there.

Thanks for any tips you may have!

I haven’t tried this personally, but it would appear that the answer lies with the mkcert utility. mkcert not only creates the local certificate but is also its own issuing authority. Some more info at
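A sketch of the mkcert workflow (again, untried here; the hostnames are example values):

```shell
# Create a local root CA and install it into this machine's trust
# stores (system store, plus browsers where supported):
mkcert -install

# Issue a cert for whatever names the server will be reached at:
mkcert kolibri.local 192.168.0.10
```

The catch is that the generated CA is only trusted on machines where `mkcert -install` was actually run.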

1 Like

Hey Steve, this looks good - I’ll test it out.

@steve, reading the docs for that suggests it will only work if you run mkcert at the command line on each device a browser is running on, i.e. only for Linux devices (or Macs), as it’s installing a new CA for the browser.

you’re right @mitra, I realised after I posted it that I hadn’t quite thought it through beyond localhost. :roll_eyes:

I’m wondering why a Let’s Encrypt cert wouldn’t work - it could be set up at install time, and I’d have thought the CA would be in the browser? But maybe the problem is knowing what domain to put in the certificate - something wild-cardy might work?

We’re interested in HTTPS more for the encryption, and less for the identity verification.

They can’t fully be decoupled, though – as it doesn’t really matter if you’re sending your data over an encrypted channel if the recipient is a bad actor.

But maybe the problem is knowing what domain to put in the certificate - something wild-cardy might work ?

Yeah, I see DNS as the major challenge. A CA like Let’s Encrypt wouldn’t be willing to sign a .local/mDNS domain (as they’ll only sign something that you can prove ownership of, falling under a global TLD). However, we could set up a custom domain and issue partner-specific subdomain certificates (with Let’s Encrypt on the backend). The central (online) DNS records could then be set up to point to the static IP address of the offline server, in case someone connects to the LAN with a device (e.g. a cellphone) that has its own parallel Internet connection. And then the LAN gateway would need to run its own DNS server with a hardcoded entry mapping that subdomain to the appropriate IP as well, for full offline support. This would probably only work for certain network topologies, and would require a few steps to set up, but it would at least not require any self-signed certs or client-side CA root cert list modifications (so we’d be able to support BYOD).

Good point about Let’s Encrypt - it might be worth trying to reach out to them; I think we have contacts (there are good contacts between the EFF and the Archive).

I don’t think that complicated network topology is going to work in most setups, mostly because of the complexity, and because of working it in with all the other complexities. After all, most platforms can’t even make it work with people connected to a server both directly via its hotspot and via a router!

The place I disagree, though, is on the decoupling between identity and encryption. For a bad actor to capture encrypted communications, they have to be able to set up their server to pretend to be the destination (e.g. a person-in-the-middle attack), but AFAIK this is going to be an order of magnitude harder in most CNs than just grabbing a sniffer and looking at unencrypted packets.

Could be interesting to see what Let’s Encrypt would say, yeah – but I don’t think they’re the ones blocking any of this (DNS is the main challenge). Though if there were some trick for them to be able to issue certs for private IP addresses or .local domains, it could be interesting, but seems unlikely (as no trusted CA should be allowed to do this, as they can’t verify ownership).

I agree that asking end-users to set up the network pieces for the “general case” would be hard, but the specific case I’m thinking of is preloaded hotspot devices, like a Raspberry Pi or a RACHEL (Intel CAP), which serve as both the access point and the web server. These devices already act as a DNS server (via dnsmasq), have their own fixed IP, and could be preloaded with a unique cert that does what we described above. No special configuration would then be needed by the end user. If a router were being used instead of something integrated into the server, then one additional step would be necessary: logging into the router and setting the DNS server to point to the local server’s IP. And if it’s a semi-connected scenario (with a low-bandwidth upstream Internet connection), then even without local DNS set up, the central DNS would kick in and point clients to the correct local IP for the server.
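As a sketch of the dnsmasq piece of this (the subdomain and IP are made-up examples), the hardcoded entry on the hotspot device could be a single line, so local clients resolve the public subdomain to the device itself even with no upstream DNS:

```
# /etc/dnsmasq.conf fragment (hypothetical names/addresses):
# answer queries for this device's public subdomain with its LAN IP
address=/pi-001.kolibri.example.org/192.168.4.1
```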

In terms of identity/MITM, an attack scenario could be:

  • Set up a spoofed access point with the same SSID as the one listed on instructions on the wall.
  • People accidentally connect to the spoofed access point instead of the legit one (no way to tell the difference).
  • They would go to the Kolibri web UI. It might have HTTPS set up as well, and look “trusted”. They enter their credentials, which are then logged by the attacker.

This is where identity comes in – in principle, the poster on the wall could also list the expected subdomain. If we only issued a single certificate for each subdomain (or only reissued to someone from the same org/email), then there’s no way the attacker could provide HTTPS over that same subdomain, so the certificate is serving its role as an identity provider. This of course assumes that the human is paying attention to this, which is frequently going to be unlikely (just as it is on the Internet with phishing sites – most users don’t know how to interpret URLs). However, in Kolibri we also do a lot of P2P backend communication over the local network – e.g. syncing user data from one device to another – and computers are far better at paying attention to identity, so this could be a huge security boost for the backend communications.

Oh, for sure – sorry, to clarify: I wasn’t saying that encryption itself doesn’t help with privacy and raise the bar for an attacker (especially as many of our users have router encryption turned off, which often allows more client devices to connect over the same access point, a common bottleneck). I was just saying that for our use case (especially on the backend, where we can pin particular peers as being trusted/known, based on their subdomain), we do also get extra protections from paying attention to certificate identity.

Yes - your identity/MITM attack is possible, but it also carries a risk of being discovered (two APs with the same name). I’m comparing that to downloading a single sniffing app and looking at packets, which is effectively blocked by HTTPS (even without identity), and is not really blocked, AFAIK, by wireless encryption.

If you are right, and it can’t be solved, then it would take a long-term lobbying effort (on browser vendors) to support encryption even when no certs are available - not impossible, but unlikely.

Wow… this is a really interesting thread… thanks for contributing so much!

My two cents on this:

  • if you purchase SSL certs, you can get certs valid for up to two years… you could use the same cert for all your devices, but anyone who could access one of them could be a MITM for any other
  • you could have a secondary system where devices get registered with a server of yours, and that server would request certs on their behalf (as it will have a public IP address) and hand them over to your devices. You would need an exclusive DNS record that is accessible from the web, like a wildcard, with each device having its own unique address under it… you could redirect from a general URL to the one with the TLS cert. You will need to deal with the DNS on your network (some networks would do it for you).

I have been analyzing this issue in the context of a similar issue: HTTPS is required if Javascript code wants to access the full potential of the browser: GPS, sensors, storage… all of it is only possible with HTTPS. We need it in order to use it in the Progressive Web App of the LibreRouter. Here is what I have thought about it:

TL;DR: for our use case, though encryption is important, it is harder to obtain, so… we could relinquish security in favour of having access to the added features of PWAs. So the way I saw it, you could download the TLS cert (or something like that). We have not explored it yet.

The biggest issue is that web browsers have been made to exist within the web, which exists within the internet, so they assume things that are certainly nonexistent in offline networks, like TLS certs (instead of pep, for example), or things that might complicate matters further for offline or almost-disconnected networks, like DNS over HTTPS.

Hi @nicopace, thanks for the info -

In addition to the privacy aspects, we too would love to run Kolibri as a PWA, as this would greatly improve the UX for learners on flaky networks, or those wanting to borrow tablets from school to do homework.

The tradeoffs you’re describing make sense as a way to enable PWAs, if that is the primary goal. It also offers at least a modicum of protection against casual network snooping compared to plain http.

On the other hand, it might lead people to trust the security more than they should, and make best practices (e.g. “always use HTTPS”) harder to teach.

Shall we write a wiki page about all this?

That would be great!

Oh, and another approach is to distribute your own certs in apps… you can pin any cert that you want in your own custom apps…

Shall we write a wiki page about all this?

1 Like

Thanks @mitra for this.
The long-lived certificates were one of the options we thought could be explored.
Though registrars could be willing to help, for the general case of anyone who might not have a relationship with them (or time to build one), it is enough to know that a valid option could be to get one of these certs every 2-3 years.
I still think that an automated Let’s Encrypt cert every three months is also a possible approach, if the fetching and deployment of the cert are also automated.
I haven’t come across a community that is so disconnected that no member of it gets a connection to the outside world within three months.
If that were the case, you could stream the TLS certificate over satellite, with something like Othernet/Toosheh to beam down the certificate.

I asked Peter Eckersley, who started Let’s Encrypt - he suggested just using a long-lived certificate. It could be for a top-level server, or more likely just a wildcard, since we aren’t worried about impersonation; we are worried about making sure the connection is encrypted over the air.

He said that Let’s Encrypt was designed to do one thing well, automated, but he thought existing top-level domain registrars would be open to issuing a special-case long-lived certificate if approached.

No access from the CN to the internet would be required except to transfer the initial certificate.

He’d be happy to go into this in more detail after the current coronavirus crisis (we are collaborating on privacy/contact-tracing so a little busy at the moment).

1 Like