
penultimatename

I stopped trusting this application when I realized some of my self-hosted services were exposed without authentication, despite the configuration being set. Apparently a bug had been open for months reporting that authentication didn't work.

I accept it's my fault for not re-testing this often, but what a huge issue. It's vanilla Nginx or Caddy from here on out; it's not worth introducing a third-party security risk.
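For what it's worth, the vanilla setup is small enough to audit at a glance. A sketch (the hostname, port, and paths are placeholders):

```nginx
# Hypothetical vanilla nginx reverse proxy with auth enforced by nginx itself,
# not by a management layer. Hostname, port, and file paths are placeholders.
server {
    listen 443 ssl;
    server_name service.example.com;

    ssl_certificate     /etc/letsencrypt/live/service.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/service.example.com/privkey.pem;

    # Basic auth lives in the config you wrote, so you can verify it directly.
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```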

KMnO4

I have a bug where every time my server restarts and Nginx Proxy Manager starts up, it assigns all my proxy hosts incorrect SSL certificates. To get my services working again I have to open NPM, manually open each host, press edit, and (without making any changes) press save.

This bug has existed for a year and a half on GitHub without any response from the developers.

As much as I like NPM, I need to move on to something more reliable.

jonasal

I also prefer to run Nginx as vanilla as possible, but automatic renewal of certificates is something which is really nice. I looked around at some of the more popular solutions, but didn't like the mounting of the docker socket required by [acme-companion][1], or the lack of bootstrapping capability of [nginx-certbot][2], so I made [something][3] that solves both of those issues: a self-contained Docker container that populates the certificate request from what you write in your Nginx configuration files. Please check it out if you have time, and I will gladly take any feedback if you have any!
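A minimal compose sketch of running it (based on the project README; the image tag, email, and paths here are placeholders you'd substitute):

```yaml
# Sketch only: image tag, email, and config path are placeholders.
services:
  nginx-certbot:
    image: jonasal/nginx-certbot:latest
    ports:
      - 80:80
      - 443:443
    environment:
      - CERTBOT_EMAIL=you@example.com   # account email for Let's Encrypt
    volumes:
      - nginx_secrets:/etc/letsencrypt          # persists issued certificates
      - ./user_conf.d:/etc/nginx/user_conf.d:ro # your plain nginx server blocks
volumes:
  nginx_secrets:
```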

[1]: https://github.com/nginx-proxy/acme-companion

[2]: https://github.com/wmnnd/nginx-certbot

[3]: https://github.com/JonasAlfredsson/docker-nginx-certbot

zrail

Mine just stops renewing certificates so I stopped fighting it. Haven't replaced it with vanilla Nginx yet but that's a project on my todo list.

metadat

You might also consider using the fully OSS Traefik for a load balancer and proxy. It aims to be friendly and approachable.

https://github.com/traefik/traefik

jacooper

You can try nginx-proxy; it's similar to Traefik but based on nginx and a bit easier.

https://github.com/nginx-proxy/nginx-proxy


spindle

I happened to need to do this in NixOS yesterday, and look how easy it is:

  security.acme = {
    acceptTerms = true;
    email = "mail@whatever.net";
  };

  services.nginx = {
    enable = true;
    recommendedGzipSettings = true;
    recommendedOptimisation = true;
    recommendedProxySettings = true;
    recommendedTlsSettings = true;
    virtualHosts."whatever.net" = {
      default = true;
      enableACME = true;
      addSSL = true;
      locations."/".proxyPass = "http://127.0.0.1:9955/";
    };
  };

and then each additional proxy takes just one more line. Not quite zero boilerplate, but almost!

evol262

This is rapidly becoming a bad joke. "You know how you can tell someone does Crossfit/is a vegan?"

You know how someone uses NixOS?

spindle

Right! I even felt slightly guilty for posting it ... but I posted it anyway because it's useful.

nyolfen

here's my caddy config for a reverse proxy with TLS:

sub.domain.com { reverse_proxy localhost:8080 }

mholt

You don't even need a config file for that!

    $ caddy reverse-proxy --from sub.domain.com --to localhost:8080
Done :)

kuschku

You mean

    sub.domain.com, sub.domain.com. { reverse_proxy localhost:8080 }
?

Because caddy currently doesn’t handle DNS names correctly, so you have to duplicate every single virtual hostname config (it’s a long-standing open bug)

[EDIT: Thanks to francislavoie for reminding me about the shorter syntax for this]

francislavoie

Actually, we mean:

    sub.domain.com, sub.domain.com. {
        reverse_proxy localhost:8080
    }
But seriously. Can you stop posting about this every single time there's even a vague mention of Caddy on HN? It's tired. You've gotten your answer before. Huge majority of people don't care about trailing dot domains.

Trailing dots are complicated and really not worth the complexity it would involve to support them. See https://daniel.haxx.se/blog/2022/05/12/a-tale-of-a-trailing-...

nousermane

Hey, at least caddy does case-insensitive hostname matching, so you don't need to repeat the domain 2^12 times </s>

Seriously though - that's a pretty bizarre bug, absent from both nginx and apache httpd. And that attitude in sibling comment doesn't help either.

tinco

That's super interesting. Automatic ACME is not a feature in nginx right? So the maintainers of the nginx NixOS package have built this integration themselves just to make NixOS more powerful for this use case?

mholt

> Automatic ACME is not a feature in nginx right?

Correct. You need separate, less reliable tooling to use ACME with nginx.

kuschku

Doesn’t the built-in tooling only really provide an advantage if you’re using the http or sni challenges?

If you want to use wildcard certs via the dns-01 challenge (in my case via rfc2136), the entire challenge runs out of band anyway, so there’s no difference in reliability. (At least in my tests so far, though support for the RFC 2136 standard in caddy and/or plugins is quite poor afaict)

jphsnsir

How was the Nix experience? Did you try just nix or also NixOS?

radoomi

I used NGINX Proxy Manager for ages for my VPS where I host some side projects. But recently I switched to Caddy Docker Proxy (https://github.com/lucaslorentz/caddy-docker-proxy) and I'm happy with the switch.

It seems easier to set it up directly in the docker compose files rather than have an extra interface. I guess you can do more in NPM, but for me it's enough.
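For anyone curious what "directly in the compose files" looks like, a sketch following the caddy-docker-proxy README (the domain and app image are placeholders):

```yaml
# Sketch: caddy-docker-proxy reads labels off running containers and
# generates the Caddyfile for you. Domain and app image are placeholders.
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # needed to watch containers

  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"  # proxies to this container's port 80
```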

Bayart

Don't Traefik and Caddy already do exactly that with their official containers? What makes NGINX Proxy Manager special in that instance?

sithadmin

The setup and UI for NGINX Proxy Manager is a lot more noob-friendly than Traefik. Haven't used Caddy before, so can't compare there.

If you have the skills to use Traefik, there's no reason to use NGINX Proxy Manager.

qbasic_forever

This has a little auth system with a web UI where you can log in, create users, give them access to certain subdomains, etc., for a simple single sign-on experience. Traefik and Caddy just give you the reverse proxy and auth but none of the user management or login UI.

francislavoie

As of Caddy v2.5.1, you can easily pair Caddy with Authelia to provide auth for all your services!
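A sketch of the pairing, using the `forward_auth` directive that landed in v2.5.1 (hostnames, ports, and the header list here are placeholders drawn from typical Authelia setups, not a verified config):

```caddyfile
app.example.com {
    # Every request is first checked against Authelia; unauthenticated
    # users get redirected to the login portal.
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.example.com
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy app:8080
}
```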

ulkesh

Do Traefik or Caddy have built-in UI support for Let's Encrypt? If not, then perhaps that makes this special. If so, then probably nothing else except a relatively user-friendly UI.

francislavoie

No UI in Caddy, but it's only two lines of config to set up a domain with Automatic HTTPS and a reverse proxy to one of your apps. Don't really need a UI for that.

drxzcl

I love traefik, but getting started is kinda rough if you’re not quite technical. There are like three different places to put configuration, and even getting informative error messages out of the thing is not exactly trivial.

But once it works, it works.

donmcronald

TL;DR: It's significantly simpler to configure NGINX Proxy Manager if you have a small amount of config to do in a home lab scale setup.

I've used Apache, Traefik, and now NGINX Proxy Manager for my self hosted development environment. I've also used HAProxy in front of them as a (TCP) load balancer using a single IP address so I can run multiple backends that all deal with their own TLS. Caddy is my preferred choice for spinning up a quick webserver.

I just started using NGINX Proxy Manager, so I haven't discovered the things I don't like yet, but I can tell you why I switched to it.

Apache and Certbot were what I used initially and I should have kept using it. NGINX Proxy Manager isn't a lot different and I've spent so much time switching through solutions that I should have just managed Apache by hand. I'd be way ahead in terms of time spent.

I didn't even make a serious attempt at Caddy because I want to use wildcard TLS certificates with Cloudflare DNS challenges. The DNS plugins aren't built in to the official Docker container and finding instructions for setting up a single wildcard certificate that could be used on multiple sites was a pain.

For example, the page for wildcard certs [1] links to a simple config example [2] and a json config example [3].

The first example is like the start of the owl drawing joke where it starts out "draw a circle". It's easy to understand, but it's too simple. There are two config blocks, foo and bar, that both contain a `respond` directive. One of those could show a simple reverse proxy config and it would help a ton for people trying to learn to use Caddy for TLS termination. The second example is like the "draw the rest of the owl" part of the joke. It dumps you into a full fledged, comprehensive json config example where you'll drown in info if you don't already know how everything works.

Caddy can also be a bit of a confusing black box. For example, I recently ran across the below config. Assume you don't know anything about Caddy and tell me how it's matching requests to route the web socket traffic to the correct backend.

    localhost {
        reverse_proxy website:8080

        @rendezvous {
            header Connection *Upgrade*
            header Upgrade websocket
        }

        reverse_proxy @rendezvous rendezvous:4000
    }
If I don't know anything about Caddy I don't even know if the `header` lines are matching that header or adding that header. What are the asterisks for? Why use a custom config language when the real HTTP headers are just as concise? Does the difference in capitalization mean something? Isn't something like this easier to skim, reason about, and learn from?

    set-header 'Connection: Upgrade'
    set-header 'Upgrade: websocket'
For some semi-related annoyance, look at the MDN docs for that [4] header. They use different capitalization in consecutive examples. It's like the goal is to confuse newcomers. How can someone be a developer or a tech doc writer and ignore those kinds of details?

Here's a snippet from my HAProxy config to contrast the Caddy config:

    acl is_certbot path_beg /.well-known/acme-challenge/
    use_backend http-example-com if is_certbot
Notice the difference? A simple HAProxy config can be written in a way that someone who has no idea how HAProxy works can skim the config and get a pretty good idea of what's going on.

Caddy is probably the best option for a simple, top down config because most of the sites would be super simple and it would be terse / easy to skim once you learn it. I think I would have ended up using it if I didn't have to deal with the hassle of getting the Cloudflare DNS plugin set up on my own.

I've used Traefik the most and I don't like it now. I remember the config for wildcard certificates being really hard to figure out. For http challenges you must tell services which cert resolver to use. For wildcard certificates you must omit the cert resolver config on all configs except one of them. That wasn't documented anywhere when I originally did my setup. That's v2. The v1 config was much more sane, but I don't recall exactly how it worked.

The second issue I have is the same as what you'd get with things like `caddy-docker-proxy`. Even though I have a single container with a wildcard cert doing TLS termination, I have config strewn across a dozen different places. It's really hard to get an overview of what's going on and I have at least 4 lines of config per host where I need to make up parts of the key. Ex:

    traefik.http.routers.gitea.tls=true
When you're trying to learn it, understanding where the `gitea` portion of that key comes from is frustrating. The config for external legacy sites (ie: using the `file` provider) is equally confusing. Traefik looks like it makes sense for large deployments where the labels are handled by additional tooling. All of my complaints revolve around the complexity of the config when putting it into `docker-compose.yml` files by hand, so what's most likely happening is that my use case of a home lab scale config doesn't really make sense and I should have picked something that's easier to maintain. Scalability is paid for in complexity, so choosing something that can scale massively for a home lab is often a bad choice based on my experience.

The appeal of NGINX Proxy Manager is that for simple reverse proxying, which is 95% of what I've got, it's really simple. I don't have to learn some newly invented configuration language until I need an advanced config. I don't know NGINX at all and it took me about 15m to get something running behind NGINX Proxy Manager. Compare that to Caddy where it took me a couple hours just to figure out what I'd need to install/configure just to get a wildcard TLS cert or Traefik where I wasted 10s of hours getting to the point where I could use it with the main (negative) reward being that now I have something that's a pain to maintain.

With NGINX Proxy Manager you get an overview of all your TLS termination and certificates in a single place, it only takes a few minutes to get up and going, and there's an escape hatch to provide advanced configs if needed. Plus, there are tons of config examples and troubleshooting resources for NGINX.

1. https://caddyserver.com/docs/automatic-https#wildcard-certif...

2. https://caddyserver.com/docs/caddyfile/patterns#wildcard-cer...

3. https://caddyserver.com/docs/json/

4. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Up...

francislavoie

FYI for Caddy + cloudflare DNS plugin in Docker, you just need to write a Dockerfile like this (see https://hub.docker.com/_/caddy):

    FROM caddy:2.5.1-builder AS builder
    RUN xcaddy build --with github.com/caddy-dns/cloudflare
    FROM caddy:2.5.1
    COPY --from=builder /usr/bin/caddy /usr/bin/caddy
This makes a custom build with xcaddy, with the cloudflare plugin added, then copies the build on top of the vanilla Caddy docker image.

Then in your Caddyfile, you configure the DNS plugin with your Cloudflare API key.

    example.com {
        tls {
            dns cloudflare <API-KEY-HERE>
        }
        reverse_proxy your-app:8080
    }
You can use that wildcard cert pattern you linked to if you must use a wildcard cert. You're right we could adjust that example to use reverse_proxy instead of respond, but we try not to make any assumptions about what you're serving in the examples. But we do assume you have at least read the Caddyfile Concepts doc to understand the structure and scope of the different parts of the config.

donmcronald

I appreciate the reply. I took some time to look at your example so I can give some feedback on where I end up when I think about building / maintaining my own image.

My immediate reaction is that the example is nice as a one-off build, but it's much more complex if I need to set up something I can maintain long term. I might be overthinking it, but my thought process, in the context of something I'd have to maintain, is below. The questions are mostly rhetorical.

First, what versions am I getting? Does using `2.5.1-builder` result in a custom-built binary that's version `2.5.1`? The command usage [1] of `xcaddy` says it falls back to the `CADDY_VERSION` environment variable if the version is not set explicitly. Since it's not set explicitly, I go looking for that variable in the Dockerfile [2].

That's some templating language I'm not familiar with and I can't track down where the variable gets set, at least not quickly. I'd probably have to spend an hour learning how those templates work to figure it out. To make a quicker, educated guess, it most likely matches the builder version. The docs said the version can be set to any git ref, so I can explicitly set it to v2.5.1 on the command line [3] to be certain.

Now, what version of `caddy-dns/cloudflare` am I getting? The xcaddy custom builds section of the docs [4] says the version can optionally be specified, but it's not specified in the above example. There aren't any tags in the repo [5], so it's probably building off `master`. The doc says it functions similarly to `go get`, but doesn't explain what the differences are, and the default behavior isn't explained either.

The docs for `go get` [6] say it can use a revision, so maybe a specific commit can be used for that, but I'd need to test it since I'm not super familiar with Golang.

What other risks come along with building and maintaining my own custom image? I could end up with a subtly broken build that only occurs in my environment. Portability doesn't guarantee compatibility [7], and building custom images increases the risk of compatibility issues beyond what I get with official images (building and running vs. just running). That blog post is a really cool read on its own, BTW.

I need to consider the potential for breakage even if it's miniscule because my Docker infrastructure is self hosted and will be sitting behind my custom built Caddy image. If my custom image breaks, I need a guaranteed way of having access to a previous, known good version. This is as simple as publishing the images externally, but adds an extra step since I'll need an account at a registry and need to integrate pushes to that registry into my build.

If I build a custom image, do I let other people I help with the odd tech thing use it or is all the effort for me only? I don't want to become the maintainer of a Docker image others rely on, so I can't even re-use any related config if I help others in the future since they won't have access to the needed image.

To be fair, I also see things I don't like in the NGINX Proxy Manager Dockerfile [8]. The two that immediately jump out at me are things I consider common mistakes. Both require unlucky timing to fail, but can technically cause failure IMO. The first is using `apt-get update`, which will exit 0 on failure and has the potential to leave `apt-get install` running against obsolete versions. The second is using `apt-get update` in multiple stages of a multi-stage build. If I were doing it, I'd run `apt-get update` in a base image and avoid it in the builder + runtime images to guarantee the versions stay the same between the build container and the runtime container.
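The usual way to sidestep the first problem is to chain the update and install into a single RUN layer, something like this generic sketch (package names are placeholders, and this is not NPM's actual Dockerfile):

```dockerfile
# Chaining update && install in one RUN means a failed update fails the
# whole layer, and install can never run against a cached, stale index.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```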

It took me about 1h to work through that and write this comment, so it's not just a matter of building a Docker image and plugging in the config. There's a lot of nuance that goes into maintaining a Docker image (I'm sure you know that already) and not having an image with the DNS plugin(s) baked in is a show stopper for anyone like me that can't justify maintaining their own.

Also, a 4 line Docker file looks nice in terms of being simple, but explicitly declaring or even adding comments describing some of the things I pointed out above can save people a lot of time. Even comments with links to the relevant portions of the docs would be super useful.

My reason for wanting the Cloudflare DNS plugin is that I have some things I want to run 100% locally without ever exposing them to the internet. The desire for wildcard certificates is to keep things from being discoverable via CTLogs.

I hope that's useful feedback. I realize someone bemoaning the difficulty of running your stuff at home lab / small business scale isn't exactly the target audience in terms of picking up customers that pay the bills. Thanks again for the reply / example.

1. https://github.com/caddyserver/xcaddy#command-usage

2. https://github.com/caddyserver/caddy-docker/blob/master/Dock...

3. https://github.com/caddyserver/caddy/tree/v2.5.1

4. https://github.com/caddyserver/xcaddy#custom-builds

5. https://github.com/caddy-dns/cloudflare/tags

6. https://go.dev/ref/mod#go-get

7. https://www.redhat.com/en/blog/containers-understanding-diff...

8. https://github.com/NginxProxyManager/docker-nginx-full/blob/...

mholt

> If I don't know anything about Caddy I don't even know if the `header` lines are matching that header or adding that header.

The '@' sign is pretty ubiquitous these days to mean "at" or "toward". Email addresses and screen names use this all the time. I think this is not too complicated.

> What are the asterisks for?

What asterisks are almost _always_ for: wildcards! Connection headers can contain a comma separated list of values:

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...

Hence the need for the asterisk. Your other server configs are incompatible with HTTP spec.

> Why use a custom config language when the real HTTP headers are just as concise?

We _are_ using real HTTP headers.

> Does the difference in capitalization mean something?

Not in header keys, but values can be case sensitive. This is an HTTP spec thing, not a web server thing. Sounds like you should probably already understand this, though.

> Isn't something like this easier to skim, reason, and to learn about?

Not really, since your config isn't compatible with the HTTP spec. And they do different things.

> A simple HAProxy config can be written in a way that someone who has no idea how HAProxy works can skim the config and get a pretty good idea of what's going on.

I, for one, find that HAProxy config more confusing and complex.

> Caddy is probably the best option for a simple, top down config because most of the sites would be super simple and it would be terse / easy to skim once you learn it.

There are plenty of companies using Caddy with thousands of sites and complex configs. Caddy is great for all such use cases, not just simple ones.

Besides, it's a memory-safe web server unlike Apache, Nginx, or HAProxy.

> I think I would have ended up using it if I didn't have to deal with the hassle of getting the Cloudflare DNS plugin set up on my own.

What do you mean "set up on your own"? Our download page does it for you. Or Docker images can be automated to include it as well.

Not really disappointing, especially since no other servers come with Cloudflare-specific support baked in with automatic ACME challenge solvers. I'm not sure what bar you're holding Caddy to but it's not realistic.

> Compare that to Caddy where it took me a couple hours just to figure out what I'd need to install/configure just to get a wildcard TLS cert

You must have not read the documentation/wiki then. They just work with the DNS challenge: https://caddyserver.com/docs/automatic-https#wildcard-certif...

Takes about 2-5 minutes to set up, including getting credentials from your DNS provider.

---

You're not only losing memory safety by using a server written in C, but with external tooling managing your certs instead of having it built into the web server, you're forfeiting robust error handling, advanced revocation logic, automatic OCSP stapling, and more.

donmcronald

I wasn't trying to imply that Caddy is bad, so I hope it didn't come off like that. I was trying to highlight how things that seem simple might only be simple to people that already have a comprehensive understanding of the topic. I'll give you some follow up examples:

> What asterisks are almost _always_ for: wildcards!

However the link you gave doesn't include a single mention or example of using wildcards, so I can't even educate myself. Was that the link you intended to share?

The problem for me in understanding that particular config is that I don't know enough about HTTP to know if you can set headers with asterisks in them. I can't look at that config snippet and figure out immediately if it's matching incoming requests and sending the matched requests to the backend or if it's adding the headers to incoming requests (that are matched some other way) before sending them to the backend.

The root of the confusion for me is that other reverse proxies match on a location for that type of stuff (ex: /wss/) and I simply don't know enough about websockets to make an educated guess about how Caddy is deciding where the traffic gets routed. That's not the kind of thing I'd show up on your forum asking about either. I can figure it out if I need to, but that likely involves spinning up a Caddy server and watching the traffic to see what's going on.

Something as simple as `set-header` or `match-header` instead of `header` would make it easier for me to read and learn the config because it adds enough context that I can immediately figure out what the config is doing. If it's setting headers, that instantly tells me that my lack of knowledge is related to how the matching is happening and I can ignore all the other stuff like the asterisks and the capitalization, at least temporarily. In my specific case, I wanted to translate that config to nginx, so figuring out how the matching works is all I really need.

At the risk of sounding stupid, I'll say I still don't understand if it's routing traffic based on matched headers or adding headers to already matched traffic. This [1] answer seems like a similar config for nginx, but it's routing traffic based on matching those headers.
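If the Caddy block is indeed matching on request headers, the rough nginx translation along the lines of that answer would be something like this (ports and addresses made up):

```nginx
# Hypothetical translation of the Caddy snippet: pick a backend based on
# whether the request carries an "Upgrade: websocket" header. Ports are
# placeholders; 127.0.0.1:8080 stands in for "website", :4000 for "rendezvous".
map $http_upgrade $backend {
    default   http://127.0.0.1:8080;  # ordinary HTTP requests
    websocket http://127.0.0.1:4000;  # requests asking for a websocket upgrade
}

server {
    listen 80;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass $backend;
    }
}
```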

You have to remember that you can look at a config and instantly map it to the http spec. You probably know the context instantly because one of the possibilities is too stupid to consider. The thing is there's always going to be someone trying to learn that knows nothing and the contextual hints I'm talking about make a huge difference for someone that doesn't know everything.

> What do you mean "set up on your own"? Our download page does it for you.

The download page only gives me a binary, right? I need to deal with setting up a system service. It's not hard, but it's extra steps because I need to redo it every time there's an update or I need to automate it somehow. For my specific use case, running a Docker container (locked to a stable tag) that auto updates is ideal.

> Or Docker images can be automated to include it as well.

That kind of illustrates the point I was trying to make. Setting up an automated Docker build to bake in the functionality I want is significantly different than getting it out of the box. Imagine telling a young person trying to learn "start by setting up an automated Docker build that bakes Caddy + the DNS plugin into the container." Is it reasonable to expect the users of Docker images to be on the same level as the developers of them?

> Takes about 2-5 minutes to set up, including getting credentials from your DNS provider.

Unless you're an experienced sysadmin or a Docker expert, getting to the point where you can start to configure DNS challenges is a huge amount of learning and work. It would take me half a day to set up a good quality, CI built Docker image with the Cloudflare DNS plugin baked in and I've spent hundreds (if not thousands) of hours working with Linux, Docker, CI, etc.. It might take 5 minutes to *configure* if you already know what you're doing.

> no other servers come with Cloudflare-specific support baked in with automatic ACME challenge solvers

Traefik has Cloudflare DNS support baked in to the official Docker image. It's not nearly as easy as Caddy, but it has it.

I qualified my opinion up front saying "for a home lab scale setup". I agree 100% that Caddy is a better product on a technical level, but for a certain scale of setup NGINX Proxy Manager provides a better user experience for me because it's a single container with everything I need baked in, someone else maintains it, and the GUI makes simple reverse proxy configs easier than anything I've ever used (including Caddy).

I wasn't trying to say Caddy is bad and I'm sorry if it came off like that. The only constructive feedback I can give is that if there was an official Docker image with the Cloudflare DNS plugin baked in it's likely I would have put in the time to learn Caddy. I think you underestimate the amount of effort it takes for someone that isn't an expert in the space to get up to speed. For myself I would set aside a full day to familiarize myself with running Caddy, but once I add in the need to build (and maintain) my own Docker image it gets to the point where, pragmatically, I ended up choosing a "good enough" solution over what I'd consider "the best" solution (I tried Caddy first).

1. https://serverfault.com/a/923254/133882

corytheboyd

Use this on my home network to expose a couple services to the internet through a domain that I bought, works great.

You do end up having to add "custom configuration", which means putting nginx configuration into a textarea without any validation. So far it's just been the occasional service that uses websockets, where you need to configure nginx to handle the upgrade.
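The sort of snippet that goes into that custom-configuration box for a websocket service looks roughly like this (the port is a placeholder):

```nginx
# Sketch: forward the websocket upgrade handshake to the backend.
# The upstream address is a placeholder for whatever service you're proxying.
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://127.0.0.1:3000;
}
```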

Let's Encrypt integration works well enough. It says it persists your Cloudflare API key for DNS validation during cert generation, but either it just doesn't, or I missed a volume mount that wasn't called out in the Unraid template for NPM.

Yeah sure you could just do this all yourself you smart person you, but on my home server I just... don't want to. I want a collection of dumb GUIs that get me where I want to be, and this did that for my reverse proxy need. If it stops doing that I'll find something else, but that hasn't happened yet.

ryan29

My `dns_cloudflare_api_token` is in `/data/database.sqlite`.

corytheboyd

Thanks, was hoping someone would chime in on that :) I'll take a look at the database later, if I can hack it in I'll be happy, doesn't need to be pretty.

donmcronald

Assuming a container name of `nginx`, you can `grep` for it to see if it's already there.

    docker exec -it nginx \
        grep -a 'dns_cloudflare_api_token' /data/database.sqlite

Steltek

I previously used Dokku and I loved how the HTTP proxy was "automatic". I've moved past Dokku but don't want the complexity of k8s for a home network. I sort of envisioned a system that could read out running container names and create "$name.$domain" proxy entries automatically.

I never got around to it, and my current method of automatically generating nginx.conf from a JSON file has a few tricks left, I guess. One additional problem is that I have a few things stacked up on a domain under different "location" entries.

Shish2k

> I sort of envisioned a system that could read out running container names and create "$name.$domain" proxy entries automatically.

Traefik does this :) That’s what I’m using for my personal server - one traefik instance, a wildcard DNS entry, and then a docker container named “foo” is reachable at https://foo.shish.io

fuzzy2

While this can be achieved using Traefik, it requires some creative combinations of static + dynamic configuration to get my_container.example.com with HTTPS and HTTP → HTTPS redirects and Let’s Encrypt. The container labels are a PITA.

I’m not sure there’s an easier solution though.

Shish2k

My setup does all of those things — HTTP->HTTPS redirect and Let’s Encrypt are handled in the global static config: https://gist.github.com/shish/f346a3102f5d690be2f64f6d1eb7d7...

And then only the per-container settings need to be set in the dynamic config using container labels
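Those per-container labels end up looking roughly like this (router name, host, and port are placeholders; this assumes the cert resolver named in the static config is called `letsencrypt`):

```yaml
# Sketch of per-container Traefik dynamic config via labels.
services:
  foo:
    image: my/foo   # placeholder image
    labels:
      - traefik.enable=true
      - traefik.http.routers.foo.rule=Host(`foo.shish.io`)
      - traefik.http.routers.foo.tls.certresolver=letsencrypt
      - traefik.http.services.foo.loadbalancer.server.port=8080
```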

jacooper

Try this, its pretty easy, but it depends on the docker socket. https://github.com/nginx-proxy/nginx-proxy

jacooper

There is nginx-proxy for this, with auto TLS, its pretty good! https://github.com/nginx-proxy/nginx-proxy

francislavoie

You can also do this with Caddy, using this plugin: https://github.com/lucaslorentz/caddy-docker-proxy

nimbius

Somewhat off-topic, but it's sort of surprising to see just how little F5 Networks has contributed to the NGINX product since the acquisition, compared to the OSS community.

inginexsucks

Stop building trash heaps with nginx!

nginx is a poorly written, legacy application with insane defaults, bad documentation, has shitty frameworks (openresty) and language integrations (fuck js and lua), and owned by a fraudulent company (f5).

Why the fuck does anyone use nginx these days? Because they hate themselves?

jsisto

I prefer to use SWAG. Docker + NGINX + LetsEncrypt + fail2ban https://github.com/linuxserver/docker-swag

shockeychap

Does anybody know how this compares with other reverse proxies, like Caddy?

Saris

It's very noob-friendly: just click a few things in a nice web UI.

But as it's a layer doing a bunch of stuff on top of nginx, Let's Encrypt, etc., there's more chance of odd bugs or things not working properly.

If you're familiar with Caddy, just keep using that.

edmcnulty101

This is dope. Is there a way to enable caching easily?

jacooper

Yes, there is an option to enable caching in the UI


NGINX Proxy Manager - Hacker News