Minimal forward-authentication service that provides Google and OpenID Connect login.
Enables SSO with Traefik for multiple endpoints.
Docker Swarm instrumentation with Prometheus, Grafana, cAdvisor, Node Exporter and Alert Manager - stefanprodan
Docker compose for Grafana Loki, like Prometheus but for logs - swarmstack
Explanations for logging from containers that do not write to stdout/stderr.
Explanations for setting up a new driver and flushing nodes.
A Docker swarm-based starting point for operating highly-available containerized applications. - swarmstack
When I had this problem, it was a DNS problem: Traefik was trying to connect to the wrong IP address, one on a network that Traefik had no access to.
Setting providers.docker.network solved my problem, but be careful: you need to use the full network name as shown by docker network ls (in my case it was pi_traefik-web; for you it should be traefik_proxy).
In theory, you could also set the label traefik.docker.network=traefik_proxy on your nextcloud-app service.
This problem can be diagnosed through the Traefik dashboard: look at which network IP a specific service has been assigned. Does it show an IP from the 'public' (proxy) network?
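A minimal compose sketch of the two fixes mentioned above. Service names, image tags, the traefik_proxy network, and the Host rule are placeholders for illustration, not the original setup:

```yaml
version: "3.7"

networks:
  traefik_proxy:
    external: true   # shared proxy network, created beforehand
  backend: {}        # internal network Traefik must not route through

services:
  traefik:
    image: traefik:v2.4
    command:
      - --api.insecure=true                       # dashboard on :8080, handy for the diagnosis above
      - --providers.docker=true
      - --providers.docker.network=traefik_proxy  # use this network's IPs to reach containers
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik_proxy

  nextcloud-app:
    image: nextcloud
    networks:
      - traefik_proxy
      - backend
    labels:
      - traefik.enable=true
      # Per-service alternative to the global providers.docker.network setting:
      - traefik.docker.network=traefik_proxy
      - traefik.http.routers.nextcloud.rule=Host(`cloud.example.com`)
      - traefik.http.services.nextcloud.loadbalancer.server.port=80
```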
I am using Portainer and am unable to manage remote endpoints. I tried using the command line to connect to remote Docker nodes, but got the message "Cannot connect to the Docker daemon at tcp:"
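For context, the CLI only reaches a remote daemon if that daemon actually listens on a TCP socket; a rough sketch, with the host name and ports as placeholders (2375 is the conventional plain-TCP port, 2376 the TLS one):

```sh
# Point the client at a remote daemon for a single command...
docker -H tcp://swarm-node-1:2375 info

# ...or for the whole shell session.
export DOCKER_HOST=tcp://swarm-node-1:2375
docker ps

# With TLS, assuming client certificates are already in ~/.docker:
docker --tlsverify -H tcp://swarm-node-1:2376 ps
```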
Awesome traefik 2.0 explanation using docker (compose)
Really good introductory video on Kubernetes concepts;
breaks them down with simple analogies and a really quick history: pods, services, ingress, deployments, static resource allocation vs. dynamic resource management, rolling updates, and probably one or two more things.
CodiMD - Realtime collaborative markdown notes on all platforms. - codimd
A tiny docker image with a working tectonic latex engine and biber with a primed cache.
Visit my page on Docker Hub at: https://hub.docker.com/r/dxjoke/tectonic-docker
Visit my page on GitHub at: https://github.com/WtfJoke/tectonic-docker
Only ~75MB compressed.
A fully working latex engine. Packages that are not in the cache will be downloaded on demand.
I use this container to build my thesis automatically. It includes bibtex8 and glossaries.
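A hedged sketch of how such an image can be used to compile a document; the mount target, working directory, file names, and the biber rerun sequence are assumptions for illustration, not the exact invocation from the linked pages:

```sh
# Compile main.tex from the current directory inside the container,
# keeping intermediates so biber can resolve the bibliography.
docker run --rm -v "$PWD":/data -w /data dxjoke/tectonic-docker \
  /bin/sh -c "tectonic --keep-intermediates main.tex && biber main && tectonic main.tex"
```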
docker gitlab ssl traefik autodeploy
Various docker tools to learn,
especially single-cluster setups like:
- minikube
- microk8s
- kind (kubernetes in docker)
and more
Example project showing how to test Ansible roles with Molecule using Testinfra and a multi-scenario approach with Docker, Vagrant & AWS EC2 as infrastructure providers - jonashackt
To explain what's going on: when building, Docker keeps track of the files added with each build step in its cache. It stores checksums for each file so it can skip build steps that probably don't need to be repeated. E.g. if you added a Python package to requirements.txt, Docker would notice the changed checksum, invalidate the cache, and start building again from step 3 (COPY requirements.txt /usr/src/paperless/). Check out Docker's best-practices section for further details.
A side effect of this is that only your local files are considered. E.g. Docker would not notice a package newly uploaded to PyPI that a cached requirements.txt would accept (you can use docker build --no-cache=true to force a rebuild).
ENTRYPOINT and CMD can be placed anywhere in the Dockerfile; the last occurrence of each wins. Since it takes around 2 seconds on my machine to process each of these instructions, I've moved them up a bit so we can profit from another small speedup there.
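A sketch of the layer-caching pattern described above. Paths, the base image, and the ENTRYPOINT/CMD values are only illustrative; the step numbers quoted above refer to the original Dockerfile, which is not reproduced here:

```dockerfile
FROM python:3.8-slim

WORKDIR /usr/src/paperless

# Cheap instructions near the top: the last ENTRYPOINT/CMD in the file would
# win anyway, and placing them before the frequently changing COPY steps keeps
# them cached when only source files change.
ENTRYPOINT ["python3"]
CMD ["manage.py", "runserver"]

# Copy the requirements file on its own first: while its checksum is unchanged,
# this layer and the slow pip install below come straight from the build cache.
COPY requirements.txt /usr/src/paperless/
RUN pip install --no-cache-dir -r requirements.txt

# Application code last, so source edits only invalidate the layers from here on.
COPY . /usr/src/paperless/
```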