montroser
tyropita
It was indeed very unexpected. The first time you try out a CLI tool, you'd expect just calling its name to print help info and maybe an error.
ta988
This piece of software would have to handle all the intricacies of GitHub Actions but also be updated to the latest changes...
We are moving back to a Makefile-based approach that is called by the GitHub workflows. We can handle different levels of parallelism: the make kind, or indexing by worker number when running in Actions. That way we can test things locally, and we can still have 32 workers on GitHub to run the full suite fast enough.
I also like that we are less bound to GitHub now, because it has been notoriously unreliable for us this past year and we may move more easily to something else.
stepbeek
Yes! We use gradle as much as possible to accomplish the same thing. CI runs ‘gradle build’ and it’s easily migrated in future if need be.
counttheforks
Is this public?
tonyhb
Take a look at https://dagger.io/. Declarative pipelines using Node, Python, or Go. Parallelism built in, and caching built in - things are cached if they're unchanged.
Xelynega
You could implement something similar by splitting the make targets in the GitHub action before they get passed to make so each worker is assigned their own target, then have a make target that executes all the targets for local multithreaded builds via `make -j${NUM_CONCURRENT}`.
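A minimal sketch of that sharding idea in shell. The target names are illustrative, and `WORKER_INDEX`/`NUM_WORKERS` stand in for whatever the CI matrix would provide:

```shell
# Assign each make target to exactly one CI worker, round-robin by index.
targets="lint unit integration e2e docs"   # illustrative target list
NUM_WORKERS=2
WORKER_INDEX=0    # this worker's 0-based slot, normally set by the CI matrix

i=0
mine=""
for t in $targets; do
  if [ $((i % NUM_WORKERS)) -eq "$WORKER_INDEX" ]; then
    mine="$mine $t"
  fi
  i=$((i + 1))
done
echo "worker $WORKER_INDEX runs:$mine"
# Locally, skip the sharding and just run: make -j"$(nproc)" $targets
```

In the action, each worker would then run `make $mine`; locally, a single `make -j` covers all the targets at once.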
killingtime74
https://batect.dev/ this is like Act but agnostic and public
mdaniel
Are you sure you mean "like Act?" https://batect.dev/docs/using-batect-with/ci/github-actions seems to imply it is its own build system, closer to Dagger or Earthly, but not a GitHub Actions "emulator" as act is trying to be
ta988
No, sorry about that.
wkdneidbwf
i do the same, but with Task
benrutter
Finally! I've been wanting something like this for ages. Generally speaking I don't consider myself an idiot, but I'm forced to pull that into questioning every time I test and debug ci/cd actions with an endless stream of pull request modifications
reubenmorais
Unfortunately act is only capable of running very simple workflows. I've found this action to be more useful against the endless PR stream: https://github.com/mxschmitt/action-tmate
You drop it in your workflow and get an SSH shell into the worker, figure things out iteratively, then push when it's working.
dwightgunning
Can you elaborate with some examples of workflows that it's incapable of running?
So far I've not found any limitations or issues using Ubuntu runners on my OSX dev machine. A couple of examples from my workflows:
- building Docker images
- provisioning VMs with the Digital Ocean CLI / HTTP API
- hardening VMs with Ansible
- creating/managing k3s clusters with k3sup
- deploying apps with Helm
I like your suggested approach of using tmate to access the worker mid-way through a run. This should make it faster to develop/debug the steps that make up the workflow. Though this doesn’t address the cycle time of push-to-GitHub/queue-workflow/watch-for-result.
I’m actually going to try combining the two techniques - use tmate to develop inside a local act runner.
Benjamin_Dobell
Workflows that interact heavily with the GitHub API will fail, as those APIs aren't available in act, e.g. actions like https://github.com/styfle/cancel-workflow-action. Dealing with secrets is also a bit cumbersome. You can put the following on actions that are not compatible with act in order to skip them:
if: ${{ !env.ACT }}
That said, despite its limitations, I've been using both act and tmate in combination for a couple of years. Gets the job done.
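In a workflow file, that guard sits on the step itself; a sketch (the action version pin is illustrative):

```yaml
- name: Cancel superseded runs   # talks to the GitHub API, so skip under act
  uses: styfle/cancel-workflow-action@0.12.1
  if: ${{ !env.ACT }}
```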
choeger
I don't consider myself a cynic (1), but I am forced to pull that into questioning every time I see how many paid worker minutes are wasted by such endless streams of pull request modifications.
(1) Who am I kidding, of course I am a cynic.
goodpoint
It's ridiculous how we still tolerate the CI vendor lock-in.
naikrovek
name one (1) CI system, open or closed, which shares enough with another CI system, open or closed, that there is no pain when changing from one to another.
they are all at least semi-proprietary.
speed_spread
All I ask from a CI is to be able to run Docker and access build resources in a structured manner.
Concourse, CircleCI and GitLab do that.
tjpnz
I normally create a draft PR first to test CI/CD changes and open a new one when it's working. The workflow still sucks but I at least look like less of an idiot in the eyes of my peers.
jaxn
Once I even had to hard fork our entire monorepo to refactor the release actions that tag main.
fgeiger
I got you covered: I try out changes to CI tasks in a clone of the repository. Nobody needs to be bothered by my commits in the actual repository.
zephyros
I found Dagger[1] and Earthly[2], which supposedly solve the issue of debugging CI locally. I haven't had time to try them out yet, though.
[1]: https://dagger.io/
[2]: https://earthly.dev/
noroot
Yeah, I got so frustrated with the odd workflow (having no sane way to locally test new/more advanced pipelines and having to do lots of "change .gitlab-ci" commits) at work that I started investigating alternatives.
At home, for some hobby projects, I've been using earthly. It's just amazing. I can fully run the jobs locally and they are _blazing_ fast due to the buildkit caching. The CI now only just executes the earthly stuff and is super trivial (very little vendor lock in, I personally use woodpecker-ci, but it would only take 5 minutes to convert to use GH actions).
I am not a fan of the syntax. But it's so familiar from Dockerfiles and so easy to get started I can't really complain about it. Easy to make changes, even after months not touching it. Unless I update dependencies or somehow invalidate most of the cache a normal pipeline takes <10s to run (compile, test, create and push image to a registry).
This workflow is such a game-changer. It also makes it fairly easy to do very complicated flows [1].
I've tried to get started with Dagger, but I don't use the currently supported SDKs and the cue-lang setup was overwhelming. I think I like the idea of a more sane syntax from Dagger, but Earthly's approachability [2] just rings true.
[1]: https://github.com/phoenixframework/phoenix/blob/master/Eart...
[2]: https://earthly.dev/blog/platform-values/#approachability
shepherdjerred
+1 for Earthly. It isn't perfect, but it does a very good job at making CI and development 1:1
Arnavion
Rather than replacing your Makefile with GH Actions, replace your GH Actions with a Makefile, and make your GH Actions run `make` in a script task.
Do you really need that GH Action for pulling Docker images / installing $language_compiler / creating cloud resources? A `docker` / `curl` / `sudo apt-get install` invocation in a Makefile / script needs to be written once and is the same in CI as on your dev machines. The Action is GH-specific, requires you to look it up, requires you to learn the DSL for invoking it, requires you to keep it up-to-date, requires you to worry about it getting hacked and stealing your secrets, ...
A Makefile already supports dependencies between targets. A shell script is a simple DSL for executing processes, piping the output of one to another, and detecting and acting on failure of subcommands. How much YAML spaghetti do you need to write to do those same things in a workflow file?
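A sketch of that inversion, assuming the repository already has a `ci` target in its Makefile:

```yaml
name: CI
on: [push, pull_request]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Everything of substance lives in the Makefile, not in this YAML.
      - run: make ci
```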
ilammy
This is a lesson that I've learned after going all-out on actions once.
Now my makefiles in addition to the usual "make" and "make test" also support "make prerequisites" to prepare a build environment by installing everything necessary and "make ci" to run everything that CI should check. With actual implementation being scripts placed under "scripts/ci".
The scripts do provide some goodies when they are run by GitHub Actions – like folding the build logs or installing dependencies differently – but these targets also work on the developer machine.
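A sketch of that layout (the script names are invented for illustration, not the actual ones):

```make
.PHONY: prerequisites ci

prerequisites:  # prepare a build environment: toolchains, packages, etc.
	./scripts/ci/install-dependencies.sh

ci: prerequisites  # everything CI should check; also works on a dev machine
	./scripts/ci/run-checks.sh
```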
jaxn
What about caching to reduce ci time? GH setup scripts cache dependencies in a way that would seem hard to replicate in a make file.
ilammy
If it’s manageable – just don’t. Build from scratch. Make sure your build works from scratch and completes in acceptable timeframe. If it’s painful, treat the root cause and not the symptoms.
If it’s unbearable due to circumstances out of your control, there’s nothing wrong with adding some actions/cache steps to .github/workflows – this goes around the build: fetch previous cache before, update the cache after if needed.
The build is still reproducible outside of GitHub Actions, but a pinch of magic salt makes it go faster sometimes without being an essential part of the build pipeline married to GitHub.
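For example, the cache steps can bracket an otherwise CI-agnostic build (the paths and keys here are illustrative):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
# The build itself doesn't know or care that the cache step exists.
- run: make ci
```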
If you need to install a whole host of mostly static dependencies, GitHub Actions support running steps in arbitrary Docker container. Prepare an image beforehand, it can be cached too, now you have a predictable environment. (The only downside is that it doesn’t work on macOS and Windows.)
makapuf
You can split your make file in two CI steps, one cached, and the other one depending on it.
rubicks
GH setup scripts cache dependencies in a way that is hard to replicate --- full stop.
FooBarWidget
A Makefile really sucks at displaying outputs/logs of commands, especially when there are lots of commands and when they run concurrently. It also really sucks at communicating what the overall progress is: how many jobs have finished, how many left, how much time has elapsed.
Heck, make could make all this much better by just prepending each output line with a colored prefix and a timestamp. But make hasn't changed in 30 years and likely won't change.
People are proud that it "solves" things since 1976. Sure, if your requirements haven't changed since 1976. I'm not holding my breath for it to deliver the basic usability-enhancing features one can reasonably expect nowadays.
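For what it's worth, a thin shell wrapper can approximate the timestamped, prefixed output described above; a sketch (the function name is invented):

```shell
# Sketch: prefix every output line of a command with a timestamp,
# approximating the per-line labeling the comment wishes make did natively.
with_timestamps() {
  "$@" 2>&1 | while IFS= read -r line; do
    printf '[%s] %s\n' "$(date +%H:%M:%S)" "$line"
  done
}

out=$(with_timestamps echo "compiling module")
echo "$out"
```

Wrapping each recipe line (or the whole `make` invocation) this way is crude, but it recovers some of the readability that dedicated CI runners provide.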
doliveira
You can just invoke make build, make test within the specific CI stages.
FooBarWidget
Then you reduce make to a simple small-scale task runner, basically admitting that it's unusable for large numbers of heterogeneous tasks or concurrency.
de_keyboard
I agree. GitHub Actions should call your scripts but your scripts should not depend on GitHub Actions API.
I also suggest Bazel as a consideration alongside Make. With Bazel, you get two advantages over Make:
1. It is easier to ensure that what GitHub Actions runs and builds is the same as what you have locally, since Bazel can fetch toolchains as part of the build process
2. Using a Bazel remote cache, you do not have to repeat work if the build fails halfway and you need to make some changes before running it again.
intelVISA
I use Makefiles anywhere I can fit them; they're a brief respite from YAML hell. This issue was solved in 1976. I appreciate there's a lot of VC money in reinventing the wheel (and coming full circle), but I digress.
fbdab103
I too use Make everywhere, but what I would give for an improved tool that had better syntax, composability, and simultaneously deployed everywhere. Sadly, it is good enough, so we shall suffer forever.
tvaughan
This so hard. I like to think of the make targets, e.g. build, test, install, etc., as an API that should be consistent across repos. This really helps with cross-team collaboration. The details of how these tasks happen are free to change at will without the need to "distribute" these changes to developers. There's no disruption to anyone's flow. Plus, with a little documentation, onboarding new developers is so much simpler.
mikepurvis
I've run into this with overly complicated Jenkins pipeline files as well. I think the root cause is just that a single entry-point pipeline is boring: everyone wants a CI config that sets statuses and posts results and does things in parallel and interacts with plugins, and every one of those steps is something that is at least semi unique to the CI execution environment.
I think the method you describe is still absolutely how it should be, but these types of interactions are why there's gravity in the other direction.
pydry
>every one of those steps is something that is at least semi unique to the CI execution environment.
Apart from triggers and environment set up none of those things have to be unique.
I often push complex CI logic from YAML into code where it is more easily debugged and I don't have to scratch my head to figure out how to use conditionals. Sending Slack messages should always be in code, IMHO.
kodisha
Can anyone point me to a GH repo with this or a similar setup? Eager to learn, especially if it runs a somewhat complex Node/Next.js build.
nerdyadventurer
Me too, examples please!
nickjj
This is what I do except I use a shell script instead of a Makefile.
A working example of this is at: https://github.com/nickjj/docker-flask-example/blob/912388f3...
Those ./run ci:XXX commands are in: https://github.com/nickjj/docker-flask-example/blob/912388f3...
I like it because if CI ever happens to be down I can still run that shell script locally.
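The core of such a run script is small enough to sketch. Task names here are invented, not the ones in nickjj's repo; assume the file is saved as `run` and made executable:

```shell
#!/bin/sh
# The "run script" pattern: each shell function is a task,
# and `./run <task>` dispatches to it via "$@".
lint() { echo "linting..."; }
tests() { echo "running tests..."; }
ci() { lint && tests; }

# Dispatch: `./run ci` executes the ci function.
"$@"

# Demonstration (what `./run ci` would print):
out=$(ci)
echo "$out"
```

Because each task is just a shell function, CI and developers call the exact same entry points.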
sneak
You should still have a Makefile that calls your shell scripts when "make" or "make test" is run. Every person who writes shell scripts has a different filename and arguments. "make" and "make test" are always the same everywhere.
Epa095
I use this a lot. It's not perfect, but better than nothing. For example, last time I checked it did not support reusable workflows, and I've had a few cases where I got an error from GitHub but act worked. I guess it's a hard catch-up game: not only do you have to get the semantics of correct workflows right, you also want to error out on the same errors.
I really don't understand why GitHub doesn't release a local runner for GitHub Actions. Everyone I know who has worked with CI/CD wants the same thing: some way to run and debug the pipelines/workflows locally.
whinvik
This is pretty cool, but I have never managed to make it work with our AWS ECR repos. There are probably some permissions that I am missing, but the documentation isn't very clear on how to do that.
domh
Same. I've only got it to work with the most basic of actions. Anything that requires Docker didn't work for me the last time I tried it!
There's a way to download a much larger "base image" which in theory would make it work, but if I remember rightly it was something like 60GB of containers, which I was never patient enough to download. It was always quicker to just push up to GitHub and iterate that way, unfortunately.
rebyn
Yeah, 60GB (“big”) and there is a “large” act’s Docker image whose size is 16GB: https://github.com/nektos/act/issues/107#issuecomment-760376...
letmeinhere
Ability to run {X} locally is _the_ problem with building atop paid services, not the dreaded vendor lock-in.
naikrovek
do your builds and tests in a container.
run the container on your dev machine or your dreaded paid service. same every time no matter what.
letmeinhere
Until your (to extend your example) container orchestration is complex enough that that too requires faster/less permissioned iteration than infrastructure-as-code provides, in which case you need to reimplement the paid service locally. Hopefully then the paid service is open source or has some good-enough-for-your-needs analog like this.
FWIW, while the above sort of recommends kubernetes-everywhere, I'm happy to make a bet on a service like AWS Fargate because I _don't_ think I need to iterate on container orchestration much (as an application developer). Something like DynamoDB, by contrast, seems quite treacherous to build atop, given how closely an application's code is likely to be tied to its primary database.
antman
Can this be used to emulate GitHub Actions against a git server other than GitHub, e.g. Gitea?
veleek
Yup
enbugger
This is what Jenkins has been missing for years. Still no viable solution except keeping everything outside of Jenkinsfiles as much as possible.
rpep
I'm a strong advocate (after migrating between various cloud CI providers' free tiers when I worked in academia) of making your build process as agnostic as possible to the CI provider. That generally means putting things into shell scripts and calling them, which can be a bit painful, and you'll never get 100% there, but it also has the added benefit of being able to debug things locally if something goes wrong with the pipeline.
The first time I used GH Actions I thought there was a strong vendor lock-in element to it which I wasn't hugely comfortable with. I'm a big fan of using GitLab CI these days which seems to be a good trade off between various considerations.
hardwaregeek
I really really want to use this but it doesn't work with podman which is unfortunately a bit of a blocker. IMO every CI system should be runnable locally, with minimum effort. Otherwise you end up testing via git push and that's just an ugly development cycle.
This does make me wonder if you could create a sort of "local first CI" where CI is just an extension of local checks. Therefore the CI is just a check that tests that pass locally also pass on a clean machine. Obviously we don't want to run CI locally if it'd take an hour, but on the flip side, if CI takes an hour on a (typically) beefy local dev machine, it'll probably take 2 hours on a remote machine.
reillyse
Super interesting you mention “local first CI”. Have a look at Brisk https://brisktest.com/ (I’m the author).
It’s designed to work “locally” pre-commit, giving you super fast tests, but then when you push you can also use it in your CI/CD pipeline.
We achieve the speed with massive concurrency and pre-built environments, it was built as a drop in replacement for your local test runner.
hardwaregeek
Awesome! Glad to see people are thinking the same things. I'll definitely check it out.
We have used this many times during GitHub outages. It's great and does what it says.
But just one word of warning: when you run `act` with no arguments, what does it do? Displays usage? Nope -- it runs all workflows defined in the repo, all at once in parallel!
This seems like a crazy default to me. I've never wanted to do anything remotely like that, so it just seems both dangerous and inconvenient all at once.
Nice otherwise though...