Exodus – relocation of Linux binaries – and all of their deps – without containers

SanchoPanda

I often use exodus to move software onto a Synology NAS (after hosing my installation in the past through haphazard use of alternative package managers), and it has been incredibly helpful. It's great to be able to bring over utilities without fighting with Docker, building AppImages, or the like.

Thank you so much for it!

It just works most of the time, which is wild given how diverse the environments it operates in can be. One piece of software I have struggled with, however, is w3m.

   Wrong __data_start/_end pair
   Aborted (core dumped)
I get the above error messages, which I understand to be related to libc, but I cannot get around them. Is this something anyone has seen before?

throwaway984393

I'm pretty sure __data_start/__data_end are the symbols for the first/last initialized data in an ELF binary, so this is possibly a conflicting glibc, or a conflicting or missing library.
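
If you want to check that yourself, something like this shows where those symbols are defined (the bundled libc path is a placeholder; point it at wherever exodus put the libraries):

    # dump the data-segment symbols from the binary and from the libc it loads
    $ readelf -sW ./w3m | grep -w -e __data_start -e _end
    $ readelf -sW /path/to/bundled/libc.so.6 | grep -w -e __data_start -e _end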

throwaway984393

This probably won't work on binaries that contain references to non-code system files that are necessary in order to run the binary. You usually can't detect them other than running the program and watching it die, or running the program with strace and looking for the files it's trying to stat/open.

In the past, when I've had to copy a binary somewhere, I've statically compiled it in a Docker container, exported the install files, and then copied those over. I have about a half dozen tools prepped like that (gdb, curl, strace, busybox, etc.).
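
A rough sketch of that workflow, with a trivial stand-in program (the Alpine tag and the `nas` host are placeholders):

    # build a fully static binary inside a throwaway container, then ship it
    $ cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello from a static binary"); return 0; }
    EOF
    $ docker run --rm -v "$PWD":/src -w /src alpine:3.19 \
          sh -c 'apk add --no-cache build-base && gcc -static -o hello hello.c'
    $ file hello            # should report "statically linked"
    $ scp hello nas:bin/    # copy it over to the target machine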

foob

> You usually can't detect them other than running the program and watching it die, or running the program with strace and looking for the files it's trying to stat/open.

You can actually pipe the output of strace into exodus and it will include the files that were accessed by the program in the bundle. For example:

    strace -f nmap --script default 127.0.0.1 2>&1 | exodus nmap

throwaway984393

Oh. That's pretty cool

gary_0

You could also use a package manager utility to list which files are needed, assuming the target binary was installed that way. (pkgs.org lists the files every package installs, or locally you could run something like `apt-file`.)
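
For example (the package name is just an illustration):

    $ dpkg -L w3m          # Debian/Ubuntu: files installed by a package
    $ pacman -Ql w3m       # Arch equivalent
    $ apt-file list w3m    # also works for packages that aren't installed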

cmeacham98

Maybe check out the --detect flag from the README, which claims to do just that?

throwaway984393

That will detect files in a system package, but not files in a system package depended on by a system package depended on by a system package. It depends on how the distro packages deps and how apps use them. There are apps which use non-linked files that are only provided in extra packages. And even if you walk the whole dependency graph for a single package and export all listed files, you miss files that are created and updated during the setup steps of a package install (which are not listed in the package).
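
On a Debian-style system you can approximate that walk, though it still misses anything generated by maintainer scripts at install time, and `dpkg -L` only covers packages that are actually installed (the package name is just an example):

    # every package the target pulls in, then every file those packages ship
    $ apt-cache depends --recurse --no-recommends --no-suggests \
          --no-conflicts --no-breaks --no-replaces --no-enhances w3m \
      | grep '^\w' | sort -u | xargs dpkg -L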

gary_0

Oh, wow! They've thought of everything.

captn3m0

Could this be used to relocate Android-native libraries to other platforms? Even if the architecture matches, the bionic libc calls are what I found more challenging.

jeroenhd

The tool moves the application and all of its dependencies over for compatibility, but on Android with bionic that would probably imply moving all of libc with the app. I don't think that'll go down as easily, as the entire bionic framework is integrated quite deeply with the rest of the system.

For "simple" bionic libc calls, an LD_PRELOAD shim might be enough for many binaries, though. You'd need to translate all the bionic libc calls to your system's libc calls, but that might just be easier than it seems because of the level of compatibility between the two. Bionic is actually a subset of POSIX libc, so you should be able to map all calls to your normal system calls.
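
A minimal sketch of that kind of shim; `__hypothetical_bionic_call` is a made-up stand-in for whatever symbol the binary actually needs, and `./android-binary` is a placeholder:

    # build a tiny library that supplies the missing symbol, then preload it
    $ echo 'int __hypothetical_bionic_call(void) { return 0; }' \
        | gcc -shared -fPIC -o bionic-shim.so -x c -
    $ LD_PRELOAD=$PWD/bionic-shim.so ./android-binary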

I don't know how well-maintained the project is, but https://github.com/Cloudef/android2gnulinux might just do the trick for you.

captn3m0

I've tried using https://github.com/libhybris/libhybris directly earlier. Will try this.

throwamon

Awesome. I wonder if this could be used to make packages that "just work" on NixOS when no one has packaged them "the Nix way".

stabbles

I'm not sure you can easily link to the bundled libraries

seniorivn

nix already has something similar with pkgs.buildFHSUserEnv: it creates a regular linking environment that can run regular binaries.

enriquto

Virtual machines, then lean containers, now this... I predict that in a few years we will rediscover static binaries and we will finally find closure.

zbentley

I think not. The landscape of software packaging is too unpleasant and chaotic, the goals of its participants (users vs. package collection/distro maintainers vs. software authors) too misaligned.

Instead, what I think (hope) will happen is a simultaneous increase in prevalence of two things: "effectively static" software releases (whether that's an actual static binary or a container/flatpak/exodus-bundled folder/whatever), and sandboxing/isolation tools at the OS level, to prevent the statically linked dependencies in installed programs from causing harm to the system.

There will probably always be exceptions made for software that by necessity must be integrated tightly with the whole computer (window managers etc.), but I don't think people are going to rediscover dynamic linking en masse.

dig1

After this, we will get back to shared libraries, because updating that one libpng would require updating all those static binaries. This is already happening with many Go and Rust programs.

AnIdiotOnTheNet

And eventually maybe it'll all converge back to the rational way to do things: a base set of shared libraries provided by the OS that maintain strict ABI compatibility, and everything else static.

guhidalg

No? Assuming all processes on a machine _want_ to use the same shared library is what leads to isolation mechanisms.

hackeraccount

I think the end goal is what we all want: being able to do any of the options, from a single binary to external libraries and anything in between, easily. Not with one approach in fashion and the other out, but with both supported and accessible.

snovv_crash

Right until you want to run more than one release on the same machine...

ReptileMan

No need anymore. Both bandwidth and storage are dirt cheap.

Dynamically linked libraries became obsolete the moment hard drives became huge.

still_grokking

Only if you don't care about security.

teknopaul

And became relevant again the moment people stopped using HDs in devices that run Linux ;)

dv_dt

The last discovery is to add all the container security boundaries to a super lightweight container called a process.

_448

:)

The cool thing would be multi-host static binaries, i.e. binaries that host multiple "apps" and their dependencies, and use a command-line option to launch a specific "app" from the binary (or, without a command-line option, present an app list from which the user can select the app to run).
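
A toy sketch of the dispatch part, busybox-style (the "apps" and names are made up):

    $ cat > multi.c <<'EOF'
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    static int app_hello(void) { puts("hello"); return 0; }
    static int app_bye(void)   { puts("bye");   return 0; }

    int main(int argc, char **argv) {
        /* dispatch on the name we were invoked as, falling back to argv[1] */
        const char *name = basename(argv[0]);
        if (strcmp(name, "hello") != 0 && strcmp(name, "bye") != 0 && argc > 1)
            name = argv[1];
        if (strcmp(name, "hello") == 0) return app_hello();
        if (strcmp(name, "bye") == 0)   return app_bye();
        fprintf(stderr, "unknown app: %s\n", name);
        return 1;
    }
    EOF
    $ gcc -static -o multi multi.c
    $ ln -sf multi hello && ./hello
    hello
    $ ./multi bye
    bye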

still_grokking

> The cool thing would be multi-host static binaries

My first thought was someone is thinking about something like:

https://ahgamut.github.io/c/2021/02/27/ape-cosmo/

There is also something similar that can make binaries that run even on bare metal or as EFI apps, but I can't find it at the moment. (Maybe someone else has the link?)

still_grokking

So you want, in the extreme, to package a distribution as a unikernel?

Or add a binary launcher in front of a squashfs image of a live distro?

Something like that could be done with AppImage I guess.

cyberpunk

So… busybox?

agumonkey

à la carte staticity

stabbles

I've never seen that linker trick for avoiding rewriting rpaths before! Very cool

stabbles

How does this work when locating libraries recursively though? It seems --inhibit-rpath only applies to the current executable:

    # Create a.out <- one/libx.so <- one/sub/liby.so
    $ mkdir -p one/sub
    $ echo 'int g(){return 42;}' | gcc -shared -o one/sub/liby.so -x c -
    $ echo 'extern int g(); int f(){return g();}' | gcc -shared -o one/libx.so -x c - -Lone/sub -ly '-Wl,-rpath,$ORIGIN/sub'
    $ echo 'extern int f(); int main(){return f();}' | gcc -x c - -Lone -lx '-Wl,-rpath,$ORIGIN/one'

    # works fine
    $ ./a.out
    $ echo $?
    42

    # can't locate library through rpath with --inhibit-rpath
    $ /lib64/ld-linux-x86-64.so.2 --inhibit-rpath '' ./a.out
    ./a.out: error while loading shared libraries: libx.so: cannot open shared object file: No such file or directory

    # *does* locate liby.so through rpath!
    $ /lib64/ld-linux-x86-64.so.2 --inhibit-rpath '' --library-path one ./a.out
    $ echo $?
    42

dataflow

Tried it, didn't work.

Arch:

  $ exodus /usr/bin/git > ./foo
Ubuntu:

  $ ./foo
  Installing executable bundle in "${HOME}/.exodus"...
  Successfully installed, be sure to add ${HOME}/.exodus/bin to your $PATH.
  $ ~/.exodus/bin/git --version
  fatal: cannot handle x as a builtin

elteto

The docs explain that internally ~/.exodus/bin/git is actually a shell wrapper that calls ~/.exodus/bin/git-x (which itself might be a symlink). The -x suffix is added by exodus.

My guess is that git is parsing argv[0] to try and determine which porcelain command to launch and gets very confused by the -x because it is not a git builtin command.
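
If the docs are right that it's a shell wrapper, inspecting the installed entry point shows what it actually execs (paths taken from the comment above):

    $ cat ~/.exodus/bin/git     # the wrapper script
    $ file ~/.exodus/bin/git-x  # the real (possibly symlinked) binary it calls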

ljm

Git is extended by adding `git-$commandname` executables to $PATH. So `git-x` would extend git with a new command called `x`, i.e. `git x`.
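
For example, any executable named `git-<something>` on $PATH becomes a git subcommand (this assumes ~/bin is already on your $PATH; the command name is made up):

    $ printf '#!/bin/sh\necho hello from git-foo\n' > ~/bin/git-foo
    $ chmod +x ~/bin/git-foo
    $ git foo
    hello from git-foo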

unbanned

So anything that relies on the name of the executing file will break.

wffurr

“If you ever try to relocate a binary that doesn't work with the default configuration, the --detect option is a good first thing to try.”

From the README file…

dataflow

I get literally the same error with --detect. And honestly, when the README says "transferring a piece of software that's working on one computer to another is as simple as this", you can't blame me for taking that at face value.

foob

I'm the author and I agree that you're correct in assuming that what you ran should work. I just tested this with git on arch and I was able to reproduce the issue. I'll look into why this is happening and hopefully push up a fix soon, but I also invite you to try it out with another binary in the meantime. There seems to be something particular about git, and I think you'll have better luck trying it with almost any other ELF binary.

still_grokking

What are the pros and cons compared to AppImage?

This tool seems easier to use, but the high-level result seems quite similar.

Would it maybe even make sense to integrate both tools? I think AppImage is compressed, so it has an advantage on that point.

growt

There is also outrun: https://news.ycombinator.com/item?id=26504131

Haven’t tested it myself yet, but I've always wanted to.

CyberShadow

You can use ~ in PATH? This seems to go beyond normal shell expansion - even `which` returns a path with ~ in it, so it must be aware of the syntax.

y4mi

The tilde is usually parsed/replaced by the shell, so yes, you can use it for path definitions in the shell. The expansion is disabled by single quotes, however. A lot of my colleagues were confused by that.
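
For example, in bash (the expanded path assumes a home directory of /home/user):

    $ echo ~/.exodus/bin        # unquoted: expanded by the shell
    /home/user/.exodus/bin
    $ echo '~/.exodus/bin'      # single quotes suppress tilde expansion
    ~/.exodus/bin
    $ echo "~/.exodus/bin"      # double quotes suppress it as well
    ~/.exodus/bin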

CyberShadow

In this case it is not parsed by the shell. Double quotes disable tilde expansion just as well as single quotes do. Try running the command in the video and then run `env`.

teknopaul

Lot of work going on in this area at the moment.

I still

    apt update; apt upgrade

and look for config in /etc.

Linux is a community, sharing space is important to me.