Hacker News

Ask HN: What's Prolog like in 2024?

Hi, I'm a compsci student who stumbled upon Prolog and logic programming during my studies.

While I've seen the basics of vanilla Prolog (atoms, predicates, cuts, lists and all that jazz), and a godawful implementation of an agent communication system that runs on SICStus Prolog, I'd like to know more, because I think this language might be a powerhouse in itself.

Since my studies are quite basic in this regard, I'd like to expand my knowledge and kind of specialize both in this world and in another one (ontologies :D) that I really enjoy.

What's Prolog like in 2024? What are you wonderful people doing with it?

Thanks from a dumbass :D


upghost

Prolog has reached an exciting new milestone with Scryer Prolog: the first highly performant, open-source, ISO-compliant Prolog.

I would check out Markus Triska's work to have your mind blown:

https://www.metalevel.at/prolog

https://youtube.com/@thepowerofprolog

mark_l_watson

I interviewed and helped hire Mark Thom, the original author of Scryer. I also follow Scryer with interest, even though most of my limited Prolog use has been with SWI Prolog (and one large project with ExperProlog in the 1980s).

One thing to check out: Prolog plays fairly well with Python, providing opportunities for hybrid projects.

klibertp

Speaking of playing well with Python, this was on the front page some time ago: https://arxiv.org/abs/2308.15893

"The Janus System: Multi-paradigm Programming in Prolog and Python"

philzook

I am quite pleased with the ability to easily use Prolog from within Python and vice versa. For my tastes, it's now one of the easiest and most expressive solvers to plug into. I'm starting to accumulate useful solvers here https://github.com/philzook58/prologsolvers/tree/164297d87f6...

You need to install SWI-Prolog (https://www.swi-prolog.org/download/stable) and then pip install janus_swi.

A simple example to get started: https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...

  import janus_swi as janus

  # Load a small edge/path program; ":- table path/2" memoizes results.
  janus.consult("path", """
  edge(a,b).
  edge(b,c).
  edge(c,d).

  :- table path/2.
  path(X,Y) :- edge(X,Y).
  path(X,Y) :- edge(X,Z), path(Z,Y).
  """)
  print(list(janus.query("path(a,Y).")))

nextos

On the topic of multi-paradigm programming, including logic programming, Oz/Mozart is an obligatory mention. See CTM (the book Concepts, Techniques, and Models of Computer Programming) and http://mozart2.org/mozart-v1/doc-1.4.0/tutorial/index.html.

The authors were fairly prominent Prolog researchers. It's sad that Van Roy is retiring and nobody is taking this forward. Alice ML, a Standard ML dialect inspired by Oz, is also abandonware.

mark_l_watson

Hey, thanks! That looks cool.

nerdponx

How do you normally use Prolog and Python together? I had looked into embedding logic programming within Python in the past, and found a lack of satisfying options, but maybe I didn't know where to look.

mark_l_watson

I have two short examples in one of my books, which I am currently rewriting. Here is a link directly to the Python + Prolog interop examples: https://leanpub.com/pythonai/read#use-predicate-logic-by-cal...

overclock351

Do you have any papers comparing Scryer with other Prolog systems (like SWI-Prolog or SICStus Prolog) performance-wise?

jodrellblank

There are some benchmarks here of SWI-Prolog's benchmark suite run on different Prolog systems, by Jan Wielemaker, the SWI-Prolog author:

https://swi-prolog.discourse.group/t/porting-the-swi-prolog-...

He finds Scryer performs worse, which he comments on; he also explains some tradeoffs and historic choices in SWI's design that affect its performance. I think I've seen the author of Scryer say that's not surprising, since Scryer is still building up core functionality where SWI has had 30+ years to optimise, but I don't remember where I read that.

SWI has a document explaining some strengths and weaknesses regarding performance: https://www.swi-prolog.org/pldoc/man?section=swiorother

Edit: some discussion on Scryer previously on HN: https://news.ycombinator.com/item?id=28966133

jfmc

Another table (in the same thread) comparing more systems: https://swi-prolog.discourse.group/t/porting-the-swi-prolog-...

b800h

So SWI appears to be more performant, and it has an open license, so per the GGP's claim about Scryer in the post above, it must not be ISO-compliant?


gorkempacaci

Prolog, and constraint programming especially, are great to have in your toolbox. I've done research in the field for years, and my job in industry today is writing Prolog. There are real issues with Prolog, though:

- no proper module or package system in the modern sense.

- in large code bases, extra-logical constructs (like cuts) are unavoidable and turn Prolog code into an untenable mess. SWI-Prolog has single-sided unification guards, which tackle this to a degree.

- the lack of static, strong typing makes it harder to write robust code. At least some strong typing would have been nice; see Mercury as an example of this.

All that being said, Prolog is amazing, has a place in the future of programming, and gives you a level-up in understanding of programming when you realize that the type system of every OO program is itself a Prolog program.

tannhaeuser

I'd advise against using Prolog as a general-purpose programming language; use it as an embedded DSL, or as a service, for the part it's really suited to (if your app involves exploration and search over a large combinatorial space in the first place, as in discrete optimization in industry, logistics, and finance). You really don't need yet another package manager and pointless premature modularization for modelling your business domains in optimization.

cmrdporcupine

I've tried casually to take this approach, and what I've found is that basically none of the Prologs out there are designed to be properly embeddable. Even Scryer Prolog, written in Rust, isn't really set up to be linked into a Rust program and run that way. I was able to sort of make it happen, but it wasn't a workflow anyone had optimized for.

To be clear, what I'd like is to be able to fire up a thread hosting a Prolog runtime, stick predicates into it, and query it using an API in the host language's syntax. Instead, the best I could do was munge strings together and parse results out, sort of. And that was after a bunch of time spent trying to reverse-engineer Scryer's API.

I would love to embed a Prolog to host my application's business rules and knowledge. I could see it super useful in a game even (think of the myriad of crazy rules and interactions and special cases in a game like Dwarf Fortress...)

tannhaeuser

Yeah, the preferred approach would be to run a Prolog engine as a service and access it via the usual JSON-over-HTTP/REST protocols. That has the benefit of letting you adapt and scale/provision the specific Prolog engine load as well. For smaller projects I guess you could use miniKanren, which as I understand it is designed specifically for embedding, but even standard job-shop scheduling and factory/office resource-planning tasks would already be better served by a Prolog (micro-/whatever) service, IMO.

Karrot_Kream

At this point, why not use one of the many other CP solver packages out there, or the layers on top of them like OR-Tools?

tannhaeuser

The domain-specific Prolog code bases you're going to create still can become large and represent a significant development effort. Prolog being an ISO standard with many conformant (or at least mostly conformant) implementations available and relatively strong mindshare and ecosystem compared to extremely niche "CP solver packages and OR-tools" (which one exactly?) significantly reduces project risks such as not being able to find experts, the system not meeting functional or performance requirements, or becoming obsolete down the road. The same cannot be said for some mythical "CP solver packages and OR-tools"; you've nowhere to go if your "CP solver packages and OR-tools" project fails. Optimization and scheduling/planning projects, by their nature, are somewhat experimental and need exploration. It would thus be very difficult to pick "CP solver packages and OR-tools" upfront.

throwaway3306a

Which one is as developed, as universal and as capable as Prolog with CLP and/or DCG?

Serious question, I'd like to have something that's easy to integrate with Node.js.

jimbokun

To me this makes Prolog sound like a tool to reach for similar to SQL. Specialized language for asking specific kinds of search or query over your data.

gorkempacaci

Indeed, Prolog programs are sometimes also called databases. Some things Prolog can do over SQL:

- infinite data defined by recursive predicates

- flexible data structures (think JSON but better, called compound terms) and a way to query them (the unification algorithm)

- an execution strategy fine-tuned for reasoning (the resolution algorithm). You could do this with SQL, but you'd have to formalize things using set operations and it would be very, very slow.

On the other hand, SQL can query plain data very very fast.
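To give a flavour of what "flexible data structures plus unification" buys you over flat SQL rows, here is a deliberately minimal unifier for nested terms in Python. It's a toy illustration, not how a real Prolog engine works (no occurs check, no clause indexing), and the term encoding is made up for this sketch:

```python
# Minimal structural unification over nested terms, in the spirit of
# Prolog's unification algorithm. Variables are strings starting with
# an uppercase letter; compound terms are tuples like ("edge", "a", "X").

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound var.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    # Returns an extended substitution dict, or None on failure.
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Query-like matching against a nested "row":
fact = ("person", "alice", ("address", "main_st", 42))
query = ("person", "Name", ("address", "Street", "N"))
print(unify(query, fact))
# {'Name': 'alice', 'Street': 'main_st', 'N': 42}
```

One two-way match does what would otherwise take both destructuring and binding code; in Prolog, the same mechanism also drives clause selection.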

wwweston

I've also wondered why Prolog, or at least Datalog, isn't available and used more widely as a query layer. The promise of SQL's natural-ish language never led to adoption among non-tech workers even approaching the popularity of the spreadsheet, so the rationale for that style of syntax didn't really pan out, and Prolog would appear to have some syntactic and capability advantages.

inkyoto

I concur. Prolog particularly excels as an advanced, embeddable configuration DSL that lets one express system configurations that would otherwise not be easily possible in a bespoke configuration language or format. I have used an embedded Prolog core to express complex installation configurations in the past with great success, and I would do it again for the right problem space.

hosh

The cluster autoscaler in Kubernetes uses a constraint solver. It translates configuration against the dynamic, changing state within the cluster.

Using something like an embedded Prolog or miniKanren as the core of a Kubernetes operator is something I've wanted to try my hand at.

kibwen

> when you get how the types in every OO program is a Prolog program itself

"Any sufficiently complicated type system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Prolog."

Arch-TK

Any sufficiently complex type system is indistinguishable from an esolang.

ToucanLoucan

Maybe it's just me, but I see the lack of a package manager as a massive, massive pro. I can't stand how seemingly every language has a package manager that requires its own installation, and you have to learn how to use THAT thing, and then you need some library off GitHub that does some minor task really well but you can't just download the fucking code; you have to import it via, idk, the Fork-Lyft manager, which requires Python 3.3 and the PillJump framework, and it's just like, I just want a fucking function to parse JSON, I don't want to saddle my system with 600 MB of shit I don't need.

Old_man_yells_at_cloud.jpg

phailhaus

You can always just download the code, nobody's forcing you to use a package manager. It just turns out that unless you want to spend most of your life building and fixing other people's code, it's much easier to use the package manager. The inefficiency is the price we pay, but it's worth it.

duranga1234

Me too! I absolutely see a lack of package manager as a pro. I also hate to saddle anything with 600MB I don't need. 100% agree.

I would go as far as to say that Prolog is more of a problem-solving language than a system-building language. Package managers and module systems are for modularizing big systems; you don't need that when solving small recurrent problems. Furthermore, the lack of them forces you to avoid dependencies, which most of the time would end up as technical debt. IMHO.

qu1j0t3

don't confuse "module system" with "package manager"

radomir_cernoch

You write Prolog code for a living? Where? Do you happen to have a story to share? I'm very curious.

gorkempacaci

Yes :) We make software that helps sell complex products. (If your product has a million options and takes up a whole factory floor, you can't just have a series of dropdowns.)

_glass

Thanks for the insight, Görkem! I always thought CPQs made a really good use case. We had so many problems with performance (not with your product). CPQ is becoming standard software for pricing contracts.

ecshafer

There are a lot of problems that Prolog / constraint programming will solve very elegantly, and much more easily than imperative languages. I think constraint-based programming is seriously underused in industry, and too many programmers are unaware of it or unable to write constraint-based code. I've always hoped for a constraint-programming subsystem in a lot of languages, for those niche cases.

grose

It's great to hear new people are interested in the language! I was enlightened a couple years ago and fell in love.

Currently I'm focusing on creating easy-to-use embeddings of Trealla Prolog using Wasm. You can find my TypeScript library here: https://github.com/guregu/trealla-js and Go library here: https://github.com/trealla-prolog/go. The goal is to make the libraries as painless as possible. Trealla is a portable and lightweight Prolog written in C that supports CLP(Z) and is broadly compatible with Scryer. It's quite fast! I'm currently using it for some expert system stuff at $work and as an internet forum embedded scripting language for $fun.

Speaking of Scryer, they recently got their WebAssembly build working and I hope to contribute a JS library for them in the future as their API stabilizes. Scryer and Trealla are both aiming for ISO compatibility, so it's my hope that we can foster an ecosystem for modern ISO Prolog and provide more embeddings in the future. It's super convenient to get logic programmer superpowers in your favorite language. Also check out Scryer's new website: https://www.scryer.pl/

For something on the silly side, check out https://php.energy, the Prolog Home Page. It's web scale :-). It's proof that you can integrate Prolog with bleeding-edge stuff like Spin (the server-side Wasm ecosystem).

Suppafly

>Q: What if Prolog is not suitable for my employer’s problem domain?

>

>Prolog is not suitable for any problem domain, although this is more readily apparent for some domains than others.

At least they are honest about it LOL

drmeister

Dang, substitute Lisp for Prolog and this describes me. Seriously though - Prolog is an awesome tool to have in your toolbox. I've implemented Prolog-like logic programming solutions in several places in my 40+ years of programming. Like rules for assigning molecular mechanics force field atom types.

infinite8s

> Like rules for assigning molecular mechanics force field atom types.

Can you describe a bit more how prolog helped you here? Thanks!

DonHopkins

If it's an official production system you want, then use OPS5, not Prolog!

https://en.wikipedia.org/wiki/OPS5

>OPS5 is a rule-based or production system computer language, notable as the first such language to be used in a successful expert system, the R1/XCON system used to configure VAX computers.

>The OPS (said to be short for "Official Production System") family was developed in the late 1970s by Charles Forgy while at Carnegie Mellon University. Allen Newell's research group in artificial intelligence had been working on production systems for some time, but Forgy's implementation, based on his Rete algorithm, was especially efficient, sufficiently so that it was possible to scale up to larger problems involving hundreds or thousands of rules.
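The match/act cycle of a production system can be sketched in a few lines of Python. This is a naive fixed-point loop, not the Rete algorithm (whose point is to match rules incrementally instead of rescanning everything), and the configuration rules below are made-up stand-ins for the kind of thing XCON encoded:

```python
# Naive forward-chaining production system: fire every rule against the
# working memory until no rule can add a new fact.

def run(facts, rules):
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:
            return facts
        facts |= new

# Toy configuration rules, loosely in the spirit of XCON's
# "configure a VAX" domain (the fact names here are invented):
def needs_bigger_psu(facts):
    return {("add", "psu_500w")} if ("cpu", "fast") in facts else set()

def psu_needs_fan(facts):
    return {("add", "extra_fan")} if ("add", "psu_500w") in facts else set()

result = run({("cpu", "fast")}, [needs_bigger_psu, psu_needs_fan])
print(sorted(result))
# [('add', 'extra_fan'), ('add', 'psu_500w'), ('cpu', 'fast')]
```

With hundreds or thousands of rules, rescanning the whole fact set each cycle is exactly the cost Forgy's Rete network was designed to avoid.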

wduquette

I used DEC's VAX OPS5 for a couple of years around 1990. I quite liked it, and the later versions had some really nice extensions over Forgy's original design.

Then we discovered that our particular rule base could easily be ported into C using a sequence of nested if/thens that ran much faster, and we stopped using OPS5. It was a great tool for doing the initial development, though.

overclock351

Looks fun :D. I think that if I asked my manager to build something out of Prolog I would probably get stab... I mean, fired, since most of us work in OOP. I would love to be the insane one asking for that :D.

Avshalom

You can use https://logtalk.org for OOP in Prolog; run it on top of SWI and you have bidirectional bridges to Python and Java:

https://www.swi-prolog.org/FAQ/Python.md

https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...

nickpeterson

I only have one thing to say to this man, “hey! Quit stealing my moves!”

chx

> Prolog is not suitable for any problem domain, although this is more readily apparent for some domains than others.

Fuckin' A.

jjtheblunt

what does that mean?

marcosdumay

It's an excerpt from the article. Explaining it out of context would be worthless.

z5h

In theory, Prolog is the king of languages: simultaneously a logical formalism, a language for computation (given a resolution system), AND the ultimate metaprogramming language, since it's homoiconic but only goals are evaluated (there is no eager/lazy evaluation fuss; a term is just a term), and goals can only succeed (and have any consequence) if there is already a matching clause.

In practice, there are some very performant and maintained implementations with small but helpful communities.

Also in practice: with all of this power, it's clear that anything could be done (well) in Prolog, but it's not always clear what that way might be. DCGs are an example of a beautiful, elegant, simple, powerful way of building parsers (or state machines) that was not immediately evident to the Prolog community. The perpetual conundrum as a user will be: "I could do it this way, but there are certainly better ways of doing this, and I have many avenues I could explore, and I don't know which might be fruitful on what timeline."
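For readers who haven't met DCGs: a DCG rule threads the "rest of the input" through each nonterminal as a hidden pair of list arguments. A rough Python analogue (a hand-rolled sketch, not a mechanical translation of DCG semantics) treats each nonterminal as a generator of possible remainders:

```python
# Rough Python analogue of a Prolog DCG: each nonterminal is a function
# taking the input string and yielding possible remainders, mirroring
# the S0/S difference-list pair a DCG threads through its rules.

def lit(ch):
    def parse(s):
        if s.startswith(ch):
            yield s[len(ch):]
    return parse

def seq(*parsers):
    def parse(s):
        if not parsers:
            yield s
            return
        first, rest = parsers[0], parsers[1:]
        for s1 in first(s):
            yield from seq(*rest)(s1)
    return parse

def alt(*parsers):
    def parse(s):
        for p in parsers:
            yield from p(s)
    return parse

# ab --> "ab" | "a", ab, "b".   (the classic a^n b^n grammar)
def ab(s):
    yield from alt(lit("ab"), seq(lit("a"), ab, lit("b")))(s)

def accepts(s):
    return any(rest == "" for rest in ab(s))

print(accepts("aaabbb"), accepts("aab"))
# True False
```

The Prolog version is the two-line grammar in the comment; nondeterminism and the input threading come for free from the engine rather than from generator plumbing.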

jodrellblank

What is it like? 50 years of historic cruft. Questionable whether there are more trip hazards than usefulness for ordinary coding. A fractured community which feels like there are more Prolog systems than Prolog code. Learning Prolog is less "how do I do things in Prolog" and more "how do I contort my things to avoid tripping over Prolog?".

A few dedicated clever people and idealists and dreamers talking about ontologies and building things I don't understand, e.g. the link in https://news.ycombinator.com/item?id=40994780 that could either be genuinely "Prolog is suitable for things no other language is" or "Fusion is 10 years away" or "Perpetual motion is here and so is cold fusion!", I can't tell. But I suspect from the lack of visible activity out in the wider world, closer to the latter than the former. Or perhaps the people able to make use of its strengths are few and far between.

There's a saying about driving to a town which has been hollowed out and is now a road through some empty store fronts and car parks: "there's no there there". The soul of a place is missing, it's no longer a destination, just some buildings on some land. Prolog has the opposite of that, a main road straight past it, few buildings or people, but there is a there there - an attractor, spark of something interesting and fun. Buried in years of cruft. Might be a Siren's call though, a trap - but if it is it appears less dangerous than the LISP one.

everforward

> A few dedicated clever people and idealists and dreamers talking about ontologies and building things I don't understand

I was briefly deeply interested in ontologies via OWL and I suspect Prolog has the same issues that I think plague ontologies in general.

They are a fantastic tool for a system complex enough to be nearly useless. Modelling an ontology for a reasonably complex domain is unreasonably difficult. Not because the tools are bad, but because trying to define concrete boundaries around abstract ideas is hard.

What is a camera? A naive attempt would say "an item that takes pictures," but that would include X-ray machines. Are deep-space radio telescopes cameras? Trying to fix those issues then causes second-order issues: you can say it's something that captures images from the visible-light spectrum, but then night-vision cameras aren't cameras anymore.

The reasoning systems work well, they just don’t solve the hard part of designing the model.

strangattractor

I had similar discussions with people who wanted to encode published research into ontologies. I would ask researchers what they thought; the answer was always "great idea." I would then follow up with: how would you use it? No response. I finally concluded that it would never happen.

1. No one wanted it enough to pay for it to happen.

2. There is always a turnover of ideas coming and going, and the ontology could never be updated enough to stay useful. Again, no one would pay anyway.

Tools like LLMs seem to fill that role now. I would like to see Prolog integrated with LLMs in some way (though my imagination fails me as to how that would happen).

rscho

A theorem prover for the medical literature:

https://github.com/webyrd/mediKanren

http://minikanren.org/workshop/2020/minikanren-2020-paper7.p...

Not Prolog, but it gives an idea of the goals behind the classification of science papers.

ta988

How would you use it? For searches.

If I want to find something in the brain but not in bone structures. If I want to find something in a kind of cell that has a nucleus.

They are also extremely useful for automated annotation. Your automated system may annotate with a broader term because it doesn't have enough information to be more precise. That's already a big help for a human who comes along later to add a more precise term.

We are at a convergence of technologies: ontologies, graphs, LLMs, and logic programming. A lot of people were too early on this and were discouraged from pursuing it further by people who couldn't grasp why it was so important.

varjag

This is why Lenat and Cyc settled on microtheories. They found it impossible to build a useful universal ontology, so they had to fracture it along domain boundaries.

imglorp

I was just pondering if the Prolog universal quantifier would be applicable to reasoning about Cyc frames. Does your comment imply it's not?

riku_iki

Your camera example demonstrates that human knowledge is loosely structured and formalized in general, so you can't create a strict ontology. One way to work around this is to assign confidence scores to statements, so you end up with something like: that Nikon device is likely a camera, and an X-ray machine is unlikely to be a camera, based on the current world model.

drdeca

I don’t see an issue with saying “X-ray photography machines, and deep-space radio telescopes, are (or at least contains-a, in the case of the telescope) cameras”. They just aren’t ordinary cameras of the sort that a typical person might take a picture with.

I think most of the reasoning you would want to do with a concept of “camera” that excludes X-ray machines and telescopes, but includes night-vision, could be handled with “portable camera”?

Hm, I guess you probably want to include security cameras though..

Ok. “Portable cameras or security cameras”.

HelloNurse

A universal ontology cannot have any notion of an "ordinary" camera, not because of expressive limitations but because it's subjective.

Is a CAT machine a camera? Maybe only its sensor and the computers that reconstruct images? Maybe just the sensor? It mostly depends on your location in the supply chain.

Is a box with a projection plane and no means to capture images a camera? Before about 1830, definitely (and then making photographs became a simple upgrade for your "camera obscura").

wslh

Yes, I think that matches the experience in what we called (or call) data science: most of the time is spent on ETL rather than on ML methods. In a real company, the difficulty of linking data is not technical; it's that it consumes time and resources.

worik

Ontology: Study of the nature of being, becoming, existence or reality, as well as the basic categories of being and their relations (philosophy)

What does that have to do with this?

Is there some use of "ontology" in logic I have not heard of?

everforward

It would be this version of ontology: https://en.wikipedia.org/wiki/Ontology_(information_science)

Loosely speaking, ontologies are categories of objects defined by their attributes and relationships to other things. Where a hierarchy is a branching structure in which each item appears only once, an ontology does not require everything to stem from a single root node, and items can appear in more than one place.

It's a way of working around how hierarchies can't model some things very well. E.g. "bipedal" is an attribute that can apply to both animals and robots; where does it go in a hierarchy so that it can apply to both without also implying that robots are animals or vice versa?
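That multiple-parent structure is easy to sketch in code. The toy is_a table and the concept names below are hypothetical, but they show how one concept can sit under several parents and how a query walks the edges transitively:

```python
# Tiny ontology sketch: unlike a strict hierarchy, a concept may have
# several parents; a query computes the transitive closure of "is-a".

is_a = {
    "human": {"animal", "bipedal"},
    "robot": {"machine"},
    "walking_robot": {"robot", "bipedal"},
    "dog": {"animal"},
}

def ancestors(concept):
    # Depth-first walk over all is-a edges, collecting every ancestor.
    seen = set()
    stack = [concept]
    while stack:
        c = stack.pop()
        for parent in is_a.get(c, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("walking_robot")))
# ['bipedal', 'machine', 'robot']
print("bipedal" in ancestors("human"))
# True
```

"bipedal" hangs off both "human" and "walking_robot" without ever implying that robots are animals, which is precisely what a single-rooted hierarchy can't express.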

zdragnar

Domain-Driven Design (one of those things, like Agile, that triggers all sorts of holy wars) has a lot of overlap with the general concept of ontologies, to the point that I've seen some teams formalize all communication between microservices through a shared "ontology", which in reality was essentially a giant XML-based description of the valid nouns and verbs that events could use to communicate between services.

Additionally, there's a good deal of overlap with the "semantic web" concept, which itself had a good deal of hype with very limited (but important) application. Even the W3C has published content on how all three fit together: https://www.w3.org/2001/sw/BestPractices/SE/ODA/

scheme271

Maybe more in philosophy and classic general AI. Basically, ontologies are systems for categorizing and classifying knowledge. E.g. if you want to reason about self-driving, you would have an ontology that lets you separate traffic signs from billboards.

mannyv

In this context ontology means common vocabulary/categories.

chamomeal

What do you mean by LISP as a siren call?

I’ve just started learning clojure and besides the lack of static types (which is pretty harsh for me), it seems like a fun and practical language.

wk_end

Imagine it's, like, 1980 - or even earlier - and you can work in a language roughly as nice as Clojure, except the rest of the world is stuck working with pre-ANSI C or Pascal or FORTRAN or COBOL or raw assembly language. There's no Python or Java or C# or Ruby or Perl or Haskell or Scala or Kotlin or Rust or JS/TS. Nothing really resembling our modern idea of a high-level language.

(OK, there was Smalltalk. Let's ignore Smalltalk. Lord knows everyone else did.)

That'll alter your perception of reality a bit. Here they were, in possession of a tool massively more powerful - and more elegant - than what everyone else is using. And moreover, everyone else took a look at it and turned their noses up.

Clearly, you and your fellow Lisp programmers are a different breed, capable of seeing further than the rest of the unwashed masses. In a word, you were better than them.

It sounds like I'm being disparaging, but to a certain extent, I don't even think this was totally a wrong attitude to have. Elitist, definitely, but not wholly unwarranted. Lisp really was, in terms of expressiveness anyway, that far ahead of the competition. And yet somehow that competition won. The world is cruel and unjust.

So Lisp becomes a kind of Us v. Them cult: if you've heard the good word of McCarthy, you're one of Us. If not, you're best ignored - too stupid to possibly have anything worthwhile to say.

(If you think I'm exaggerating, spend some time reading the words of Usenet Lisp institution Erik Naggum - R.I.P. - who serves as the most extreme but hardly the only example.)

This blinded Lisp diehards to the outside world, which slowly but surely, in many respects, began to catch up or even exceed Lisp.

The other thing is: not only is Lisp a powerful language, at its core is a beautiful, simple, expressive mathematical idea. Combine that with the way macros allow you to extend the language virtually infinitely, and there can be a near-religiosity at the heart of Lisp: from one lambda all things depend. Lisp isn't just good engineering; it's a glimpse at the fundamental nature of computation, of the universe itself.

I'm not going to sit here and tell you that this is somehow a terrible thing, per se. But it can be incredibly alluring to the right kind of mind, and once you're in its thrall it's hard to get out. You might be working with the tool, but in another sense the tool is working on you. A Siren Song.

forgetfulness

There's also the not-at-all-small factor that there was glamour to Lisp.

Early Internet discourse around programming was dominated by people who had ties to elite universities in the 1980s, and who yearned for the days when the US government was throwing an abundance of money at the AI industry of the era.

They were the ones rubbing elbows with researchers from MIT, Stanford, Harvard, and Berkeley, who were using specialized hardware and software beyond the capabilities available to that of developers working on more mundane applications, all graciously funded by DARPA initiatives.

That experience was, in truth, unrelatable to young people reading the recollections of ESR and RMS of the period, the in-jokes of these people, their ideas and interactions, but the tales of Lisp, the Lisp hackers and their fabled Lisp machines would be extremely appealing to someone who was very passionate about programming, striving for excellence as a programmer, and to advance in life through merit. Paul Graham would seal the deal with his essays.

pjmlp

> (OK, there was Smalltalk. Let's ignore Smalltalk. Lord knows everyone else did.)

As someone who used Smalltalk/V on Windows 3.x and was aware of Smalltalk's role in OS/2 and SOM, alongside the whole VisualAge line of products before Sun came up with Java: there were enough people looking at Smalltalk until 1996.

asimpletune

Learning about the curse of lisp is always an eye opening point in one’s career

chamomeal

Wow, thank you for the context!! That was a fun read. And definitely explains some of the stuff I’ve read about Lisp. (I only ever thought to look into lisp because of this xkcd https://xkcd.com/224/)

epgui

Clojure is probably the most beautiful language I've ever worked with. Nothing is perfect, but Clojure is very simple and elegant.

7thaccount

Only downside is I don't know Java, so some things that should be obvious are opaque to me.

xigoi

What makes Clojure a non-starter for me is that it runs on the JVM.

ethagnawl

core.logic is pretty neat, too. Especially as it applies to this thread and the ancestor comments.

jolt42

Lisps don't get in your way, but they also have no opinions, which is problematic for community development. As a static-type fan, it's the only language where I think the pros outweigh that one con.


slashdave

Who still has nightmares of infinitely nested parenthesis?

forgetfulness

The more nightmarish thing about Clojure is realizing that, in truth, you have no idea what all these dicts you are passing around the terse, nil-punning functions of your codebase hold at any given time.

nogridbag

That was the case for me. I went all in drinking the Clojure koolaid and wrote some small internal CLI tools with it. If I came back to that code a month or two later I could only properly understand it if I opened up a REPL to debug it. I ported those tools to Java and they were dead simple to comprehend.

koito17

Yup. I learned Clojure just so I can use a Lisp and get paid for it, but there is some weird cult against all forms of typing. Even coming from a Common Lisp background, this was strange to me. In Common Lisp, there are implementations (like SBCL and ECL) that can make use of type declarations to produce efficient machine code and allow the compiler to catch errors that would otherwise be run-time errors. There's also other benefits like contextual autocomplete. The autocomplete in Clojure tooling is very basic, and many Clojure libraries try to make up for this by using qualified keywords everywhere. That way, rather than seeing all keywords ever interned, you can type ":some.namespace/" and your editor shows a dozen keys instead of hundreds of unrelated keys.

Many in the Clojure community believe that occasionally validating maps against a schema "at the boundaries" is good enough. In practice, I have found this to be insufficient. Nearly every Clojure programmer I know has had to "chase nils" as a result of a map no longer including a key and several functions passing down a nil value until some function throws an exception. (Note: I don't specify which exception, because it depends on how that nil value gets used!)

Refactoring Clojure code in general is a nightmare, and I suspect it is why many in the community are reluctant to change code in existing libraries and build entirely new things in parallel instead. Backwards compatibility is one often-cited reason, but I do think another reason is that refactoring Clojure code creates an endless game of bug fixing unless you have full test coverage of your codebase and use generative testing everywhere. (I've never seen a Clojure codebase with both of these things. I can count on one hand the number of Clojure codebases where generative testing is used at all).

Function spec instrumentation provides something that feels like runtime type checks in Common Lisp, but now you have to manually run certain functions at the REPL just to ensure some change in your codebase did not introduce a type error.

On the flip side, Java has things like DTOs which always felt too boilerplate-ish for me (though at least it provides useful names for endpoint data when generating Swagger/OpenAPI documentation). Even then, records in Java provide what are essentially maps with type safety and similar characteristics as DTOs.

I think the structural typing offered by languages like OCaml and TypeScript provides exactly what I'd want in Clojure. But when faced with feature requests in Clojure, people will state something like "I have never had a use-case for X, therefore you don't need X". In the case of criticisms, the response is often "I may have run into X before, but it's so rare that I don't consider it a problem".

jolt42

They seem to disappear with parinfer.

GistNoesis

The "magic" of Prolog is built upon two interesting concepts : Unification ( https://en.wikipedia.org/wiki/Unification_(computer_science)... ) and Backtracking ( https://en.wikipedia.org/wiki/Backtracking ).
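To make the first of those concepts concrete, here is a minimal first-order unification sketch in Python (purely illustrative — the term encoding and helper names are my own, not from any real Prolog system, and the occurs check is omitted for brevity):

```python
# Variables are strings starting with an uppercase letter;
# compound terms are tuples like ("parent", "john", "X").

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until we hit a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    # Return an extended substitution, or None if the terms don't unify.
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unifying parent(john, X) with parent(john, mary) binds X to mary:
print(unify(("parent", "john", "X"), ("parent", "john", "mary"), {}))
# → {'X': 'mary'}
```

Backtracking is then the search strategy layered on top: when a unification (or a later goal) fails, the engine undoes the bindings and tries the next clause.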

Often bad teachers only present the declarative aspect of the language.

By virtue of being declarative, it lets you express inverse problems in a dangerously simple fashion, but it doesn't provide any clue toward a solution. You then end up using a declarative language to provide clues to guide the weak engine toward a solution, making the whole code an awful mashup of declarative and imperative.

Rules :

- N integer, a integer > 1, b integer > 1

- N := a * b

Goal :

N = 2744977

You can embed such a simple problem easily but solving it is another thing.
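Concretely, some engine still has to search. A naive Python sketch of what solving the N = a * b goal amounts to (trial division — not what any real Prolog engine does, just the brute-force baseline):

```python
def factor(n):
    """Find integers a, b > 1 with n == a * b, or None if n is prime."""
    a = 2
    while a * a <= n:
        if n % a == 0:
            return a, n // a
        a += 1
    return None

print(factor(15))       # → (3, 5)
print(factor(2744977))  # the goal above; None if 2744977 happens to be prime
```

Stating the relation took one line; finding the witnesses is where all the cost lives, and for large semiprimes (the basis of RSA) no efficient method is known.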

The real surge of Prolog and other declarative constraint-programming languages will come when the solving engines get better.

Unification is limited to first-order logic; higher-order unification is undecidable in the general case. So we will probably have to rely on heuristics. By rewriting Prolog goal solving as a game, you can use deep learning algorithms like AlphaGo's (Monte Carlo tree search).

This engine internally adds intermediate logical rules to your simply defined problem, based on similar problems it has encountered in its training set, and then solves them the way an LLM does: heuristically picking the right rule from intuition.

The continuous equivalent in a sort of unification is Rao-Blackwellisation (done automagically by deep-learning from its training experience) which allows to pick the right associations efficiently kind of the same way that a "most general unification algorithm" allows to pick the right variable to unify the terms.

abeppu

> The continuous equivalent in a sort of unification is Rao-Blackwellisation (done automagically by deep-learning from its training experience) which allows to pick the right associations efficiently kind of the same way that a "most general unification algorithm" allows to pick the right variable to unify the terms.

I don't know how to reconcile this statement about deep learning with my understanding of Rao-Blackwell. Can you explain:

- what is the value being estimated?

- what is the sufficient statistic?

- what is the crude estimator? what is the improved estimator?

Roughly, I think sufficient statistics don't really do anything useful in deep learning. If they did, they would give a recipe for embarrassingly parallel training that would be assured to reach exactly the same value as a fully sequential training. And from an information geometry perspective, because sufficient statistics are geodesics, the exploratory (hand-waving) and slow nature of SGD could be skipped.

GistNoesis

Once you view Prolog goal reaching as a game, you can apply Reinforcement Learning methodologies. The goal is writing a valid proof, aka a sequence of picking valid rules and variable assignments.

Value being estimated: the expected discounted reward of reaching the goal. The shorter the proof, the better.

The sufficient statistic : The embedding representation of the current solving state (the inner state of your LLM (or any other model) that you use to make your choices). You make sure it's sufficient by being able to regenerate the state from the representation (using an auto-encoder or vae does the trick). You build this statistic across various instances of problems. This tells you what is a judicious choice of variable based on experience. Similar problems yield similar choices.

The crude estimator: all choices have the same value, therefore a random choice. The improved estimator: the choice's value is conditioned on the current embedding representation of the state using a neural network.

You can apply Rao-Blackwell once again, based by also conditioning one-step look-ahead. (Or at the limit applying it infinitely many times by solving the bellman equation.)

(You can alternatively view each update step of your model, as an application of Rao-Blackwell theorem on your previous estimator. You have to make sure though that there is no mode collapse.)

You don't have to do it explicitly; it happens under the hood through your choice of modelling how to pick the decision.

inkyoto

A unique property of Prolog is that, given an answer, it can arrive at the original question (or, a set of questions – speaking more broadly).

Or, in layman's terms, a Prolog programme can be run backward.

ted_dunning

To be precise, a small number of very small Prolog programs can be run backwards.

There are essentially no significant Prolog programs that are reversible with acceptable efficiency.

derdi

To be even more precise, Prolog programs only ever run forward because the order of evaluation is fixed as top-down, left-to-right. These notions of "forward" and "backward" are very unhelpful and should be given up. Beginners find the order of evaluation hard enough to understand, let's not confuse them even more.

Also, the notion is woefully incomplete. Let's say we consider this "forward":

    ?- list_length([a, b, c], Length).
    Length = 3.
Then you would say that this is "backward":

    ?- list_length(List, 3).
    List = [_A, _B, _C].
Fine, but what's this then? "Inward"?

    ?- list_length([a, b, c], 3).
    true.
And then presumably this is "outward":

    ?- list_length(List, Length).
    List = [], Length = 0 ;
    ...
    List = [a, b, c], Length = 3 .
None of these cases change the order of evaluation. They are all evaluated top-down, left-to-right. The sooner beginning Prolog programmers understand this, the better. The sooner we stop lying to people to market Prolog, the better.
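For reference, all four queries above can run against one and the same definition. A plausible list_length/2 (hypothetical — the comment doesn't show one) in plain ISO Prolog:

```prolog
list_length([], 0).
list_length([_|Ls], N) :-
    list_length(Ls, M),
    N is M + 1.
```

Note that because `is` needs its right-hand side instantiated only after the recursive call, this naive version loops on backtracking after the first answer to list_length(List, 3); declarative integer arithmetic (e.g. #=/2 from CLP(FD)/CLP(Z)) avoids that.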

inkyoto

I was largely joking. Even though the capability is there, it is not computationally practical nor possible to accomplish such a feat for any sufficiently complex programme.

In the most extreme case, attempting to run a complex Prolog programme backwards will result in an increase in entropy levels in the universe to such an extent that it will cause an instant cold (or hot) death of our universe, and life as we know it will perish momentarily (another tongue in cheek joke).

agumonkey

the bidirectional (relational) aspect of prolog is what got me into this. I love symmetries so it was a natural appeal even before I learned about logic programming (Sean Parent made a google talk about similar ideas implemented in cpp). That said it's very limited. But I wonder how far it could go. (the kanren guys might have more clues)

radomir_cernoch

Do you see a good way to include backtracking in an imperative programming language?

I can imagine how unification would work, since the ubiquitous "pattern matching" is a special case of Prolog's unification. But I've never seen how backtracking could be useful...
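One concrete answer (a hypothetical sketch, not a quote from the thread): generators give you backtracking in an imperative language, with each nested generator acting as a choice point. N-queens in Python:

```python
def queens(n, placed=()):
    """Yield every safe placement of n queens, one column per row."""
    if len(placed) == n:
        yield placed
        return
    row = len(placed)
    for col in range(n):
        # Safe iff no earlier queen shares a column or a diagonal.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            # Descend; when the inner generator is exhausted we "backtrack"
            # and the for-loop tries the next column.
            yield from queens(n, placed + (col,))

print(next(queens(6)))            # first solution found
print(sum(1 for _ in queens(6)))  # → 4 (all solutions for n=6)
```

The for-loop over candidates plus `yield from` is exactly the try-a-choice / undo-on-failure structure Prolog gives you for free; what Prolog adds on top is that unification makes the "choices" flow both ways through arguments.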

vmchale

backtracking is perilous in general; logic programming languages have really nice abilities for such but I don't know how to avoid pathological inefficiency.

YeGoblynQueenne

With memoization as in tabling (a.k.a. SLG-Resolution):

https://www.swi-prolog.org/pldoc/man?section=tabling

Re-evaluation of a tabled predicate is avoided by memoizing the answers. This can realise huge performance enhancements as illustrated in section 7.1. It also comes with two downsides: the memoized answers are not automatically updated or invalidated if the world (set of predicates on which the answers depend) changes and the answer tables must be stored (in memory).

Known to the Prolog community since about the 1980's if I got my references right.
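A minimal sketch of what tabling buys (SWI-Prolog directive syntax): the naive Fibonacci below is exponential without the first line and effectively linear with it, because each fib/2 answer is computed once and then looked up.

```prolog
:- table fib/2.

fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1,
    N2 is N - 2,
    fib(N1, F1),
    fib(N2, F2),
    F is F1 + F2.

% ?- fib(30, F).
% F = 832040.
```

Tabling also makes many left-recursive programs terminate that would loop under plain SLD resolution.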

vmchale

Girard has some commentary scattered about his writing.

The search algorithms for logic programming are simply slow. It's a very interesting idea in programming languages, but there's a reason it's not widely used.

> PROLOG, its misery. Logic programming was bound to failure, not because of a want of quality, but because of its exaggerations. Indeed, the slogan was something like « pose the question, PROLOG will do the rest ». This paradigm of declarative programming, based on a « generic » algorithmics, is a sort of all-terrain vehicle, capable of doing everything and therefore doing everything badly. It would have been more reasonable to confine PROLOG to tasks for which it is well-adapted, e.g., the maintenance of data bases.

> On the contrary, attempts were made to improve its efficiency. Thus, as systematic search was too costly, « control » primitives, of the style « don't try this possibility if... » were introduced. And this slogan « logic + control », which forgets that the starting point was the logical soundness of the deduction. What can be said of this control which plays against logic? One recognises the sectarian attitude that we exposed several times: the logic of the idea kills the idea.

> The result is the most inefficient language ever designed; thus, PROLOG is very sensitive to the order in which the clauses (axioms) have been written.

withoutboats3

This is a great quote and sadly true. What text is this from?

YeGoblynQueenne

For me this kind of criticism is very familiar. It comes from theoretical computer scientists who have these purist ideological convictions about how a declarative language should look and behave, that are as unrealistic, because impossible to implement on a real-world computer, as they are uninteresting for practicing programmers because strictly a matter of aesthetics. Such critics have never made anything usable themselves and are simply angry that someone else made something that works in the real world while they were busy intellectually masturbating over their pure and untouchable vision.

Although I concede that my comment might be a bit unfair to Girard who did, after all, invent the mustard watch.

withoutboats3

This comment is idiotic.

thih9

"The Blind Spot: Lectures on Logic" by Jean-Yves Girard

rramadass

Though i only know Prolog cursorily it is in my todo list of languages to study. I think it has great value in that it teaches you a different paradigm for programming.

You might also want to look at Erlang which is used in the Industry and would be helpful for your future. Joe Armstrong was originally inspired by Prolog and he conceived Erlang as Prolog-Ideas+Functional/Procedural+Concurrency+Fault-Tolerance. Hence you might find a lot of commonalities here. Here is a recent HN thread on a comparison - https://news.ycombinator.com/item?id=40521585

There is also "Erlog" (by Robert Virding, one of the co-creators of Erlang), which is described as: "Erlog is a Prolog interpreter implemented in Erlang and integrated with the Erlang runtime system. It is a subset of the Prolog standard. An Erlog shell (REPL) is also included." It also says: "If you want to pass data between Erlang and Prolog it is pretty easy to do so. Data types map pretty cleanly between the two languages due to the fact that Erlang evolved from Prolog." - https://github.com/rvirding/erlog

tannhaeuser

Sure, Erlang was prototyped on Prolog because Prolog has excellent built-in facilities for domain-specific languages: you can define new unary or binary operators along with priorities and associativity rules (you can use this to implement JSON or other expression parsing in like two lines of code, which is kind of shocking for newcomers, but comes in very handy for integrating Prolog "microservices" into backend stacks), and you even get recursive-descent parsing with backtracking for free, as a trivial specialization of Prolog evaluation with a built-in short syntax (definite clause grammars).
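A tiny sketch of both facilities (illustrative names; SWI-Prolog syntax assumed, where backquotes denote a code list):

```prolog
% A new infix operator: `alice likes prolog` now reads as likes(alice, prolog).
:- op(700, xfx, likes).

fact(alice likes prolog).

% A DCG for a run of decimal digits, queried as ?- phrase(digits(Ds), `2024`).
digits([D|Ds]) --> digit(D), digits(Ds).
digits([D])   --> digit(D).
digit(D)      --> [C], { C >= 0'0, C =< 0'9, D is C - 0'0 }.
```

The grammar rules compile to ordinary predicates with two extra arguments threading the input list, so backtracking parsing falls out of normal Prolog evaluation.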

But apart from syntax, Erlang has quite different goals as a backend language for interruption-free telco equipment compared to Prolog.

rramadass

In The Development of Erlang Joe Armstrong says "We concluded that we would like something like Prolog with added facilities for concurrency and improved error handling".

See pdf linked here - https://news.ycombinator.com/item?id=40998632

derdi

You're reading too much into that quote. This is in a section titled "early experiments". It was an initial goal.

There is a lot of historical connection to Prolog, due to the original implementation, and there are syntactic similarities and non-linear pattern matching and dynamic types and a general declarative vibe, but the actual end result of Erlang's evolution, despite the goal of "something like Prolog", is not very much like Prolog at all. Erlang is a functional language, not a logic language. Prolog is a logic language, not a functional language. General goals like in that quote can change over the decade-long development of a language.

YeGoblynQueenne

Yeah but to be honest, Erlang ended up being not something like Prolog at all.

I think Joe Armstrong was a user here and I interacted with him waaaay way back when I first joined. He's dead now :(

btbuildem

Ha! That explains a lot. I've started looking into Prolog recently, and there were some... familiar echoes in there, reminiscent of Erlang.

But of course, the submarine is like a cigar, not cigar like a submarine.

rramadass

The Development of Erlang by Joe Armstrong (pdf) - https://dl.acm.org/doi/pdf/10.1145/258948.258967

ristos

Prolog is a really interesting language. It's like lisp in that, it's definitely worth learning very well, even if you don't find a use-case for it, because the things you learn help you think about programming in a whole new way.

The prolog community is pretty active. SWI has a discourse group. There's SWISH, CLP(FD/Z), abduction via CHR (a rewrite system) or libraries like ACLP. Prolog is homoiconic, and it achieves it in a unique way, via things like functor/3 and =../2 rather than a macro system. There's growing interest in ISO-standard, pure, monotonic prolog for writing large, clean prolog codebases. SWI is the most mature prolog, but Scryer and Trealla are very active and ISO conformant. Trealla is quite embeddable, particularly in javascript codebases. There's also janus for python, and the community is looking to integrate prolog with LLMs.
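As a quick illustration of that homoiconicity, functor/3 and =../2 ("univ") take terms apart and build them at runtime, at the top level:

```prolog
?- T = foo(a, b), T =.. [Name | Args].
T = foo(a, b),
Name = foo,
Args = [a, b].

?- functor(Skeleton, foo, 2).
Skeleton = foo(_A, _B).
```

Because code is just terms, no separate macro layer is needed to inspect or generate it.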

Prolog shines for writing bidirectional parsers, NLP, expert systems, abductive reasoning, and constraint logic programming. Pure monotonic prolog has some very useful properties in terms of debuggability, making it useful for large prolog programs. There's also some interesting work in developing pure io (library(pio)). Prolog also has a few different techniques for coroutining, including shift/reset. Markus Triska has a very nice youtube series and book on prolog that's worth watching/reading.

The main downside to prolog is really just that there's a steep learning curve to it that puts a lot of people off and prevents it from gaining more traction, similar to why langs like lisp, haskell, and idris have trouble gaining traction. SWI has a lot of features, but it's also not ISO conformant, and a lot of libraries aren't portable and/or feel very procedural/imperative, which defeats the purpose of prolog. The useful libraries can often be ported to less popular prologs that are more promising, like scryer and trealla. For example, I managed to port ACLP to trealla yesterday without much effort, which is a pretty useful abductive system for writing expert systems or any sort of abductive reasoning.

ristos

You can also change the search strategy used in Prolog, e.g. via library(search), which supports BFS and iterative deepening. Tabling is also supported.

Another useful tool for homoiconicity is clause/2:

    ?- assertz((foo(X) :- append(X, _, [1,2,3]))).
    true.

    ?- clause(foo(X), Body).
    Body = append(X, _, [1, 2, 3]).

If you really like Haskell and OCaml's pattern matching, you'll probably really love Prolog. Prolog's pattern matching is much more powerful.

sirwhinesalot

Not sure about Prolog itself but Datalog really needs to overtake SQL, it's just so much better.

Related areas like constraint programming are still very relevant.

pfilo8

Could you explain more or point out some interesting references? I'm currently trying to understand how Datalog compares to SQL and, potentially, graph DBs.

felixyz

TypeDb is a practical Datalog-based database system [1] (with a different syntax). TerminusDb is a project in a similar vein [2], but actually an RDF store at its core. If you want to experiment with the connections between Datalog, relational algebra, and SQL, check out the Datalog Educational System [3]. And if you want to jump into the theory, Foundations of Databases (the "Alice book") is very thorough but relatively readable [4]! Oh, and there's a Google project, Logica, to do Datalog over Postgres databases [5].

[1]: https://typedb.com/ [2]: https://terminusdb.com/ [3]: http://www.fdi.ucm.es/profesor/fernan/des/ [4]: http://webdam.inria.fr/Alice/ [5]: https://github.com/evgskv/logica

burakemir

Mangle is a language that includes "textbook datalog" as a subset https://github.com/google/mangle ; like any real-world datalog language, it extends datalog with various facilities to make it practical.

It was discussed on HN https://news.ycombinator.com/item?id=33756800 and is implemented in go. There is the beginnings of a Rust implementation meanwhile.

If you are looking for datalog in the textbooks, here are some references: https://github.com/google/mangle/blob/main/docs/bibliography...

A graph-DB short intro to datalog: just like the edges of a graph can be represented as a simple table (src, target), you can consider a database tuple or a datalog or prolog fact foo(x1, ..., xN) as a "generalized edge." The nice thing about datalog is that one can express connections elegantly as "foo(...X...), bar(...X...)" (a conjunction, X being a "node"), whereas in the SQL world one has to deal with a clumsy JOIN statement to express the same thing.

sirwhinesalot

Don't have any interesting references, sorry. My reasoning is mainly one of simplicity and power. In SQL you need to think in terms of tables, inner joins, outer joins, foreign keys, etc., whereas in Datalog you do everything with relations, as in Prolog.

Not only is it conceptually much simpler, it's also a "pit of success" situation as thinking in terms of relations instead of tables leads you towards normal forms by default.

Add the ability to automatically derive new facts based on rules and it just wins by a country mile. I recommend giving Soufflé a try.

I haven't worked with GraphDBs enough to comment on that.

greenavocado

Prolog and Datalog example (they are identical in this case)

    % Facts
    parent(john, mary).
    parent(mary, ann).
    parent(mary, tom).

    % Rules
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

    % Query
    ?- ancestor(john, X).
The Prolog code looks identical to Datalog but the execution model is different. Prolog uses depth-first search and backtracking, which can lead to infinite loops if the rules are not carefully ordered.

Datalog starts by evaluating all possible combinations of facts and rules. It builds a bottom-up derivation of all possible facts:

a. First, it derives all direct parent relationships.

b. Then, it applies the ancestor rules iteratively until no new facts can be derived.

For the query ancestor(john, X):

It returns all X that satisfy the ancestor relationship with john. This includes mary, ann, and tom. The order of rules doesn't affect the result or termination. Datalog guarantees termination because it operates on a finite set of possible facts.
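The bottom-up evaluation just described can be sketched in a few lines of Python (naive fixpoint iteration, purely illustrative — real engines use semi-naive evaluation and indexing):

```python
# Keep applying the rules to the known facts until nothing new is derived.
parent = {("john", "mary"), ("mary", "ann"), ("mary", "tom")}

def ancestors(parent):
    derived = set(parent)  # rule 1: ancestor(X, Y) :- parent(X, Y).
    while True:
        # rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
        new = {(x, z)
               for (x, y) in parent
               for (y2, z) in derived if y == y2}
        if new <= derived:          # fixpoint reached: no new facts
            return derived
        derived |= new

print(sorted(t for t in ancestors(parent) if t[0] == "john"))
# → [('john', 'ann'), ('john', 'mary'), ('john', 'tom')]
```

Because the set of derivable facts over a finite database is finite and only grows, this loop always terminates, regardless of rule order.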

Prolog uses a top-down, depth-first search strategy with backtracking.

For the query ancestor(john, X):

a. It first tries to satisfy parent(john, X). This succeeds with X = mary.

b. It then backtracks and tries the second rule: It satisfies parent(john, Y) with Y = mary. Then recursively calls ancestor(mary, X).

c. This process continues, exploring the tree depth-first.

Prolog will find solutions in this order: mary, ann, tom.

The order of clauses can affect both the order of results and termination. With this particular program, reordering happens to be safe because the recursion is guarded by a parent/2 goal, but a left-recursive formulation (e.g. ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).) sends Prolog into an infinite loop. Prolog doesn't guarantee termination, especially with recursive rules.

SQL is more verbose. The equivalent of the Datalog/Prolog example above is:

    -- Create and populate the table
    CREATE TABLE Parent (
        parent VARCHAR(50),
        child VARCHAR(50)
    );

    INSERT INTO Parent VALUES ('john', 'mary');
    INSERT INTO Parent VALUES ('mary', 'ann');
    INSERT INTO Parent VALUES ('mary', 'tom');

    -- Recursive query to find ancestors
    WITH RECURSIVE Ancestor AS (
        SELECT parent, child
        FROM Parent
        UNION ALL
        SELECT a.parent, p.child
        FROM Ancestor a
        JOIN Parent p ON a.child = p.parent
    )
    SELECT DISTINCT parent AS ancestor
    FROM Ancestor
    WHERE child IN ('ann', 'tom');
This is a more interesting example of how one might use Datalog on a large dataset:

    % Define the base relation
    friend(Person1, Person2).

    % Define friend-of-friend relation
    friend_of_friend(X, Z) :- friend(X, Y), friend(Y, Z), X != Z.

    % Define potential friend recommendation
    % (friend of friend who is not already a friend)
    recommend_friend(X, Z) :- friend_of_friend(X, Z), not friend(X, Z).

    % Count mutual friends for recommendations
    mutual_friend_count(X, Z, Count) :- 
        recommend_friend(X, Z),
        Count = count{Y : friend(X, Y), friend(Y, Z)}.

    % Query to get top friend recommendations for a person
    top_recommendations(Person, RecommendedFriend, MutualCount) :-
        mutual_friend_count(Person, RecommendedFriend, MutualCount),
        MutualCount >= 5,
        MutualCount = max{C : mutual_friend_count(Person, _, C)}.
The equivalent Postgres example would be:

    WITH RECURSIVE
    -- Base friend relation
    friends AS (
        SELECT DISTINCT person1, person2
        FROM friendship
        UNION
        SELECT person2, person1
        FROM friendship
    ),

    -- Friend of friend relation
    friend_of_friend AS (
        SELECT f1.person1 AS person, f2.person2 AS friend_of_friend
        FROM friends f1
        JOIN friends f2 ON f1.person2 = f2.person1
        WHERE f1.person1 <> f2.person2
    ),

    -- Potential friend recommendations
    potential_recommendations AS (
        SELECT fof.person, fof.friend_of_friend, 
            COUNT(*) AS mutual_friend_count
        FROM friend_of_friend fof
        LEFT JOIN friends f ON fof.person = f.person1 AND fof.friend_of_friend = f.person2
        WHERE f.person1 IS NULL  -- Ensure they're not already friends
        GROUP BY fof.person, fof.friend_of_friend
        HAVING COUNT(*) >= 5  -- Minimum mutual friends threshold
    ),

    -- Rank recommendations
    ranked_recommendations AS (
        SELECT person, friend_of_friend, mutual_friend_count,
            RANK() OVER (PARTITION BY person ORDER BY mutual_friend_count DESC) as rank
        FROM potential_recommendations
    )

    -- Get top recommendations
    SELECT person, friend_of_friend, mutual_friend_count
    FROM ranked_recommendations
    WHERE rank = 1;
Full example you can run yourself: https://onecompiler.com/postgresql/42khbswat

dkarl

> Prolog uses depth-first search and backtracking, which can lead to infinite loops if the rules are not carefully ordered

Is this an issue in practice? Most languages can create programs with infinite loops, but it's easy to spot in code reviews. It's been over a decade since I encountered an infinite loop in production in the backend. Just wondering if the same is true for Prolog.

dmpk2k

How does the Datalog approach compare with RETE?

worldsayshi

Are there any production ready open source databases using it?

networked

DataScript, Datahike, Datalevin, and XTDB 1.x are open-source. (XTDB 2.x is also open-source but has switched from Datalog to its own query language and SQL.) DataScript, Datalevin, and XTDB have been used in production; not sure about Datahike. All of these databases come from the Clojure community and target Clojure as the primary language. The XTDB team has published a comparison matrix at https://clojurelog.github.io/.

Aside: I write a lot more Python than Clojure, and I wish someone ported Datalevin/Datahike/persistent DataScript to Python. I'd try it as an alternative to SQLite. I suspect with thoughtful API design, an embedded Datalog could feel organic in Python. It might be easier to prototype with than SQLite. There are Datalog and miniKanren implementations for Python, but they are not designed as an on-disk database. PyCozo might be the closest thing that exists. (A sibling comment https://news.ycombinator.com/item?id=40995652 already mentions Cozo.)

cmrdporcupine

Not sure if "production ready" but it's worth looking at Cozo:

https://github.com/cozodb/cozo

Has a dialect of Datalog + some vector support. Multiple storage engines for backend including SQLite, so if your concern is data stability that seems like a reasonable, proven option.

refset

Compiling Datalog to SQL with Logica is possibly the easiest path if you need a production ready open source Datalog setup (i.e. choose your favourite managed Postgres provider): https://logica.dev/

sirwhinesalot

Datomic uses Datalog with a weird clojure syntax instead of the usual prolog-like syntax.

worldsayshi

Not open source though?

felixyz

Shameless plug: you should check out my podcast The Search Space for a view of the broader landscape of Prolog and logic programming: https://thesearch.space/

I don't publish episodes often but I have a lot of good interviewees lined up :)

In general, I would advise you to look beyond Prolog and explore Answer Set Programming, the Picat language, and the connections between logic programming and databases (SQL, RDF or otherwise). Not instead of Prolog, but in parallel. Prolog is awesome!

duranga1234

I love your podcast! I wish you published episodes more often!

I particularly enjoyed the first episode, the conversation with Robert Kowalski.

harperlee

Good to know there is further content lined up! I’m subscribed and eagerly waiting for it!

forks

I'll second the plug: it's an excellent podcast

agumonkey

thanks to the thread for letting me find you, and to you for making the interviews

overclock351

ASP is in another uni course of mine ;). I'll check the podcast, thanks

