
ncruces

It's not really signed vs unsigned that's the issue, IMO. It's (mostly, in C) undefined behavior and implicit conversions?

I'm not sure Go is saner just because len is an int. Well, maybe, depending on how you look at it. Defining len to be a signed int means the largest valid len is half your address space, which also means half of all possible indexes are always invalid, which makes some things easier.

But it's really that integer arithmetic is not undefined behavior regardless of signedness, that bounds are checked, and that even indexing your slice with an int64 on a 32-bit CPU does the full correct bounds check. In fact, you can use any integer type as an index.

Given all of the above, indexing with a uint or an int makes no practical difference. In either case, the bounds check is a single unsigned < len compare (despite the fact that len is signed).
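
A minimal C sketch of that single-compare bounds check (the helper `get` and its error value are just for illustration; it assumes len is non-negative, which a signed size type guarantees for all valid sizes):

    #include <stddef.h>

    int get(const int *a, ptrdiff_t len, ptrdiff_t i) {
        /* One unsigned compare rejects both i < 0 and i >= len:
           a negative i converts to a huge unsigned value. */
        if ((size_t)i < (size_t)len)
            return a[i];
        return -1; /* out of bounds; error value just for illustration */
    }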

What's really painful is trying to handle a full 32-bit address space with 32-bit addresses and sizes, like in Wasm; you need 33-bit math. So in a sense, limiting sizes to 31 bits (signed) does help. But at the language level, IMO, the rest matters more.

uecker

For signed overflow we have sanitizers, and for conversions C compilers have warnings. Bounds checking can also be done with sanitizers (but is a bit more tricky). So no, I do not think the undefined behavior is really a big problem. In fact, it helps us find the problem because every overflow can be considered a programming error.

Errors due to unsigned wraparound are a much bigger issue, because they lead to subtle bugs where neither automatic warnings nor sanitizers help, exactly because the behavior is well-defined and no automatic tool can tell whether it is intended or wrong.
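
A C sketch of the kind of silent bug meant here (`bytes_left` is a hypothetical helper):

    #include <stddef.h>

    /* If a bug ever makes used > cap, the subtraction wraps to a huge
       value instead of going negative. -fsanitize=undefined reports
       nothing, because unsigned wraparound is defined behavior. The
       signed version would simply go negative, which is trivial to
       assert on and which -fsanitize=signed-integer-overflow can
       guard at the actual overflow boundary. */
    size_t bytes_left(size_t cap, size_t used) {
        return cap - used;
    }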

tialaramex

> Errors due to unsigned wraparound are a much bigger issue

This is a type design mistake. The unsigned integers should not wrap by default. It makes absolute sense, given all the constraints and the fact that it's doing New Jersey "implementation simplicity dominates" design, that K&R C only provides a wrapping unsigned type, but that's an excuse for K&R C, which is a 1970s programming language.

The excuse gets shakier and shakier the further you move past that. C3 even named these types differently, so they're certainly under no obligation to provide the wrapping unsigned integers as if that's just magically what you mean. In most cases it's not what you mean. The excuse given in the article is way too thin.

Rust's Wrapping<u32> is the same thing as the wrapping 32-bit unsigned integer in C or C++ today, but most people don't use it, because they do not actually want a wrapping 32-bit unsigned integer. This is another "spelling matters" ergonomics case, like the choice to name the brutally fast but unstable general comparison sort [T]::sort_unstable, whereas both C and C++ leave the noob who didn't know about sort stability to find out for themselves, because they name this just "sort" and you get to keep both halves when you break things...

uecker

Unsigned is certainly a misnomer for a wrapping type. That does not mean it is a type design mistake. And I agree that people should not use it much.

But what I do not believe is that there is a real need for a non-wrapping non-negative integer type.

duped

> The unsigned integers should not wrap by default.

What would you do instead?

ncruces

Do you always run with those sanitizers in place?

Just this week I've had a C compiler silently delete an entire function call because of UB (infinite loop without side effects). Took me a day to figure out. So that's a problem for me.
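
For reference, this is roughly the shape of that trap (a sketch, not the actual code): in C11, a loop whose controlling expression is not a constant expression, and whose body has no side effects, may be assumed by the compiler to terminate, and clang in particular optimizes accordingly.

    /* Not volatile, no side effects: the compiler may assume this
       loop terminates, and can delete it, along with code that was
       supposed to be guarded by it. */
    void wait_for_flag(int *flag) {
        while (*flag == 0) { }
    }

    /* One fix: volatile reads count as side effects. */
    void wait_for_flag_fixed(volatile int *flag) {
        while (*flag == 0) { }
    }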

I don't think I've ever had a hard-to-debug issue in Go because of signed/unsigned wraparound. Particularly not a memory issue.

If anything, and here I guess I agree with the article, I wish Go had implicit conversions to wider types, to make the problematic ones stand out.

I guess the reason it doesn't is that they're different named types, which would be a problem when you create a named type for the purpose of forcing explicit type conversions. But maybe the default ones could implicitly implement a numeric tower, where exact conversions can be implicit.

uecker

That depends. But some sanitizers are cheap enough that you can usually always run them.

Regarding infinite loops, C++ and C differ, with C++ being more aggressive. But compilers also differ, with clang being more aggressive. https://godbolt.org/z/Moe6zYKqo

In general, I do not recommend using clang if you worry about UB. gcc is a bit more reasonable and also has better warnings.

duped

> In fact, it helps us find the problem because every overflow can be considered a programming error.

High-performance, lock-free FIFOs/channels are commonly implemented in a way that requires overflow.
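
A sketch of the usual pattern (single-threaded here; a real lock-free version adds atomics): head and tail are free-running counters that are expected to wrap, and the fill level stays correct across the wrap because unsigned subtraction is modular.

    #include <stdint.h>

    #define CAP 1024u  /* power of two; hypothetical capacity */

    struct fifo {
        uint32_t head;  /* total items ever pushed */
        uint32_t tail;  /* total items ever popped */
        int buf[CAP];
    };

    static uint32_t fifo_count(const struct fifo *f) {
        return f->head - f->tail;  /* correct even after head wraps */
    }

    static int fifo_push(struct fifo *f, int v) {
        if (fifo_count(f) == CAP) return 0;  /* full */
        f->buf[f->head++ & (CAP - 1)] = v;   /* mask counter to index */
        return 1;
    }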

kevin_thibedeau

Systems programmers love to hate on unsigned integers. Generations have been infected with the Java world model that integers have to be pretend number lines centered on zero. Guess what: you still have boundary conditions to deal with. There are times when you really, really need to use the full word range without negative values. This happens more often with low-level programming and machines with small word sizes, something fewer people are engaged in. It doesn't need to be the default. Ada has them sequestered as modular types, but they're available to use when needed.

pjmlp

Java doesn't have unsigned primitive types because James Gosling did a series of interviews at Sun among "expert" C devs, and all of them got the C language rules for unsigned arithmetic wrong.

Yes, I miss them in Java as primitives; however, there are utility methods for unsigned arithmetic that get it right.

jstimpfle

The way he conducted those interviews, and the conclusions he drew from them, may have been flawed. Because the situation now is that C has unsigned types and Java mostly does not.

And despite all the pitfalls, especially around mixing signed and unsigned in C, unsigned types are very useful; I'd in fact say that for low-level programming they are essential.

pjmlp

Doesn't seem to affect the extent to which Java is used across the industry, including many workloads for which, last century, companies would have used C instead.

Books like those on the Yourdon Structured Method were mainly targeted at business C back in the day.

__s

He could've removed implicit signed conversion

layer8

Java has char as an unsigned 16-bit integer type. They should have made byte unsigned as well.

pjmlp

Usually you don't do arithmetic with char in Java; this isn't C's anything-goes culture.

uecker

Having them available is not the issue; using them for sizes and indices is what causes a lot of tricky bugs.

jltsiren

I find it the opposite. Unsigned integers are intuitive, while signed integers are unintuitive and cause a lot of tricky bugs. Especially in languages where signed overflow is undefined behavior.

It's pretty rare to have values that can be negative but are always integers. At least in the work I do. The most common case I encounter is approximations of something related to log probability, such as various scores in dynamic programming and graph algorithms.

Most of the time, when you deal with integers, you need special handling to avoid negative values. Once you get used to thinking about unsigned integers, you quickly develop robust ways of avoiding situations where the values would be negative.

uecker

It is interesting that you find unsigned integers more intuitive. My experience (with students, but also from analyses of CVEs, which give plenty of evidence) is that the opposite is true: signed integers in C are a model of the integers, which have a nice mathematical structure that people learn in elementary school. Yes, this breaks down on overflow, but for that you have to reach very high numbers, and there is very good tooling to debug it. In contrast, unsigned integers in C are modulo arithmetic, which people learn at university, if at all, and get wrong all the time, and the errors are mostly subtle and very difficult to find automatically.

You are right that you often need to constrain an integer to be non-negative or positive, but usually not during arithmetic, rather at certain points in the logic of a program. And there, in my experience, it is better expressed as an assertion.

throwaway894345

Why does an unsigned type for sizes or indices fare worse than a signed type? When do I want the -247th element in an array? When do I have a block that is -10 bytes in size?

charlie90

Because doing subtraction on sizes/indices is common, and signed handles the case where you subtract below 0. Unsigned yields unintuitive results, i.e., unsigned fails silently. For example, looping to the 2nd-to-last item in an array, or getting the index before a given index.
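
The loop case in miniature (C sketch):

    #include <stddef.h>
    #include <stdio.h>

    /* BUG: when n == 0, n - 1 wraps to SIZE_MAX and the loop
       indexes far out of bounds. */
    void print_all_but_last(const int *a, size_t n) {
        for (size_t i = 0; i < n - 1; i++)
            printf("%d\n", a[i]);
    }

    /* Safe phrasing: move the arithmetic to the side that can't wrap. */
    void print_all_but_last_fixed(const int *a, size_t n) {
        for (size_t i = 0; i + 1 < n; i++)
            printf("%d\n", a[i]);
    }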

The source of confusion is that unsigned is a terrible name. Unsigned does not mean non-negative. It's 100% valid to assign a negative value to an unsigned; it just fails silently.

If you want non-negative integers, then you should make a wrapper class that enforces non-negativity at compile and runtime.

uecker

The reason is not that you want a negative index or size, but that you want the computation of the index to be correct, and you want errors to be obvious. Both turn out to be easier with signed types.

kevin_thibedeau

There are (rare) times when you want negative array indices. C lets you index in both directions from a pointer to the middle of an array. That's why array indexing is signed in C. Some libc ctype lookup tables do this. For sizing there is no strong case for negatives, other than to shoehorn sizes into signed operations.

wavemode

> When do I want the -247th element in an array?

You never want any element of an array, except elements within the range [0, array_length). Anything outside of that is undefined behavior.

I think people tend to overthink this. A function which takes an index argument should simply return a result when the index is within the valid range, and error if it's outside of it (regardless of whether it's outside by being too low or too high). It doesn't particularly matter whether the integer is signed.

If you aren't storing 2^64 elements in your array (which you probably aren't - most systems don't even support addressing that much memory) then the only thing unsigned gets you is a bunch of footguns (like those described in the OP article).

im3w1l

Let's say you have two indices into an array: a and b. You want to know how much earlier a is than b so you compute b-a, but as b was in fact the earlier one you get a negative number.

You can deal with this by casting before doing the subtraction, or you can deal with it by storing the indices as signed integers at all times. The latter is more ergonomic at the cost of wasted capacity.
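
Sketched in C (assuming, as real object sizes guarantee, that the indices fit in ptrdiff_t):

    #include <stddef.h>

    /* With size_t, b - a wraps to a huge value when b < a; casting
       to the signed difference type first gives the expected
       negative distance. */
    ptrdiff_t distance(size_t a, size_t b) {
        return (ptrdiff_t)b - (ptrdiff_t)a;
    }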

pron

In Java, unsigned arithmetic is available through an API and, as you said, it is pretty much only needed when marshalling to certain wire protocols or for FFI. Built-in unsigned types are useful primarily for bitfields or similar tiny types with up to 6 bits or so.

pjmlp

I miss them for doing bit juggling like file headers or networking packets.

However I do concede writing a few helper methods isn't that much of a burden.

pron

I think all the unsigned arithmetic you need is already offered. Unsigned shift right is an operator; the primitive wrappers offer compareUnsigned, divideUnsigned, and remainderUnsigned, as well as conversion methods; unsigned exponentiation is offered in Math (because signed types in Java wrap, there's no need for special unsigned addition/subtraction).

tialaramex

> Systems programmers love to hate on unsigned integers

I don't see this hate in Rust. I think this is a big thing in the C-related languages, and that the author has chosen to pretend that's the same for any "systems language" but it is not.

adev_

> I don't see this hate in Rust.

Nor is it in C++. Most default flag setups will report implicit numeric conversions as errors in C++.

Unsigned/signed is mainly an issue if your language chose silent implicit conversions.

Which, honestly, is terrible design beyond just signed/unsigned.

einpoklum

> There are times when you really really need to use the full word range without negative values.

There are a few of those, but that is the niche case. Certainly when we're talking about 64-bit size types. And if you want to cater to smaller size types, then just template over the size type. Or, OK, some other trick if it's C rather than C++.

pixelesque

Sometimes (and very often in some scenarios/industries, i.e. HPC for graphics and simulation, with indices for things like points, vertices, primvars, voxels, etc.) you also want good efficiency in the size of the datatype, for memory/cache performance reasons: you're storing millions of them, and you need random addressing (so you can't really bit-pack to, say, 36 bits, at least not without overhead compared to native types, which are really needed for maximum speed without any branching).

Losing half the range to make them signed, when you only care about positive values 95% of the time (and in the rare cases when you do any modulo on top of them, you can cast or write wrappers), is just a bad trade-off.

Yes, you've still then only doubled the range to 2^32, and you'll still hit it at some point, but that extra bit can make a lot of difference from a memory/cache efficiency standpoint without jumping to 64-bit.

So uint32_t is very often a good sweet spot for sizes: int32_t is sometimes too small, and (u)int64_t is generally not needed and too wasteful.
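
To make the footprint argument concrete (hypothetical mesh types):

    #include <stdint.h>

    struct tri32 { uint32_t v[3]; };  /* 12 bytes; indexes ~4.3e9 vertices */
    struct tri64 { uint64_t v[3]; };  /* 24 bytes for the same triangle */

Across tens of millions of triangles, that factor of two decides what stays in cache.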

marshray

As someone who generally believed in Moore's law, i.e., accepted the notion that transistor counts grow exponentially, I was surprised at how long the difference between a 2 GiB address space and a 3 GiB address space remained relevant in practice.

In theory, it should have been at most a year. In practice, the Windows XP /3GB boot switch (which allocates 3 GiB of virtual address space to user mode and 1 GiB to the kernel, instead of the usual 2 and 2) was relevant for many years.

einpoklum

> HPC for graphics and simulation with indices

Those are not sizes of data structures.

> Losing half the range

It's not part of the range of sizes they can use with any typical data structure.

> Losing half the range to make them signed when you only care about positive values 95% of the time is just a bad trade-off.

It's the right choice for sizes in the standard library (in C++) or standard-ish/popular libraries in C. And, again, it's the wrong type: for example, even if you only care about positive values, their difference is not necessarily positive.

jstimpfle

Signed quantities are a good default, and are easier to deal with when doing subtractions and when mixing integers of different widths. (And "integers" includes pointers here, so it's very hard not to have different widths.)

However, unsigned integers are still very useful, I'd say essential, in low-level programming, for example when doing buffer management and memory allocation:

   - bitwise operations
   - modular arithmetic implemented with just ++ and -- (ring buffers, e.g. TCP sequence numbers)
   - using the full range of an 8-bit, 16-bit, or 32-bit datatype (quite common)
   - splitting a positive quantity into two smaller quantities, e.g. using a 16-bit index as an 8-bit major index plus an 8-bit minor index (sketched below)
   - etc.
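
A C sketch of that last splitting bullet; with unsigned types the shifts and masks do exactly what they look like:

    #include <stdint.h>

    void split16(uint16_t idx, uint8_t *major, uint8_t *minor) {
        *major = (uint8_t)(idx >> 8);    /* high byte */
        *minor = (uint8_t)(idx & 0xFF);  /* low byte */
    }

    uint16_t join16(uint8_t major, uint8_t minor) {
        return (uint16_t)(((uint16_t)major << 8) | minor);
    }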

Don't forget that the signed vs. unsigned integer distinction is in some sense artificial. Machines make you put the distinction in the CPU instructions themselves; they don't track a "signed" property as part of values. And it can make sense to use the same value in different ways. However, C and many other languages decided to put a tag on the type, so operator syntax can be agnostic to signedness and the compiler will choose the appropriate CPU instruction.

zozbot234

> However, C and many other languages decided to put a tag on the type, so operator syntax can be agnostic to signedness, and the compiler will choose the appropriate CPU instruction.

It mostly comes up with widening conversions (signed numbers must extend the sign bit, unsigned numbers set the extra bits to zero), unsigned/signed divide (and multiply, in case of a widened result) and greater than/less than comparisons (and of course geq/leq). (With signed comparison, A is less than B if by starting from INT_MIN (included) and iteratively incrementing you reach A before B. With unsigned comparison, A is less than B if by starting from 0 (included) and iteratively incrementing you reach A before B. This way of phrasing comparison as range inclusion is convenient, since it works around the wrapping concern in a rather clean way.)
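
Concretely (C sketch, assuming a typical 32-bit int):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int8_t  s = -1;    /* bit pattern 0xFF */
        uint8_t u = 0xFF;  /* same bit pattern */
        printf("%d %d\n", (int)s, (int)u);  /* -1 255: sign- vs zero-extension */

        /* Same 32-bit pattern, different comparison semantics: */
        printf("%d\n", INT32_MIN < 1);             /* 1: signed compare */
        printf("%d\n", (uint32_t)INT32_MIN < 1u);  /* 0: 0x80000000 is huge unsigned */
        return 0;
    }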

Groxx

>If sizes are unsigned, like in C, C++, Rust and Zig – then it follows that anything involving indexing into data will need to either be all unsigned or require casts. With C’s loose semantics, the problem is largely swept under the rug, but for Rust it meant that you’d regularly need to cast back and forth when dealing with sizes.

TBH I've had very little struggle with this at all. As long as you keep your values and types separate, the unsigned type that you got a number from originally feeds just fine into the unsigned type that you send it to next. Needing casting then becomes a very clear sign that you're mixing sources and there be dragons, back up and fix the types or stop using the wrong variable. It's a low-cost early bug detector.

Implicitly casting between integer types though... yeah, that's an absolute freaking nightmare.

kibwen

> As long as you keep your values and types separate, the unsigned type that you got a number from originally feeds just fine into the unsigned type that you send it to next.

Part of me feels like direct numeric array indexing is one of the last holdouts of a low-level operation screaming for some standardized higher-level abstraction. I'm not saying to get rid of the ability to index directly, but if the error-resistant design here is to use numeric array indices as though they were opaque handles, maybe we just need to start building support for opaque handles into our languages, rather than just handing out numeric indices and using thoughts and prayers to stop people from doing math on them.

For analogy, it's like how standardizing on iterators means that Rust code rarely needs to index directly, in contrast with C, where the design of for-loops encourages indexing. But Rust could still benefit from opaque handles to take care of those in-between cases where iterators are too limiting and yet where numeric indices are more powerful than needed.

bradrn

> Part of me feels like direct numeric array indexing is one of the last holdouts of a low-level operation screaming for some standardized higher-level abstraction.

This paragraph reminds me a bit of Dex: https://arxiv.org/abs/2104.05372

dnmc

Maybe this isn't what you're suggesting, but it's already possible to make an interface that prevents callers from doing math on indices in Rust — just return a struct that has a private member for the index. The caller can pass the value back at which point you can unwrap it and do index arithmetic.

kibwen

More than that, in theory an opaque handle would also do things like statically prevent a handle taken from one array from being used to access a different array. I feel like this should be possible in Rust with type-level shenanigans (e.g. GhostCell).

Groxx

You do need to store those if they're totally opaque though, e.g. how do you represent a range without holding N tokens? Often I like it, and it allows changing the underlying storage to be e.g. generational with ~no changes, but it kinda can't be enforced for runtime-cost reasons.

Using a unique type per array instance though, that I quite like, and in index-heavy code I often build that explicitly (e.g. in Go) because it makes writing the code trivially correct in many cases. Indices are only very rarely shared between arrays, and exceptions can and should look different because they need careful scrutiny compared to "this is definitely intended for that array".

LegionMammal978

> But what about the range? While it’s true that you get twice the range, surprisingly often the code in the range above signed-int max is quite bug-ridden. Any code doing something like (2U * index) / 2U in this range will have quite the surprise coming.

Alas, (2S * signed_index) / 2S will similarly result in surprises the moment signed_index hits half the signed-int max. There's no free lunch when trying to cheat the integer ranges.
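
Both surprises, side by side (C sketch, assuming 32-bit int):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        uint32_t i = 0x90000000u;     /* above INT_MAX */
        uint32_t r = (2u * i) / 2u;   /* 2u * i wraps mod 2^32 */
        printf("%" PRIu32 "\n", r);   /* 268435456 (0x10000000), not i */

        /* The signed twin: 2 * j overflows for j > INT_MAX / 2, which
           in C is undefined behavior rather than a wrong-but-defined
           answer. */
        return 0;
    }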

lerno

The difference is that in the unsigned case you get a seemingly plausible value, and in the signed case you get a negative value which you can be sure is wrong. This is the problem.


jltsiren

In some languages, the signed version is undefined behavior. You may get a negative value, INT_MAX / 2, or an error. Or the compiler may detect the undefined behavior, which according to the standard cannot happen, and mutilate your code in unexpected ways.

jcmoyer

There is a really convincing set of arguments against this idea by Robert Seacord[1]. I used to be in the signed size camp, but I've come around to preferring unsigned as much as possible because it's much easier to reason about. I think there are far more footguns than people realize when it comes to signed integers.

[1] https://www.youtube.com/watch?v=82jVpEmAEV4

ok123456

The only real benefits of unsigned math are that overflow is generally defined as a simple wrap (like an odometer) and that it doubles the range. Relying on that doubled range for bounds is flirting with disaster, though.

The downside is a pervasive, constant footgun every time you are dealing with indices.

norir

In my reading, what Stroustrup is saying is that, given other problems in C/C++, signed sizes are less bad than unsigned, but both have clear and significant deficiencies. A new language doesn't have to inherit all of these deficiencies.

ok123456

No. He says that signed/unsigned arithmetic is a universal problem. And in the context of std::span, using signed arithmetic is the correct choice rather than shoehorning in size_t to make it more cosmetically consistent with the rest of the STL.

alberto-m

I might be a contrarian in that I actually like using unsigned integers for sizes and indexes. In my experience, most of their pitfalls can be prevented by treating any subtraction involving them like a `reinterpret_cast`, i.e.:

* Do your utmost to rewrite the code in order to avoid the subtraction (e.g. reordering inequalities to transform subtractions into additions).

* If that's not possible, think very hard about every possible edge case: you most certainly need an additional `if` to deal with them.

* When analyzing other people's code during troubleshooting or merge reviews, assume any formula involving an unsigned integer and a minus sign is wrong.

deathanatos

> The former is easier to define, but has the downside of essentially “silencing warnings”. Let’s say the code was originally written to cast an u16 to u32, but later the variable type changes from u16 to u64 and the cast is now actually silently truncating things. Here we have casts becoming a sort of “silence all warnings”.

Well … we even mention Rust in the paragraph right before this. In Rust, you can widen a u16 to a u32 this way:

  let bigger: u32 = x.into();
or

  let bigger = u32::from(x);
The conversion `from` is infallible, because a u16 always fits in a u32. There is no `from(u64) -> u32`, because as the article notes, that would truncate, so if we did change the type to u64, the code would now fail to compile. (And we'd be forced to figure out what we want to do here.)

(There are fallible conversions, too, in the form of try_from, that can do u64 → u32, but will return an error if the conversion fails.)

Similarly, for,

  for (uint x = 10; x >= 0; x--) // Infinite loop!
This is why I think implicit wrapping is a bad idea in language design. Even Rust went down the wrong path (in my mind) there, and I think has worked back towards something safer in recent years. But Rust provides a decent example here too; this is pseudo-code:

  for (uint x = 10; x.is_some(); x = x.checked_sub(1))
Where `checked_sub` returns `None` instead of wrapping, providing us a means to detect the stopping point. So, something like that. (Though you'd probably also want to destructure the option into the uint for use inside the loop.) Of course, higher-level stuff always wins out here, I think, and in Rust you wouldn't write the above; instead something like,

  for x in (0..=10).rev()
(And even then, if we need indexes; usually, one would prefer to iterate through a slice or something like that. The higher-level concept of iterators usually dispenses with most or all uses of indexes, and in the rare cases when needed, most languages provide something like `enumerate` to get them from the iterator.)

flohofwoe

Finally a language doing the right thing :)

My two rules of thumb for C code are:

1. use signed integers for everything except bit-wise operations and modulo math (e.g. "almost always signed")

2. make implicit sign conversion an error via `-Werror -Wsign-conversion`

The problem with making sizes and indices unsigned (even if they can't be negative) is that you might want to add negative offsets, and that either requires explicit casting in languages without implicit signed/unsigned conversion (additional hassle and reduced readability), or is a footgun area in languages with implicit sign conversion.
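
The negative-offset case in miniature; with -Wsign-conversion promoted to an error, the mixed expression only compiles once you spell out the cast (bounds checks omitted, `peek` hypothetical):

    #include <stddef.h>

    int peek(const int *buf, size_t pos, ptrdiff_t offset) {
        return buf[(ptrdiff_t)pos + offset];  /* explicit, but noisy */
    }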

pseudohadamard

But a blog doing the wrong thing. Who decided that light grey on white was a great way to present text?

For anyone else struggling to read it, Ctrl-A will make it legible.

lucas_t_a

It did decide that JavaScript is necessary to enable correct contrast.

Panzerschrek

In my programming language I use unsigned sizes. Signed sizes make no sense: sizes can't be negative. Unsigned sizes provide a larger range and don't waste an extra bit. Range checking is simpler, since no lower-bound comparison is needed. Also, some operations, like division and modulo, are faster for unsigned integers.

Using signed sizes adds a lot of footguns and performance degradations and in exchange gives only small code simplifications in rare and niche cases.

ks2048

I know language designers have a lot of trade-offs to consider... but I would say that if you know a value will logically always be >= 0, it's better to have a type that reflects that.

The potential bugs listed could be prevented by, e.g., making "x--" refuse to compile without an explicitly supplied case for x == 0, or by using more verbose methods like "decrement_with_wrap".

The trade-off is a loss of C-like conciseness, but the result is safer and more explicit.
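
A C sketch of what that might look like (`checked_dec` is a hypothetical helper, not an existing API):

    #include <stdbool.h>

    /* The caller is forced to handle x == 0 through the return value
       instead of getting a silent wrap to UINT_MAX. */
    static bool checked_dec(unsigned *x) {
        if (*x == 0) return false;  /* the explicit underflow case */
        (*x)--;
        return true;
    }

    /* Usage: visits 10, 9, ..., 0, then stops. use() is hypothetical.
       unsigned x = 10;
       do { use(x); } while (checked_dec(&x));
    */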

pron

> But I would say if you know a value will logically always be >= 0, better to have a type that reflects that.

Except that's not quite what unsigned types do. They are not (just) numbers that will always be >= 0, but numbers where the value of `1 - 2` is > 1 and depends on the type. This is not an accident but how these types are intended to behave because what they express is that you want modular arithmetic, not non-negative integers.

> e.g. "x--" won't compile without explicitly supplying a case for x==0

If you want non-negative types (which, again, is not what unsigned types are for) you also run into difficulties with `x - y`. It's not so simple.

There are many useful constraints that you might think it's "better to have a type that reflects that" - what about variables that can only ever be even? - but it's often easier said than done.

norir

This is true, which means that a language has to be designed from the ground up to deal with these problems, or there will always be inscrutable bugs due to misuse of arithmetic results. A simple example in a C-like language would be that the following function would not compile:

    unsigned foo(unsigned a, unsigned b) { return a - b; }
but this would:

    unsigned foo(unsigned a, unsigned b) {
      auto c = a - b;
      return c >= 0 ? c : 0;
    }
Assuming 32-bit unsigned and int, the type of c should be computed as the range [-0xffffffff, 0xffffffff], which is different from int's [-0x80000000, 0x7fffffff]. Subtle things like this are why I think it is generally a mistake to annotate the type of the result of a numerical calculation when the compiler can compute it precisely for you.
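
For contrast, in today's C the function above would not do what it looks like: a - b wraps, c deduces to unsigned, and c >= 0 is a tautology (compilers do warn about it). The saturating subtraction has to be written with the test first, e.g.:

    /* Saturating subtraction in plain C: no wrap, no sign juggling. */
    unsigned sat_sub(unsigned a, unsigned b) {
        return a > b ? a - b : 0;
    }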

pron

First, your code is about having unsigned types represent the notion of non-negative values, but this is not the intent of unsigned types in C/C++. They represent modular arithmetic types.

Second, it's not as simple as you present it. What is the type of c? Obviously it needs to be signed so that you can compare it to zero, but how many bits does it have? What if a and b are 64-bit? What if they're 128-bit?

You could do it without storing the value and by carrying a proof that a >= b, but that is not so simple, either (I mean, the compiler can add runtime checks, but languages like C don't like invisible operations).

Groxx

That's true for signed numbers too though? `int_min - 2 > int_min`

I agree they're a bit more error-prone in practice, but I suspect a huge part of that is because people are so used to signed numbers, since they're usually the default (and thus most examples assume signed, if they handle extreme values correctly at all; much example code does not). And, legitimately, zero is a more commonly encountered value... but that can push errors to occur sooner, which is generally a desirable thing.

pron

> That's true for signed numbers too though? `int_min - 2 > int_min`

As someone else already pointed out, that's undefined behaviour in C and C++ (in Java they wrap), but the more important point is that the vast majority of integers used in programs are much closer to zero than to int_min/max. Sizes of buffers etc. tend to be particularly small. There are, of course, overflow problems with signed integers, but they're not as common.

oasisaimlessly

> That's true for signed numbers too though? `int_min - 2 > int_min`

No, that's undefined behavior in C, and if you care about correctness, you run at least your test suite in CI with -ftrapv so it turns into an abort().

ks2048

> you also run into difficulties with `x - y`.

If you have "uint x" and "uint y", then for "x - y", the programmer should explicitly write two cases (a) no underflow, i.e. x >= y, and (b) underflow, x < y. The syntax for that... that is an open question.

> what about variables that can only ever be even

Yes, maybe you should have an "EvenInt" type, if that is important. Maybe you should be able to declare a variable to be 7...13, just like a "uint8" declares something 0...255. Of course, the type-checker can get complicated, and it will perhaps simply fail to type-check some things. But having compile-time constraints on what you know your variables will be is good, IMHO.

renox

Note that in Zig, unsigned integers have the same semantics as signed integers on overflow (trap, or wrap, or UB). You also have operators that provide wrapping. That is the correct solution.

mamcx

I think it should be like in Pascal, where you have size ranges as types, and then you can declare that a collection falls in this range (and, very nicely, you can make it an enum):

https://www.freepascal.org/docs-html/ref/refsu4.html

elch

And none of those "amenities" existed anymore in Oberon.

shirro

With all respect to Christoffer and Bjarne and the many others much smarter and more experienced than me who have said similar things, I am far from convinced. Their languages are not memory safe, and they are either not doing bounds checking or not proving it unnecessary. If iteration is causing underflow or overflow, then perhaps the problem isn't signed or unsigned indexes.

I don't recall similar arguments being made for Pascal or Ada.

Look around at the state of our C++ and C software and all the CVEs: I think we probably shouldn't care about unsigned vs. signed loop indexes, and should move on before regulatory pressure forces us. Please, language designers, give us some interesting alternatives to Rust.
