wvenable
dspillett
> I'm not sure why anyone would choose varchar for a column in 2026
The same string takes roughly half the storage space, meaning more rows per page, a smaller working set needed in memory for the same queries, and less IO. Any indexes on those columns will be similarly smaller. So if you are storing things that you know won't break out of the standard ASCII set⁰, stick with [VAR]CHARs¹; otherwise use N[VAR]CHARs.
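The size trade-off is easy to see outside the database. A minimal C# sketch comparing encoded byte counts (illustrative only; SQL Server adds its own per-value length overhead on top of these raw encodings):

```csharp
using System;
using System.Text;

class EncodingSizes
{
    static void Main()
    {
        string ascii = "hello world";  // pure ASCII text
        string cjk = "日本語テキスト"; // BMP CJK text, 7 characters

        // ASCII text: 1 byte/char in UTF-8 (varchar), 2 in UTF-16 (nvarchar).
        Console.WriteLine(Encoding.UTF8.GetByteCount(ascii));    // 11
        Console.WriteLine(Encoding.Unicode.GetByteCount(ascii)); // 22

        // CJK text: the trade-off reverses, 3 bytes/char in UTF-8 vs 2 in UTF-16.
        Console.WriteLine(Encoding.UTF8.GetByteCount(cjk));    // 21
        Console.WriteLine(Encoding.Unicode.GetByteCount(cjk)); // 14
    }
}
```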
Of course if you can guarantee that your stuff will be used on recent enough SQL Server versions that are configured to support UTF8 collations, then default to that instead unless you expect data in a character set where that might increase the data size over UTF16. You'll get the same size benefit for pure ASCII without losing wider character set support.
Furthermore, if you are using row or page compression it doesn't really matter: your wide-character strings will effectively be UTF-8 encoded anyway. But be aware that there is a CPU hit for processing compressed rows and pages on every access, because they remain compressed in memory as well as on disk.
--------
[0] Codes with fixed ranges, etc.
[1] Some would say it the other way around, i.e. “use NVARCHAR if you think there might be any non-ASCII characters”, but defaulting to NVARCHAR and moving to VARCHAR only if you are confident is the safer approach IMO.
gfody
UTF-16 is more efficient if you have non-English text; UTF-8 wastes space on its longer multi-byte sequences. But the real reason to always use nvarchar is that it remains sargable when varchar parameters are implicitly cast to nvarchar.
tialaramex
UTF-16 is maybe better if your text is mostly made of codepoints which need 3 UTF-8 code units but only one (thus 2 bytes) UTF-16 code unit. This is extremely rare for general text and so you definitely shouldn't begin by assuming UTF-16 is a good choice without having collected actual data.
downsplat
The old defense of 16-bit chars, popping up in 2026 still! Utf8 is efficient enough for all general purpose uses.
If you're storing gigabytes of non-Latin-alphabet text, and your systems are constrained enough that it makes a difference, 16-bit is always there. But I'd still recommend anyone starting a system today to not worry and use UTF-8 for everything.
exceptione
What do you mean by non-English text? I don't think "Ä" will be more efficient in UTF-16 than in UTF-8. Or do you mean UTF-16 wins for non-Latin scripts with variable-width encodings? I always had the impression that UTF-8 wins on the vast majority of text, and that for wider scripts it depends on how wide they are whether UTF-16 comes out ahead. On a tangent, I wonder if emojis would fit that bill too.
SigmundA
The non-sargability is an optimizer deficiency IMO. It could attempt the cast just like this article is doing manually in code: if that succeeds, use the index; if it fails, fall back to scanning and casting a million times the other way.
beart
I agree with your first point. I've seen this same issue crop up in several other ORMs.
As to your second point: VARCHAR uses N + 2 bytes whereas NVARCHAR uses N*2 + 2 bytes for storage (at least on SQL Server). The vast majority of character fields in databases I've worked with do not need to store Unicode values.
wvenable
> The vast majority of character fields in databases I've worked with do not need to store unicode values.
This has not been my experience at all. Exactly the opposite, in fact. ASCII is dead.
SigmundA
The vast majority of text fields I see are coded values that are perfectly fine as ASCII, but I deal mostly with English-language systems.
Text fields that users can type into directly, especially multiline ones, tend to need Unicode, but they are far fewer.
SigmundA
To complicate matters SQL Server can do Nvarchar compression, but they should have just done UTF-8 long ago:
https://learn.microsoft.com/en-us/sql/relational-databases/d...
Also UTF-8 is actually just a varchar collation so you don't use nvarchar with that, lol?
_3u10
Generally if it stores user input it needs to support Unicode. That said UTF-8 is probably a way better choice than UTF-16/UCS-2
SigmundA
UTF-8 is a relatively new thing in MSSQL and had lots of issues initially, I agree it's better and should have been implemented in the product long ago.
I have avoided it and have not followed if the issues are fully resolved, I would hope they are.
Dwedit
The one place UTF-16 massively wins is text that would be two bytes as UTF-16, but three bytes as UTF-8. That's mainly Chinese, Japanese, Korean, etc...
croes
Since MS SQL Server 2019, varchar supports Unicode (via UTF-8 collations), so now it's the opposite: you use nvarchar instead of varchar for backwards-compatibility reasons.
jklowden
I’m not sure why the top-rated reply begins by presuming anything about the problem domain. Many domains have a specified language and implied if not explicit collation. Rejecting characters outside that domain is part of the job. There are no emojis listed on the NASDAQ.
paulsutter
UTF-8 solved this completely. It can encode any Unicode code point, and for mostly-ASCII text it takes up almost as little storage as plain ASCII.
UTF-16 is brain-dead and an embarrassment.
wvenable
Blame the Unicode consortium for not coming up with UTF-8 first (or, really, at all). And for assuming that 65,536 code points would be enough for everyone.
So many problems could be solved with a time machine.
kstrauser
The first draft of Unicode was in 1988. Thompson and Pike came up with UTF-8 in 1992, made an RFC in 1998. UTF-16 came along in 1996, made an RFC in 2000.
The time machine would've involved Microsoft saying "it's clear now that UCS-2 was a bad idea, so let's start migrating to something genuinely better".
Dwedit
It gets worse for UTF-16, Windows will let you name files using unpaired surrogates, now you have a filename that exists on your disk that cannot be represented in UTF-8 (nor compliant UTF-16 for that matter). Because of that, there's yet another encoding called WTF-8 that can represent the arbitrary invalid 16-bit values.
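The unpaired-surrogate problem is easy to reproduce in C# without touching the filesystem: a lone surrogate is a perfectly legal .NET (UTF-16) string, but it has no well-formed UTF-8 representation, so the default encoder substitutes U+FFFD. A small sketch (this demonstrates .NET's encoder behavior, not the Windows filename APIs):

```csharp
using System;
using System.Text;

class SurrogateDemo
{
    static void Main()
    {
        // A lone high surrogate: valid as a .NET string, but not a
        // well-formed sequence of Unicode scalar values.
        string lone = "file\uD800name";

        // Encoding.UTF8 uses replacement fallback: the unpaired surrogate
        // becomes U+FFFD (bytes EF BF BD) instead of invalid UTF-8.
        byte[] utf8 = Encoding.UTF8.GetBytes(lone);
        Console.WriteLine(BitConverter.ToString(utf8));
        // 66-69-6C-65-EF-BF-BD-6E-61-6D-65
    }
}
```

WTF-8 exists precisely so such strings can round-trip losslessly instead of being replaced.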
applfanboysbgon
I think this is a rather pertinent showcase of the danger of outsourcing your thinking to LLMs. This article strongly indicates to me that it is LLM-written, and it's likely the LLM diagnosed the issue as being a C# issue. When you don't understand the systems you're building with, all you can do is take the plausible-sounding generated text about what went wrong for granted, and then I suppose regurgitate it on your LLM-generated portfolio website in an ostensible show of your profound architectural knowledge.
ziml77
This is not at all just an LLM thing. I've been working with C# and MS SQL Server for many years and never even considered this could be happening when I use Dapper. There's likely code I have deployed running suboptimally because of this.
And it's not like I don't care about performance. If I see a small query taking more than a fraction of a second when testing in SSMS, or a larger query taking more than a few seconds, I will dig into the query plan and try to make changes to improve it. But for code that I took from testing in SSMS and moved into a Dapper query, I wouldn't have noticed performance issues from that move if the slowdown was never particularly large.
cosmez
This is a common issue, and most developers I worked with are not aware of it until they see the performance issues.
Most people are not aware of how Dapper maps types under the hood; once you know, you start being careful about it.
Nothing to do with LLMs, just plain old learning through mistakes.
keithnz
Actually, LLMs do way better: with Dapper, the LLM generates code that specifies types for strings.
SigmundA
Yes I have run into this regardless of client language and I consider it a defect in the optimizer.
wvenable
I wouldn't consider it a defect in the optimizer; it's doing exactly what it's told to do. It cannot convert an nvarchar to varchar -- that's a narrowing conversion. All it can do is convert the other way and lose the ability to use the index. If you think that there is no danger converting an nvarchar that contains only ASCII to varchar then I have about 70+ different collations that say otherwise.
SigmundA
Can you give an example of what's dangerous about converting an nvarchar containing only ASCII (0-127) and then using the index, otherwise falling back to a scan?
If we simply went to UTF-8 collation using varchar then this wouldn't be an issue either, which is why you would use varchar in 2026, best of both worlds so to speak.
undefined
briHass
I've found and fixed this bug before. There are two other ways to handle it:
Dapper has a static configuration for things like type mappers, and you can change the default mapping for string to use varchar with: Dapper.SqlMapper.AddTypeMap(typeof(string), System.Data.DbType.AnsiString). I typically set that in the app startup, because I avoid NVARCHAR almost entirely (to save the extra byte per character, since I rarely need anything outside of ANSI).
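Dapper also ships a `DbString` type for marking an individual parameter as ANSI, which is handy when only some columns are varchar and you don't want to change the global map. A sketch of both options (requires a live SQL Server connection; the `Users` table and `Email` column are made-up names):

```csharp
using System.Data;
using Dapper;
using Microsoft.Data.SqlClient;

static class VarcharParams
{
    // Option 1: set once at startup. Every plain string parameter
    // is sent as varchar (AnsiString) instead of nvarchar.
    public static void ConfigureGlobally() =>
        SqlMapper.AddTypeMap(typeof(string), DbType.AnsiString);

    // Option 2: per parameter, via DbString, leaving other string
    // parameters as nvarchar.
    public static dynamic FindUser(SqlConnection conn, string email) =>
        conn.QuerySingleOrDefault(
            "SELECT * FROM Users WHERE Email = @Email", // hypothetical schema
            new
            {
                Email = new DbString
                {
                    Value = email,
                    IsAnsi = true, // send as varchar, keeping the index sargable
                    Length = 100   // match the column length to help plan reuse
                }
            });
}
```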
Or, one could use stored procedures. Assuming the procedure takes a parameter of the correct type for your indexed predicate, the conversion happens once when the sproc is called, not inside the query by the optimizer.
I still have mixed feelings about overuse of SQL stored procedures, but this is a classic example of where one of their benefits is revealed: they are a defined interface for the database, where DB-specific types can be handled instead of polluting your code with specifics about your DB.
(This is also a problem for other type mismatches like DateTime/Date, numeric types, etc.)
ziml77
Sprocs are how I handle complex queries rather than embedding them in our server applications. It's definitely saved me from running into problems like this. And it comes with another advantage of giving DBAs more control to manage performance (DBAs do not like hearing that they can't take care of a performance issue that's cropped up because the query is compiled into an application)
bonesss
As a general issue of hygiene I tend to wrap any ORM and access it through an internal interface.
1) The joy of writing and saying DapperWrapper can’t be overstated.
2) in conjunction with meaningful domain types it lets you handle these issues across the app at a single point of control, and capture more domain semantics for testing.
diath
It's weird that the article does not show any benchmarks but crappy descriptions like "milliseconds to microseconds" and "tens of thousands to single digits". This is the kind of vague performance description LLMs like to give when you ask them about performance differences between solutions and don't explicitly ask for a benchmark suite.
pllbnk
I disagree. I think it's a nice discovery many might be unaware of and later spend a lot of time on tracking down the performance issue independently. I also disagree that a rigorous benchmark is needed for every single performance-related blog post because good benchmarks are difficult to write, you have to account for multiple variables. Here, the author just said - "trust me, it's much faster" and I trust them because they explained the reasoning behind the degradation.
nmeofthestate
The writing style certainly screams LLM.
_vertigo
> No schema changes. No new indexes. No query rewrites. Just telling Dapper the correct parameter type.
pllbnk
Are we automatically discarding everything that might or might not have been written or assisted by an LLM? I get it when the articles are the type of meaningless self improvement or similar kind of word soup. However, if hypothetically an author uses LLM assistance to improve their styling to their liking, I see nothing wrong with that as long as the core message stands out.
rmunn
I've seen so many LLM-generated articles by this point that obviously had no human editing done beforehand — just prompt and slap it onto the Web — that it makes me wonder every time. If I read this article, will I actually learn only truth? Or are there some key parts of this article that are actually false because the LLM hallucinated them, and the human involved didn't bother to double-check the article before publishing it?
If someone was just using the LLM for style, that's fine. But if they were using it for content, I just can't trust that it's accurate. And the time cost for me to read the article just isn't worth it if there's a chance it's wrong in important ways, so when I see obvious signs of LLM use, I just skip and move on.
Now, if someone acknowledged their LLM use up front and said "only used for style, facts have been verified by a human" or whatever, then I'd have enough confidence in the article to spend the time to read it. But unacknowledged LLM use? Too great a risk of uncorrected hallucinations, in my experience, so I'll skip it.
downsplat
Did this post come out of a freezer from 1998? Who on earth creates databases in Latin1 in 2026?
Nevermind, looks like Sql Server didn't add utf8 collations until 2019 (!) and for decades people had to choose column by column between the 16-bit overhead of "nvarchar" and latin1... And still do if they want a bit of backwards compatibility. Amazing.
rmunn
"Just use Postgres" (which defaults to UTF-8 encoding unless specifically configured to use something else) is looking like better and better advice every day.
Doesn't help those tied to legacy systems that would require a huge, expensive effort to upgrade, though. Sorry, folks. There's a better system, you know it's a better system, and you can't use it because switching is too expensive? I've been there (not databases, in my case) and it truly sucks.
elmigranto
Third-party dependencies are very easy: you just have to intimately know how they are implemented, in addition to knowing your own code and stack, and then you are golden!
Nothing to learn, just focus on making your app, it's all taken care of by This One Simple Package ;)
These things are far from free, even though our tooling presents them as "just nuget it or whatever".
DeathMetal3000
I’m sure writing their own ORM would have given them instantaneous insight into this issue and introduced no other challenges. Open source developers hate this one weird trick!
elmigranto
Especially for things used directly, you need to understand both your own and the third-party code, roughly to the same level. With your own code, you only care about your own use case; with third-party code, you have to kind of get everyone else's too.
Depending on what you do and the dependency's scope, either way can make sense.
maciekkmrk
Interesting problem, but the AI prose makes me not want to read to the end.
jiggawatts
This feels like a bug in the SQL query optimizer rather than Dapper.
It ought to be smart enough to convert a constant parameter to the target column type in a predicate constraint and then check for the availability of a covering index.
valiant55
There's a data type precedence that it uses to determine which value should be cast[0]. Nvarchar has higher precedence, so the varchar value is "lifted" to an nvarchar value first. This wouldn't be an issue if the types were the other way around.
0: https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-...
Sankozi
So such issues can appear in other products and with other data types (int and bigint have the same problem).
This is a really bad rule for SQL's "equality" operator.
Still, the optimizer should be able to handle it: if the result is the same, it should take the faster path.
wvenable
It's the optimizer caching the query plan as a parameterized query. It's not re-planning the index lookup on every execution.
beart
How do you safely convert a 2 byte character to a 1 byte character?
jiggawatts
Easily! If it doesn't convert successfully because it includes characters outside of the range of the target codepage then the equality condition is necessarily false, and the engine should short-circuit and return an empty set.
jklowden
> SQL Server has to convert every single value in the column to nvarchar before it can compare.
This of course is not true. It is a defect in Microsoft’s query planner. And the proof lies in the remedy.
The recommended solution is to convert the search argument type to match that of the index. The user is forced to discover the problem and adjust manually. SQL Server could just as well have done that automatically.
No information is lost converting nvarchar to varchar if the index is varchar. And if the search argument contains a character the varchar code page cannot represent, no value in the index can match it anyway (unless the index data is UTF-8, which the server should know).
This is a longstanding bug in SQLserver, and not the only one. Instead of patting ourselves on the back for avoiding what SQL Server “has to do”, we should be insisting it not do it. Anymore.
smithkl42
Been bit by that before: it's not just an issue with Dapper, it can also hit you with Entity Framework.
mvdtnz
This is a really interesting blog post - the kind of old school stuff the web used to be riddled with. I must say - would it have been that hard to just write this by hand? The AI adds nothing here but the same annoying old AI-isms that distract from the piece.
wronex
This makes me sad. We have these type-safe languages, then a DB comes along and breaks the type barrier. What are we to do? Property attributes and an ORM? Is there a Linq to SQL thing?
andrelaszlo
I thought, having just read the title, that maybe it's time to upgrade if you're still on Ubuntu 6.06.
This really doesn't have anything to do with C#. This is your classic nvarchar vs varchar issue (or unicode vs ASCII). The same thing happens if you mix collations.
I'm not sure why anyone would choose varchar for a column in 2026 unless you have some sort of ancient backwards-compatibility situation.