Subject: Re: [SG16-Unicode] Replacement for codecvt
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2019-08-30 08:55:15
On 29/08/2019 23:29, JeanHeyd Meneide wrote:
> As a minor sidenote, for the sake of discussion (and because I am
> drinking too much coffee), you can make the Encoding Object approach
> compile fast by employing all the type-fixing to fit your needs. Using
> the below implementation:
Ah, you're thinking in terms of engineering. This constraint is
organisational, not technical.
I've worked a fair few contracts where automated tooling checked all
header files for any evidence of std::allocator being instantiated. If it
found any, it failed the build (the check is easy: preprocess each header
file, then regex for allocator<.*>).
This bans lots of useful stuff from header files, like std::string. But
it's not a stupid ban. If you're compiling millions of files, every time
any header file includes anything of substance, you add *hours* to the
total build time.
There have been promises that Modules will fix this. But having worked
with these guys, there is no way they'll even touch Modules for another
decade. They'll wait for v2 or v3 Modules before even doing a test. And
even then, I have serious doubts that Modules as currently designed can
deliver much build time improvement.
Anyway, Outcome tries very hard to avoid including any header of any
complexity, to keep build times low. LLFIO and most of my libraries do
the same. Users choose my libraries based on their low build time
impact. They were promised that, and I cannot break that promise. That's
why the use of Ranges is a red line for me, and for many others.
Even if you hardcode template specialisations, the point is avoiding
#include, not making builds fast per se. End users with large codebases
run automated checks on #include; that's what they test for to stop
developers checking in bad code. Nobody tests actual build time impact,
or at least I've not seen it done yet.
SG16 list run by firstname.lastname@example.org