Hacker News Reader: Top @ 2026-04-13 10:28:49 (UTC)

Generated: 2026-04-13 12:04:36 (UTC)

10 Stories
8 Summarized
1 Issue

#1 All elementary functions from a single binary operator (arxiv.org) §

summarized
411 points | 115 comments

Article Summary (Model: gpt-5.4-mini)

Subject: One Operator, Everything

The Gist: The paper claims that the binary operator eml(x, y) = exp(x) - ln(y), together with the constant 1, can generate a standard scientific-calculator repertoire: arithmetic, exponentials, logs, constants like e, pi, and i, and many elementary functions. The author says this was found by exhaustive search and shown constructively. The same uniform binary-tree form (S -> 1 | eml(S,S)) is then used as a trainable symbolic-regression model that can recover exact formulas from data at shallow depths.
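Two identities follow directly from the operator's definition and are easy to sanity-check numerically. A minimal sketch (illustrative identities only, not the paper's actual exhaustive-search constructions):

```python
import math

def eml(x, y):
    # The paper's single operator: eml(x, y) = exp(x) - ln(y)
    return math.exp(x) - math.log(y)

# exp(x) = eml(x, 1), since ln(1) = 0
assert math.isclose(eml(2.0, 1.0), math.exp(2.0))

# eml(0, y) = 1 - ln(y), since exp(0) = 1
assert math.isclose(eml(0.0, 5.0), 1.0 - math.log(5.0))
```

Deriving subtraction, multiplication, and the rest purely from eml and the constant 1 takes deeper trees; the paper's claim is that such trees exist for the whole elementary repertoire.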

Key Claims/Facts:

  • Universal calculator basis: eml plus 1 can express common operations and elementary functions.
  • Binary-tree grammar: Every expression becomes a tree of identical nodes, simplifying the representation.
  • Trainable symbolic regression: EML trees can be optimized with standard gradient methods to fit data and recover closed-form laws when the target is elementary.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously optimistic, with a strong dose of skepticism about practicality and the size of the claim.

Top Critiques & Pushback:

  • Expression blow-up / efficiency: Several commenters argue that a minimal operator set shifts complexity into much larger trees, so the representation may be elegant but inefficient in practice (c47747708, c47747897, c47748213).
  • Questioning the headline significance: Some call it a neat trick rather than a major breakthrough, or at least say the “one operator does everything” framing overstates the practical impact (c47747671, c47750072).
  • Arithmetic caveats: A thread points out that some derivations appear to rely on extended-real conventions like ln(0) = -∞, and that this caveat is easy to miss when reading the diagrams first; others note the paper does acknowledge this later and offers an alternative construction (c47748011, c47748369, c47748514).

Better Alternatives / Prior Art:

  • NAND / NOR / Iota analogies: Users repeatedly compare EML to NAND/NOR in Boolean logic or to small universal combinators like Iota, emphasizing that universality does not imply practical efficiency (c47747619, c47747456, c47747539).
  • DAG compression / lemma reuse: One commenter suggests that if the concern is exponential tree growth, DAG-style sharing or proof-like reuse of subexpressions could mitigate it, citing metamath as an existence proof for compressed formal objects (c47748938).

Expert Context:

  • Implementation details matter: A knowledgeable commenter notes that the paper’s completeness claim depends on the arithmetic model used; in extended-reals or IEEE 754-style settings ln(0) = -∞ is standard, but pure real-number environments and some languages won’t behave that way (c47748369, c47749568).

#2 The Economics of Software Teams: Why Most Engineering Orgs Are Flying Blind (www.viktorcessan.com) §

summarized
146 points | 73 comments

Article Summary (Model: gpt-5.4-mini)

Subject: Software Team Economics

The Gist: The article argues that software teams are expensive capital investments that most organizations evaluate with the wrong metrics. It estimates an eight-engineer team at about €87k/month and says teams should justify themselves by measurable financial value, not velocity or sentiment. It claims many orgs have been flying blind for years because cheap capital and growth masked poor prioritization, and that LLMs now make the hidden liabilities of large teams and codebases more obvious.

Key Claims/Facts:

  • True team cost: An 8-engineer team in Western Europe is estimated at roughly €1.04M/year including overhead, or about €4k per working day.
  • Break-even and beyond: Platform or product work should generate enough time saved, revenue protected, or conversion uplift to exceed break-even, ideally 3–5x annual cost.
  • AI changes the equation: LLMs/agents reduce the advantage of large codebases and headcount, because functional prototypes and maintenance can be produced faster and cheaper than in the old model.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC
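The article's headline figures hang together arithmetically; a quick check (assuming roughly 260 working days per year, which the quoted per-day figure implies):

```python
monthly_cost = 87_000                  # article's estimate for an 8-engineer team, EUR
annual_cost = monthly_cost * 12        # 1_044_000, i.e. the quoted ~EUR 1.04M/year
per_day = annual_cost / 260            # ~260 working days/year -> ~EUR 4k/day
roi_target = (3 * annual_cost, 5 * annual_cost)  # the article's "3-5x annual cost" band

print(annual_cost)
print(round(per_day))
```

The 260-day divisor is an assumption on my part; the article only states the per-day result.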

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously skeptical, with some agreement that software teams should be measured more economically than they usually are.

Top Critiques & Pushback:

  • The math is too tidy / revenue attribution is fuzzy: Several commenters argue that assigning precise dollar values to features is often impossible or misleading, because revenue and outcomes are shared across many initiatives (c47748744, c47748969, c47749685).
  • The Slack example is overstated: Multiple replies say the article’s “95% of Slack in 14 days” framing ignores the real work of scale, reliability, compliance, security, and enterprise features that make a product actually shippable (c47749768, c47750036).
  • AI-generated code is not a free lunch: Commenters report failed AI-heavy projects where agents made no progress or produced wrong abstractions, arguing that throwing more agents at messy code can worsen agility and maintenance (c47748834, c47749781, c47749974).
  • Teams are more than cost centers: Some push back on reducing engineering to dollar extraction, arguing that products can create hard-to-quantify value like UX, reduced friction, or retention, and that over-optimization risks mediocrity (c47748761, c47749289).

Better Alternatives / Prior Art:

  • Measure usage and guardrails, not just output: One commenter says teams should track feature usage and correlate it with growth/retention while keeping technical and product guardrails, rather than relying on vague PM intuition (c47749015).
  • Accept uncertainty but rank opportunities relatively: Others argue that while exact ROI is unknowable, teams can still compare which initiatives are likely to matter more and prioritize accordingly (c47749685, c47749085).

Expert Context:

  • Mythical Man-Month analogy: One reply likens agent-heavy development to a “Mythical Machine Month,” suggesting coordination and verification costs may dominate as more agents are added (c47749974).

#3 Taking on CUDA with ROCm: 'One Step After Another' (www.eetimes.com) §

fetch_failed
175 points | 135 comments
⚠️ Page was not fetched (no row in fetched_pages).

Article Summary (Model: gpt-5.4-mini)

Subject: ROCm’s Slow Climb

The Gist: The linked piece appears to argue that AMD’s ROCm stack is gradually improving in its effort to compete with CUDA, but only in small steps and with plenty of remaining rough edges. Based on the discussion, it likely focuses on ROCm’s open-source development model, packaging/build complexity, and the practical challenge of making AMD GPUs usable for AI/HPC workloads. This is an inference from comments, so it may be incomplete or slightly off.

Key Claims/Facts:

  • Incremental progress: ROCm is portrayed as advancing step by step rather than as a mature CUDA alternative.
  • Open but messy stack: The ecosystem is open enough to be hacked on and packaged, but building it is complex and dependency-heavy.
  • Hardware/software gap: Support and usability still lag behind CUDA, especially outside AMD’s preferred server-oriented targets.

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously optimistic, but heavily skeptical of AMD’s execution.

Top Critiques & Pushback:

  • ROCm is hard to build and package: Multiple commenters describe porting/building ROCm as a "nightmare," especially on musl/Alpine or deterministic toolchains, due to custom LLVM forks and many dependencies (c47745792, c47746715).
  • Consumer GPU support is too narrow: Users complain that ROCm supports only a limited set of newer consumer cards, leaving older but still common GPUs unsupported or stuck on outdated versions (c47746481, c47747410, c47747006).
  • AMD has neglected the ecosystem: Several comments blame AMD’s management for underinvesting in ROCm and focusing too narrowly on server/HPC use cases while CUDA keeps broad compatibility and better tooling (c47747076, c47749703).
  • ROCm still loses on developer experience: Even when it works, commenters say the setup burden, brittle dependencies, and tooling complexity make it a poor choice compared with CUDA or simpler Vulkan-based paths (c47746517, c47749201, c47748169).

Better Alternatives / Prior Art:

  • Vulkan / llama.cpp: Some users say Vulkan backends are "good enough" or easier to use, especially for local inference, though they acknowledge weaker ergonomics and feature support than CUDA/ROCm (c47746715, c47746864, c47747803).
  • OpenVINO / SYCL: One commenter says Intel’s OpenVINO and SYCL stack looks more promising for some CV/data-science workloads, with better recent hardware/software support (c47748555).
  • CUDA remains the benchmark: Even critics of NVIDIA concede CUDA’s ecosystem, tooling, and backward compatibility are still major reasons developers stay locked in (c47749201, c47749861).

Expert Context:

  • Open-source governance can be a blocker: A deleted/quoted AMD internal comment suggests that some ROCm contributors need special policy exemptions to use external AI tools and contribute upstream, which commenters see as evidence of culture and bureaucracy problems rather than just technical ones (c47746221, c47746500, c47747531).

#4 Bring Back Idiomatic Design (2023) (essays.johnloeber.com) §

summarized
563 points | 320 comments

Article Summary (Model: gpt-5.4-mini)

Subject: Bring Back Idioms

The Gist: The essay argues that older desktop software felt easier to use because interfaces were more homogeneous: common controls, keyboard shortcuts, menu structures, and visual cues were reused across apps. Modern web apps are often individually polished but inconsistent with each other, so users have to relearn basic interactions like date entry, navigation, and form submission. The author blames the shift to mobile, frameworks, and custom-built front ends, and calls for a renewed focus on HTML/browser idioms and stricter interface conventions.

Key Claims/Facts:

  • Homogeneous UI: Desktop-era ecosystems constrained apps into shared patterns, making common actions predictable across software.
  • Web fragmentation: Modern front ends often reimplement basics instead of using native HTML/browser behaviors, producing inconsistent shortcuts and controls.
  • Design guidance: The author recommends following HTML/CSS/browser idioms, preferring obvious text labels, and preserving default behaviors like back-button support and standard links.
Parsed and condensed via gpt-5.4-mini at 2026-04-12 14:05:38 UTC

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously optimistic — many commenters agree that inconsistent UI is frustrating, but they disagree sharply about the cause and whether the old desktop model is the right benchmark.

Top Critiques & Pushback:

  • Desktop nostalgia oversimplifies history: Several users argue the “desktop era” was really a Windows-era rail system, not true universal consistency, and note that Microsoft apps themselves often diverged from standards (c47740863, c47741843, c47741397).
  • The web/mobile constraints are real: Others say the web’s cross-platform nature, touch UI, and the need to support many device forms make strict homogeneity much harder than in the old desktop world (c47741397, c47742498, c47742637).
  • Idioms can still be fragile or overloaded: Commenters point out that even supposedly standard behaviors like Enter/Shift+Enter vary wildly across chat apps, code blocks, and tool modes, which can be confusing but also context-dependent (c47741915, c47742755, c47745591).

Better Alternatives / Prior Art:

  • Use native/browser controls: Several users argue that standard HTML form elements, input type=date, autofill, and native buttons solve many of the problems the essay laments (c47744491, c47744728, c47745750).
  • Follow platform conventions, not custom invention: Some suggest the real lesson is to stay on the rails of the host OS or browser, rather than trying to invent new idioms from scratch (c47742588, c47741528).
  • Configurability helps: A recurring suggestion is to make key bindings like Enter/Ctrl+Enter user-configurable, especially in chat and messaging tools (c47741979, c47748133, c47745570).

Expert Context:

  • Modal behavior is sometimes intentional: A few commenters explain that apps like Slack, Teams, and Signal switch Enter behavior based on context because chat input and long-form composition are different modes; the problem is less the idea than the opacity and inconsistency of the mode switch (c47745591, c47742755).
  • Accessibility and support matter: Some note that “idiomatic” HTML and standard widgets are not just aesthetic preferences; they also improve accessibility, reduce support burden, and avoid cursor/input bugs caused by custom controls (c47742637, c47746811, c47746576).

#5 DIY Soft Drinks (blinry.org) §

summarized
456 points | 129 comments

Article Summary (Model: gpt-5.4-mini)

Subject: DIY Soda Lab

The Gist: The post documents a hobbyist’s experiments making soft drinks from scratch, especially cola, orange soda, and almond soda. The core method is to blend tiny amounts of essential oils with gum arabic to make a stable flavor emulsion, then add caramel color, citric acid, and either artificial sweetener or sugar to form a syrup/concentrate. The author iterates through batches, comparing taste, adjusting sweetness/acidity, and eventually shares recipes publicly, including a sugar-free, caffeine-free cola.

Key Claims/Facts:

  • Oil-based flavoring: Essential oils provide the flavor, but need gum arabic to emulsify into water.
  • Concentrate approach: The drink is made as a syrup/concentrate, then diluted for serving.
  • Recipe iteration: The author tweaks sweeteners, acidity, and oil ratios across batches to improve taste and create new flavors.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Enthusiastic, practical, and very “make it yourself,” with lots of recipe-sharing and process tips.

Top Critiques & Pushback:

  • Sourcing is the hard part: Several commenters say the real challenge is obtaining gum arabic, flavor oils, and specialty ingredients in small quantities; one commercial bottling-license holder suggests pre-hydrated gum arabic and water-soluble flavor concentrates instead (c47744268, c47745729).
  • DIY may be less efficient than buying concentrate: Some note that ready-made cola concentrates like Cube-Cola can be cheaper and simpler than sourcing oils individually (c47743289, c47743420).
  • Carbonation/setup safety: The thread includes cautionary discussion about DIY carbonation pressures, bottle types, and the tradeoff between cheap bulk CO2 setups and safer convenience systems (c47745498, c47746919, c47746015).

Better Alternatives / Prior Art:

  • Commercial flavor concentrates: People recommend water-soluble concentrates from suppliers like Apex Flavors, Nature’s Flavors, or Bakto Flavors instead of emulsifying oils yourself (c47744268, c47745137).
  • Established DIY recipes: Cube-Cola, Open Cola, and related cola-reverse-engineering recipes are cited as prior art or easier starting points (c47743289, c47744471).
  • Other homemade drinks: Commenters share alternatives like cold-brew mate/Club-Mate-style drinks, kvass, kombucha, and water kefir as easier homemade beverage projects (c47744471, c47747915, c47743804).

Expert Context:

  • A commenter with a commercial bottling license says the most common failure point is hydrating gum arabic incorrectly, and advises mixing it with dry ingredients first if you can’t get pre-hydrated material (c47744268).
  • Another notes that the post’s methods are basically a small-scale version of standard beverage formulation: use oils only when needed, otherwise water-soluble flavors keep drinks clear like Sprite (c47744268).

#6 Show HN: boringBar – a taskbar-style dock replacement for macOS (boringbar.app) §

summarized
378 points | 208 comments

Article Summary (Model: gpt-5.4-mini)

Subject: macOS Dock Replacement

The Gist: boringBar is a macOS 14+ utility that replaces the Dock with a taskbar-style bar focused on windows rather than apps. It shows only windows on the active desktop, supports previews, desktop switching, pinned apps, notification badges, and an app launcher, while also offering an optional Dock-hiding mode. It needs Accessibility and Screen Recording permissions for window interaction and thumbnail previews.

Key Claims/Facts:

  • Window-centric bar: Organizes open windows by desktop/display, with hover previews, counts, and optional full titles.
  • Productivity extras: Includes searchable app launching, scroll-to-switch desktops, badges, attention pulses, and multi-display support.
  • Pricing model: Offers a 14-day trial, a perpetual personal license ($40, 2 devices, 2 years of updates), and a yearly subscription option; business licensing is annual and volume-priced.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously optimistic overall, but pricing and licensing dominate the thread and initially drew strong pushback.

Top Critiques & Pushback:

  • Subscription fatigue / ownership concerns: Many users disliked the original subscription-only model for a local utility, arguing they prefer perpetual or versioned purchases for desktop apps and don’t want a payment that can be revoked if the company disappears (c47742559, c47742616, c47743949).
  • Device limits are a dealbreaker for some: The 2-device cap on personal licenses was criticized as too restrictive for people who regularly switch among several Macs (c47744336).
  • Security / trust concerns on macOS: One commenter said this kind of utility requires broad OS permissions and feels like another black box to trust (c47743014).

Better Alternatives / Prior Art:

  • JetBrains-style or versioned licensing: Several commenters suggested a major-version or support-window model: pay once, get updates for a defined period, then keep the last version forever (c47743211, c47743607).
  • Alternative tools: People pointed to Alfred/Raycast, Aerospace, sketchybar, zebar, and uBar as existing solutions or comparisons, with one noting Alfred’s one-time purchase model as a good precedent (c47742734, c47749730).
  • Open source option: A few users argued it should be open-sourced, or noted that an open-source clone already appeared quickly (c47748459, c47749913).

Expert Context:

  • Developer response and pricing revision: The author actively engaged with feedback and changed the offering from subscription-only to a perpetual personal license with 2 years of updates, while keeping annual business pricing (c47744059, c47744026, c47743992).
  • Behavior / UX details: Users reported a couple of functional quirks, especially that clicking a chip doesn’t always foreground the window and that apps with no open windows become harder to notice or relaunch; the author acknowledged some of these and discussed future fixes (c47749730, c47749823).

#7 A perfectable programming language (alok.github.io) §

summarized
130 points | 45 comments

Article Summary (Model: gpt-5.4-mini)

Subject: Lean as “Perfectable”

The Gist: The post argues that Lean stands out because it is not just a dependently typed language and theorem prover, but a system you can reason about inside itself. That makes it “perfectable”: users can state and prove properties about programs, build custom syntax and metaprogramming layers, and use proofs as a basis for optimization and refactoring. The author also claims Lean is fast enough to matter and has the strongest momentum among proof-oriented languages.

Key Claims/Facts:

  • Self-referential reasoning: You can prove facts about Lean programs in Lean, turning code properties into usable theorems.
  • Metaprogramming + syntax: Lean’s elaborator and custom syntax let you build domain-specific notations cleanly, while still type-checking them.
  • Optimization via proofs: If two pieces of code are provably equal, the compiler can substitute one for the other, suggesting a path to safer optimization and refactoring.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC
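The "optimization via proofs" claim can be illustrated with a toy Lean 4 sketch (my own example, not from the post): once two definitions are proven equal, either can be substituted for the other by rewriting.

```lean
def double (n : Nat) : Nat := n + n
def double' (n : Nat) : Nat := 2 * n

-- Prove the two implementations agree on every input...
theorem double_eq (n : Nat) : double n = double' n := by
  simp only [double, double']
  omega

-- ...so a property established for one transfers to the other by rewriting.
example (n : Nat) : double n % 2 = 0 := by
  rw [double_eq]
  simp only [double']
  omega
```

This is the post's point in miniature: provable equality licenses substitution, which is what makes proof-guided optimization and refactoring safe.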

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Enthusiastic overall, with practical caveats about size, semantics, and maturity.

Top Critiques & Pushback:

  • Distribution bloat: One commenter says Lean 4 has become extremely large to install/unpack compared with Lean 3, calling this a serious regression (c47746948, c47749510).
  • Non-constructive axioms: Another user argues Lean’s standard library includes non-constructive axioms that weaken the “perfectable” story because the kernel cannot reduce them (c47746672).
  • Performance expectations: Some readers are optimistic about Lean’s speed, but others imply it still has room to improve compared with Rust-level performance (c47748114, c47746672).

Better Alternatives / Prior Art:

  • Agda / Isabelle / Coq / F* / TLA+: Users mention these as neighboring tools; Isabelle is suggested as even more bloated than Lean, while Agda is favored by one commenter for mathy programming and TLA+ is cited as practical for verification (c47747543, c47746672, c47747530).
  • Verso: The blog’s hover-to-see-documentation experience is attributed to Verso, a Lean documentation toolchain (c47745554, c47745881).

Expert Context:

  • Real-world use exists: A commenter notes that AWS has notable activity with Isabelle and Lean, and that verified compilers like CompCert are the most widely used proof-assistant-adjacent success stories in industry (c47747530).
  • Lean’s expressiveness: Several comments reinforce the post’s core point that Lean is unusually expressive, including a linked computer algebra simplifier and praise for its documentation and tooling (c47747880, c47745554).

#8 Ask HN: What Are You Working On? (April 2026) () §

pending
227 points | 722 comments
⚠️ Summary not generated yet.

#9 Most people can't juggle one ball (www.lesswrong.com) §

summarized
361 points | 125 comments

Article Summary (Model: gpt-5.4-mini)

Subject: Juggling, From One Ball

The Gist: This post is a practical, step-by-step guide to learning juggling from scratch. It argues that the basics are simpler than they look: first learn a clean one-ball throw, then a two-ball exchange, then extend that motion into a three-ball cascade. From there it introduces tricks, higher numbers, passing, clubs/rings, and siteswap notation as a compact mathematical way to describe patterns.

Key Claims/Facts:

  • One-ball accuracy matters: Juggling begins with a consistent arc that returns to the other hand without chasing the ball.
  • Two balls, not passing: The second ball should be thrown into the inside of the first arc; simply passing it between hands does not scale well.
  • Three balls and siteswap: A 3-ball cascade is built from repeated 2-ball swaps, and siteswap encodes patterns by throw height; the average digit value equals the number of balls.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC
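The siteswap facts in the summary are easy to check in code. This is a standard formulation (the average theorem and the landing-time permutation test are classic juggling-math results, not taken from the article):

```python
def ball_count(pattern: str) -> float:
    # Average theorem: the mean throw height equals the number of balls.
    heights = [int(c) for c in pattern]
    return sum(heights) / len(heights)

def is_juggleable(pattern: str) -> bool:
    # A siteswap is valid iff the landing beats (i + p[i]) mod n are all distinct,
    # i.e. no two throws land in the same hand at the same time.
    heights = [int(c) for c in pattern]
    n = len(heights)
    return sorted((i + h) % n for i, h in enumerate(heights)) == list(range(n))

assert ball_count("3") == 3        # the basic 3-ball cascade
assert ball_count("531") == 3      # also a 3-ball pattern
assert is_juggleable("531")
assert not is_juggleable("543")    # all three throws land on the same beat
```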

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously optimistic — commenters largely agree juggling is learnable and rewarding, while sharing tips, personal anecdotes, and minor corrections.

Top Critiques & Pushback:

  • Teaching method is oversimplified or commonly mis-taught: Several commenters focus on the key correction that beginners should not just pass the second ball hand-to-hand, and note that the article’s sequence is a better way to teach basics (c47749776, c47748703, c47742533).
  • Not everyone learns quickly: A number of people say coordination varies a lot; some learned in days or weeks, others took much longer, and a few say they still struggle with consistent throws (c47742694, c47743101, c47745674).

Better Alternatives / Prior Art:

  • Beanbags, scarves, handkerchiefs, or one-hand drills: Commenters suggest softer or slower objects for beginners, while others recommend practicing two balls in one hand before moving to two hands or higher numbers (c47743796, c47742533, c47744608).
  • Clubs and passing: Some prefer moving from balls to clubs, while others point to passing patterns and social juggling meetups as the more rewarding next step (c47743587, c47744610, c47747565).

Expert Context:

  • The real skill is consistent timing and arc prediction: Experienced jugglers emphasize that the core of basic juggling is not hand-eye reflexes but a repeatable throw and learning to predict the ball’s path from the top of the arc (c47742533, c47749880).
  • Juggling can be taught creatively: One commenter describes standing in for a student’s non-dominant hand so they can “feel” the pattern immediately; others mention using practice setups like a bed or wall to constrain errors (c47743229, c47743101).

#10 Optimization of 32-bit Unsigned Division by Constants on 64-bit Targets (arxiv.org) §

summarized
79 points | 9 comments

Article Summary (Model: gpt-5.4-mini)

Subject: Faster Constant Division

The Gist: The paper proposes a way to optimize 32-bit unsigned division by constants on 64-bit CPUs by using 64-bit multiplication hardware more directly. For divisors like 7, where traditional “magic number” division needs a 33-bit constant and extra fixup steps, the method instead uses a widened multiply so the quotient can be read from the high half of the product, avoiding the extra shift/adjustment. The authors report LLVM/GCC patches and microbenchmark speedups, and note LLVM has already merged the change.

Key Claims/Facts:

  • Widened multiply: Replaces the 32-bit workaround with a 64-bit high-multiply path that better matches 64-bit targets.
  • No extra shift: For the targeted cases, the quotient comes directly from the high half of the product, eliminating a post-multiply shift.
  • Reported impact: The paper reports microbenchmark speedups on Intel Sapphire Rapids and Apple M4, with an LLVM implementation already accepted upstream.
Parsed and condensed via gpt-5.4-mini at 2026-04-13 10:31:22 UTC
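The core numeric trick can be sketched in a few lines (a hedged illustration of the idea, not the paper's LLVM patch): with a full 64-bit magic constant, the 32-bit quotient is exactly the high 64 bits of the 128-bit product, with no fixup shift or adjustment.

```python
D = 7
M = (1 << 64) // D + 1   # 64-bit magic constant for unsigned division by 7

def div_by_7(n: int) -> int:
    # n is a 32-bit unsigned value; n * M fits in 128 bits, and the
    # quotient falls out of the high half of the widened multiply.
    assert 0 <= n < 1 << 32
    return (n * M) >> 64

# Exact for every 32-bit input: the slack M*7 - 2**64 equals 5,
# well under the 2**32 bound the classic error analysis requires.
for n in (0, 1, 6, 7, 13, 100, 2**32 - 1):
    assert div_by_7(n) == n // 7
```

Contrast with the traditional 32-bit path for D = 7, where the magic constant needs 33 bits and forces the extra shift/add fixups the paper is eliminating.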

Discussion Summary (Model: gpt-5.4-mini)

Consensus: Cautiously optimistic, but several commenters argue the paper rediscovered an older technique and may not represent the true state of the art.

Top Critiques & Pushback:

  • Prior art was missed: Multiple commenters say the key idea already exists in older division-by-constant work: use a different magic number plus a saturating/adjusted increment of the dividend so a 32x32 multiply suffices (c47748097, c47749741).
  • Questionable novelty: One commenter argues the paper’s approach is basically the obvious “use a wider multiply on a wider machine” solution, and that the more interesting optimization is the existing 32x32 strategy with compile-time extra-bit handling (c47748097).
  • Vectorization limits: The LLVM PR is praised as readable, but one commenter notes the method works in scalar registers, not inside vectors, limiting applicability (c47747945).

Better Alternatives / Prior Art:

  • libdivide / Lemire-style methods: Commenters point to prior blog posts and tools that already exploit 64-bit constants for 32-bit division or optimized remainder/division by constants, suggesting this space has known solutions not integrated into compilers yet (c47748329, c47747945).
  • Hacker’s Delight / Granlund-Montgomery lineage: The discussion frames the paper as building on classic magic-number division techniques rather than introducing a wholly new method (c47747683, c47748356).

Expert Context:

  • Why the optimization matters: One commenter explains that for divisors like 7, the magic constant needs 33 bits; on 64-bit targets, a full multiply can effectively supply the needed high bits and make the quotient available without an explicit shift (c47747683).
  • Implementation nuance on x86: Another notes that the paper’s direct use of the multiply high-half can outperform saturating-add fixups on scalar x86 because those fixups are not free, and zero-extension to 64 bits is implicit (c47748291).