Teletext’s creative legacy lives on
> Like Walkmans and VHS recorders, teletext now seems impossibly quaint. But designer and writer Craig Oldham explains that not only was teletext a revolutionary technology in its prime, but its creative legacy lives on with a new generation of artists who love its creative limits.
Writing a Texture Painter: Part #1
> Many programmers appreciate being able to see their code render something interesting to the screen. For a while I’ve wanted to write a texture painter, where I can import a model, paint colors on it, and then export those textures back to a file. I’m using OpenGL in my code, but I’ll focus more on the actual mechanics and less on the language or code.
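Not from the article, but to make the “mechanics” concrete: once a brush ray hits a triangle of the model, the hit’s barycentric weights can be interpolated across that triangle’s UVs to find which texel to paint. A minimal sketch with illustrative names:

```typescript
// Illustrative sketch only (names and structure are not from the article):
// map a brush ray/triangle hit to the texel it should paint.

interface Vec2 { x: number; y: number; }

// (u, v, w) are the barycentric weights of the hit point on a triangle
// whose vertices carry the texture coordinates uv0, uv1, uv2.
function hitToTexel(
  u: number, v: number, w: number,
  uv0: Vec2, uv1: Vec2, uv2: Vec2,
  texWidth: number, texHeight: number,
): { tx: number; ty: number } {
  const s = u * uv0.x + v * uv1.x + w * uv2.x;
  const t = u * uv0.y + v * uv1.y + w * uv2.y;
  return {
    tx: Math.min(texWidth - 1, Math.floor(s * texWidth)),
    ty: Math.min(texHeight - 1, Math.floor(t * texHeight)),
  };
}

// Painting then amounts to splatting the brush color into the texture
// around (tx, ty) and re-uploading the dirty region to the GPU.
```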
Boosting the Real Time Performance of Gnome Shell 3.34 in Ubuntu 19.10
> As you may have read many times, Gnome 3.34 brings much improved desktop performance. In this article we will describe some of the improvements contributed by Canonical, how the problems were surprising, how they were approached, and what other performance work is coming in the future.
> The thing is, in the case of Gnome Shell, its biggest performance problems of late were not hot spots at all. They were better characterised as cold spots, where it was idle instead of updating the screen smoothly. Such cold spots are only apparent when you look at the real-time usage of a program, and not in the CPU or GPU time consumed.
Nice write-up on addressing stuttering and lag.
Towards a unified theory of reactive UI
> In trying to figure out the best reactive structure for druid, as well as how to communicate that to the world, I’ve been studying a wide range of reactive UI systems. I’ve found an incredible diversity, even though they have fairly consistent goals. This post is an attempt to find common patterns, to characterize the design space as a whole. It will be rough, at some points almost a stream of consciousness. If I had the time and energy, I think it could be expanded into an academic paper. But, for now, perhaps these rough thoughts are interesting to some people working in the space.
Announcing the Allsorts Font Shaping Engine
> Today YesLogic is open-sourcing the Allsorts font parser, shaping engine, and subsetter for OpenType, WOFF, and WOFF2 under the Apache 2.0 license. Allsorts was extracted from the Prince HTML to PDF typesetting and layout tool and is implemented in Rust.
> Font shaping is the process of laying out the glyphs of a font in order to represent some input text. Rasterisation of the glyphs is a separate process. Font shaping for Latin text is quite simple. For some scripts, like those used by Indic languages, it is quite complex and requires reordering and substituting the glyphs in each syllable to produce the final output. There are only three main font shaping engines in use today: DirectWrite on Windows, CoreText on macOS and iOS, and HarfBuzz on open-source operating systems and some web browsers. Of these, only HarfBuzz is open source.
The wet bird
> This image won the March-April 2000 round of the Internet Ray-Tracing Competition, with the topic “City”.
> There are many city pictures on Oyonale. Cities are a favourite subject of mine, so the IRTC “City” topic was somehow perfect. Too perfect actually, because it came at a time when I was tired of making urban pictures. I didn’t want to make another “something strange happens here” picture, or model another building. I wanted fresh ideas that would involve the use of new techniques.
> Of course, even with the city as the main attraction, the image still lacked a concept. The Megapov documentation provided the solution: because meshes can be copied (almost) endlessly, they’re good candidates for motion blur. So here it was: the picture would be about New York (actually a fantasy twin), and it would involve a motion-blurred character. Since motion blur is primarily a photographic effect, it was another excuse to make the picture highly realistic. The character could be a ghost from the past: a human being, like a 19th-century lady, or even an animal. I briefly ran experiments with a deer, but I decided that I had made enough “animals in the city” pictures. The character could also be a simple, hurried passer-by. In fact, I’m still not sure what the blurred character really is.
Signed distance fields
> It would be fun, I thought, to be able to specify the desired cross-sections, and have something generate the required 3D shape (if it existed) in real-time.
> Dealing with all of the details of creating a mesh with the right vertices etc. sounded painful though. Fortunately, I had been reading recently about a different kind of 3D rendering technique which makes these kind of boolean operations trivial – signed distance fields.
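Those “trivial” boolean operations really are one-liners on distance functions. A minimal sketch (not the author’s code) of a couple of primitives and the standard min/max combinators:

```typescript
// Minimal signed-distance-field sketch (not the article's code).
// d < 0: inside, d > 0: outside, d = 0: on the surface.

type Vec3 = [number, number, number];
type SDF = (p: Vec3) => number;

const length = ([x, y, z]: Vec3) => Math.sqrt(x * x + y * y + z * z);

// Primitives
const sphere = (r: number): SDF => (p) => length(p) - r;
const box = ([bx, by, bz]: Vec3): SDF => ([x, y, z]) => {
  const q: Vec3 = [Math.abs(x) - bx, Math.abs(y) - by, Math.abs(z) - bz];
  const outside = length([Math.max(q[0], 0), Math.max(q[1], 0), Math.max(q[2], 0)]);
  const inside = Math.min(Math.max(q[0], Math.max(q[1], q[2])), 0);
  return outside + inside;
};

// Boolean operations are just min/max of distances.
const union = (a: SDF, b: SDF): SDF => (p) => Math.min(a(p), b(p));
const intersect = (a: SDF, b: SDF): SDF => (p) => Math.max(a(p), b(p));
const subtract = (a: SDF, b: SDF): SDF => (p) => Math.max(a(p), -b(p));
```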
Vulkan Progress Report #5
> Another month, another Vulkan progress report! October was a busy month, as most of it was split between working on the new Global Illumination system and Godotcon/GIC in Poland. Despite this, strong progress was made and the new GI system seems pretty much complete.
> Godot 3.0 introduced GIProbes. They provide Global Illumination to scenes. They were, however, pretty limited. Only static geometry could provide GI and dynamic objects were ignored. On top of this, changes in light settings lagged behind by a significant number of frames. Combined with not-so-great performance and quality, the feature was barely usable as is.
> For Godot 4.0, GIProbes will see several significant changes, which the post outlines in detail.
Half The Precision, Twice The Fun: Working With FP16 In HLSL
> It turns out that fp16 is still useful for the reasons it was originally useful back in the days of D3D9: it’s a good way to improve throughput on a limited transistor/power budget, and the smaller storage size means that you can store more values in general purpose registers without having your thread occupancy suffer due to register pressure. As of Nvidia’s new Turing architecture (AKA the RTX 2000 series), AMD’s Vega (AKA gfx900, AKA GCN 5) series and Intel’s Gen8 architecture (used in Broadwell), fp16 is now back in the desktop world. Which means that we desktop graphics programmers now have to deal with it again. And of course if you’re a mobile developer, it never really left in the first place. But how do you actually use fp16 in your shader code? That’s exactly what this blog post will explain!
Dramatically reduced power usage in Firefox 70 on macOS with Core Animation
> In Firefox 70 we changed how pixels get to the screen on macOS. This allows us to do less work per frame when only small parts of the screen change. As a result, Firefox 70 drastically reduces the power usage during browsing.
> Every Firefox window contains one OpenGL context, which covers the entire window. Firefox 69 was using the API described above. So we were always redrawing the whole window on every change, and the window manager was always copying our entire window to the screen on every change. This turned out to be a problem despite the fact that these draws were fully hardware accelerated.
> Core Animation is the name of an Apple framework which lets you create a tree of layers (CALayer). These layers usually contain textures with some pixel content. The layer tree defines the positions, sizes, and order of the layers within the window. Starting with macOS 10.14, all windows use Core Animation by default, as a way to share their rendering with the window manager.
Text Rendering Hates You
> Rendering text, how hard could it be? As it turns out, incredibly hard! To my knowledge, literally no system renders text “perfectly”. It’s all best-effort, although some efforts are more important than others.
I lost it at multicolored ligatures.
What Remains Technical Breakdown
> What Remains is a narrative adventure game for the 8-bit NES video game console, and was released in March 2019 as a free ROM, playable in an emulator. It was created by a small team, Iodine Dynamics, over the course of two years of on-and-off development. It’s currently in the hardware phase as a limited batch of cartridges is being created from all recycled parts.
> The game plays out over 6 stages, wherein the player walks around multiple scenes with 4-way scrolling maps, speaking to NPCs, collecting clues, learning about their world, playing mini-games, and solving simple puzzles. As the primary engineer on this project, I faced a lot of challenges in bringing the team’s vision to reality. Given the significant constraints of the NES hardware, making any game is difficult enough, let alone one with as much content as What Remains. Only by creating useful subsystems to hide and manage this complexity were we able to work as a team to complete the game.
> Herein is a technical breakdown of some of the pieces that make up our game’s engine, in the hopes that others find it useful or at least interesting to read about.
Explanations
> The goal of Explanations is to try to allow people to play with fun parts of computers. Graphics, compression, audio. The tagline is my biggest inspiration: “Play, don’t show”, riffing off the typical “Show, don’t tell” rule of writers and authors everywhere. Why bother giving a diagram when I can give you an inspector and let you poke at things yourself?
> Previously, this series was known as “Xplain” and was more focused on the X11 window system and protocol, but I’ve been slowly moving towards anything that interests me, and I’m hijacking this project for it since I really like the format and style I’ve developed. The code for every single one of these demos is available in the GitHub repo, and I do try to comment heavily and go into even more depth there! Play with the code! Use it for one of your own projects! It’s all MIT/X11 licensed. I very much appreciate followup questions and any sort of feedback through the links mentioned above.
> You might have noticed that when you ran your mouse over the stipple, your cursor changed. That’s because this isn’t just any old stipple image, that stipple is actually the background of a full X server session running in your browser using HTML5 canvas. All of the interactive demos will use this framework to explain what’s going on under the hood.
Author comment: https://news.ycombinator.com/item?id=21041340
3D Ken Burns Effect from a Single Image
> In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud.
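The paper is the reference here, but the “input image to point cloud” step it describes comes down to back-projecting every pixel through a pinhole camera using the predicted depth. A rough sketch, assuming a known focal length in pixels:

```typescript
// Rough sketch of back-projecting a depth map into a point cloud
// (illustrative only; the paper's actual pipeline is more involved).

interface Point3 { x: number; y: number; z: number; }

function depthToPointCloud(
  depth: Float32Array,   // per-pixel depth, row-major
  width: number,
  height: number,
  focal: number,         // focal length in pixels (assumed known)
): Point3[] {
  const cx = width / 2, cy = height / 2;
  const points: Point3[] = [];
  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const z = depth[v * width + u];
      points.push({
        x: ((u - cx) / focal) * z,
        y: ((v - cy) / focal) * z,
        z,
      });
    }
  }
  return points;
}

// New frames are then rendered by projecting these points through a
// virtual camera moved along the Ken Burns path, with inpainting
// filling the disocclusions.
```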
Color Emulation
> Nearly all retro game systems generate colors in some variant of RGB encoding. But the raw pixel colors are often designed for very different screens than those that emulators typically run on. In this article, I’ll walk through the importance of color emulation, and provide some example code and screenshots.
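As a hedged illustration of what such code looks like (the article itself covers the per-system details), decoding one common retro format, 15-bit BGR555 as used by the SNES and GBA, into 8-bit RGB for a modern display might look like this:

```typescript
// Decode a 15-bit BGR555 color word (SNES/GBA style) into 8-bit RGB.
// Illustrative sketch only; other systems differ (the NES, for
// instance, outputs a palette index rather than RGB).

function bgr555ToRgb888(word: number): [number, number, number] {
  const r5 = word & 0x1f;
  const g5 = (word >> 5) & 0x1f;
  const b5 = (word >> 10) & 0x1f;
  // Naive expansion: replicate the high bits so 0x1f maps to 0xff.
  const expand = (c5: number) => (c5 << 3) | (c5 >> 2);
  return [expand(r5), expand(g5), expand(b5)];
}

// Faithful emulation also has to account for the original display,
// e.g. gamma differences between a CRT and a modern sRGB monitor.
```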
Hybrid screen-space reflections
> As realtime raytracing is slowly, but steadily, gaining traction, a range of opportunities to mix rasterisation-based rendering systems with raytracing is starting to become available: hybrid raytracing, where rasterisation provides the hit points for the primary rays; hybrid shadows, where shadowmaps are combined with raytracing to achieve smooth or higher-detail shadows; hybrid antialiasing, where raytracing is used to antialias the edges only; and hybrid reflections, where raytracing is used to fill in the areas that screenspace reflections can’t resolve due to lack of information.
> Of these, I found the last one particularly interesting: how well can a limited-information lighting technique like SSR be combined with a full-scene-aware one like raytracing? So I set about exploring this further.
An introduction to D3.js
> So, you want to create amazing data visualizations on the web and you keep hearing about D3.js. But what is D3.js, and how can you learn it? Let’s start with the question: What is D3? While it might seem like D3.js is an all-encompassing framework, it’s really just a collection of small modules.
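For a taste of those small modules working together, here is a minimal data-join example (assumes D3 v5.8 or later for `selection.join`):

```typescript
// Minimal D3 sketch: bind an array of numbers to <p> elements.
import * as d3 from "d3";

const data = [4, 8, 15, 16, 23, 42];

d3.select("body")
  .selectAll("p")
  .data(data)
  .join("p")                        // enter/update/exit in one call
  .text((d) => `value: ${d}`)
  .style("font-size", (d) => `${d}px`);
```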
Anime4K - A High-Quality Real Time Anime Upscaler
> We present a state-of-the-art, high-quality, real-time SISR algorithm designed to work with Japanese animation and cartoons that is extremely fast (~3 ms on a Vega 64 GPU), temporally coherent, simple to implement (~100 lines of code), yet very effective. We find it surprising that this method is not currently used ‘en masse’, since the intuition leading us to this algorithm is very straightforward. Remarkably, the proposed method does not use any machine-learning or statistical approach, and is tailored to content that puts importance on well-defined lines/edges while tolerating a sacrifice of the finer textures.
The 18-month fence hop, the six-day chair, and why video games are so hard to make
> Whether or not a player notices, appreciates, or is able to see these details, everything from a pen on a desk to a chair in a room has to be meticulously made, scrutinized, and tested. But at what cost? How does a developer decide how much time to allocate to set dressing a small room versus a game’s main character? How many polygons should an asset in the corner of a player’s eye get versus something directly in their face?
Turning a MacBook into a Touchscreen Using the Webcam
> Our idea was to retrofit a small mirror in front of a MacBook’s built-in webcam, so that the webcam would be looking down at the computer screen at a sharp angle. The camera would be able to see fingers hovering over or touching the screen, and we’d be able to translate the video feed into touch events using computer vision.
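The excerpt doesn’t include code, but the core mapping step is standard computer vision: a planar homography from the mirrored camera image to screen coordinates. A minimal sketch, assuming the matrix has already been estimated in a calibration step (e.g. by tapping known points on the screen):

```typescript
// Sketch (not the authors' code): map a fingertip detected in the
// mirrored webcam image to screen coordinates via a homography H,
// assumed to come from a prior calibration step.

type H3x3 = [number, number, number,
             number, number, number,
             number, number, number];

function cameraToScreen(H: H3x3, u: number, v: number): { x: number; y: number } {
  const X = H[0] * u + H[1] * v + H[2];
  const Y = H[3] * u + H[4] * v + H[5];
  const W = H[6] * u + H[7] * v + H[8];
  return { x: X / W, y: Y / W };  // perspective divide
}

// One common trick for telling hover from touch with this setup is to
// watch the fingertip and its reflection in the glossy screen converge.
```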