Anime4K - A High-Quality Real Time Anime Upscaler
> We present a state-of-the-art high-quality real-time SISR algorithm designed to work with Japanese animation and cartoons that is extremely fast (~3ms with a Vega 64 GPU), temporally coherent, simple to implement (~100 lines of code), yet very effective. We find it surprising that this method is not currently used ‘en masse’, since the intuition leading us to this algorithm is very straightforward. Remarkably, the proposed method does not use any machine learning or statistical approach, and is tailored to content that places importance on well-defined lines/edges while tolerating a sacrifice of the finer textures.
The 18-month fence hop, the six-day chair, and why video games are so hard to make
> Whether or not a player notices, appreciates, or is able to see these details, everything from a pen on a desk to a chair in a room has to be meticulously made, scrutinized, and tested. But at what cost? How does a developer decide how much time to allocate to set dressing a small room versus a game’s main character? How many polygons should an asset in the corner of a player’s eye get versus something directly in their face?
Turning a MacBook into a Touchscreen Using the Webcam
> Our idea was to retrofit a small mirror in front of a MacBook’s built-in webcam, so that the webcam would be looking down at the computer screen at a sharp angle. The camera would be able to see fingers hovering over or touching the screen, and we’d be able to translate the video feed into touch events using computer vision.
History of VGA cables and DDC and more
Banding in Games: A Noisy Rant
> If you use sRGB correctly, you’re doing pretty well - you will generally hardly notice banding (though dark areas remain)
> If you are not on a platform where it’s readily available, or you want to get rid of the last issues, the rest of this presentation is for you
Dithering. Lots of dithering.
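The idea can be sketched in a few lines. Ordered (Bayer) dithering is one classic variant: add a small, position-dependent offset before quantizing so that wide flat bands break up into high-frequency noise. The talk covers several noise distributions; the 4×4 Bayer matrix and the 8-level target depth below are illustrative choices, not values from the slides.

```python
# Ordered (Bayer) dithering sketch: a patterned offset added before
# quantization trades visible banding for high-frequency noise.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def quantize(value, levels):
    """Quantize an 8-bit value down to `levels` evenly spaced levels."""
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

def dither_pixel(value, x, y, levels):
    """Add a position-dependent threshold offset, then quantize."""
    step = 255 / (levels - 1)
    # Map the Bayer entry to a symmetric offset in [-step/2, step/2).
    offset = (BAYER_4X4[y % 4][x % 4] / 16.0 - 0.5) * step
    return quantize(min(255, max(0, value + offset)), levels)

# A smooth horizontal gradient quantized to 8 levels: without dithering,
# runs of neighbouring columns collapse into wide flat bands; with
# dithering, pixels alternate between adjacent levels and the bands break up.
row = [quantize(x, 8) for x in range(256)]
dithered = [dither_pixel(x, x, 0, 8) for x in range(256)]
```

Inside a band (say columns 25–44) the plain quantized row is one constant value, while the dithered row flickers between the two surrounding levels — the same average brightness, but no visible step.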
Survey of Alternative Displays
> The purpose of this article is to collect and consolidate a list of these alternative methods of working with displays, light and optics. This will by no means be an exhaustive list of the possibilities available — depending on how you categorize, there could be dozens or hundreds of ways. There are historical mainstays, oddball one-offs, expensive failures and techniques that are only beginning to come into their own.
There’s more to life than the LCD.
> Transparency may not seem particularly exciting. The GIF image format which allowed some pixels to show through the background was published over 30 years ago. Almost every graphic design application released in the last two decades has supported the creation of semi-transparent content. The novelty of these concepts is long gone.
> With this article I’m hoping to show you that transparency in digital imaging is actually much more interesting than it seems – there is a lot of invisible depth and beauty in something that we often take for granted.
Modern text rendering with Linux
> Welcome to part 1 of Modern text rendering in Linux. In each part of this series we will build a self-contained C program to render a character or sequence of characters. Each of these programs will implement a feature which I consider essential to achieve state of the art text rendering.
> In this first part I will show how to setup FreeType and we will build a console character renderer.
Natural Adversarial Examples
> We introduce natural adversarial examples -- real-world, unmodified, and naturally occurring examples that cause classifier accuracy to significantly degrade. We curate 7,500 natural adversarial examples and release them in an ImageNet classifier test set that we call ImageNet-A. This dataset serves as a new way to measure classifier robustness. Like l_p adversarial examples, ImageNet-A examples successfully transfer to unseen or black-box classifiers. For example, on ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%. Recovering this accuracy is not simple because ImageNet-A examples exploit deep flaws in current classifiers including their over-reliance on color, texture, and background cues. We observe that popular training techniques for improving robustness have little effect, but we show that some architectural changes can enhance robustness to natural adversarial examples. Future research is required to enable robust generalization to this hard ImageNet test set.
Zelda Screen Transitions are Undefined Behaviour
> The vertical scrolling effect in the original “The Legend of Zelda” relies on manipulating the NES graphics hardware in a manner that was likely unintended by its designers.
Vintage TV Test Patterns
> As you might expect, the BBC test card with the girl and clown has both a backstory and a cult following.
Quake II gets free real-time raytracing updates on June 6
> Windows and Linux users will be able to download the first three levels of the graphically updated game as shareware starting at 6am Pacific Time on June 6. You can play the remaining levels and multiplayer if you point the installer to a legit copy of the full game on your hard drive. The source code for the Vulkan-based update will be posted on GitHub as well, though Quake II expansion packs will not be supported without extra effort from the community.
2D Graphics on Modern GPU
> I have found that, if you can depend on modern compute capabilities, it seems quite viable to implement 2D rendering directly on GPU, with very promising quality and performance. The prototype I built strongly resembles a software renderer, just running on an outsized multicore GPU with wide SIMD vectors, much more so than rasterization-based pipelines.
Unraveling the JPEG
> JPEG images are everywhere in our digital lives, but behind the veil of familiarity lie algorithms that remove details that are imperceptible to the human eye. This produces the highest visual quality with the smallest file size—but what does that look like? Let’s see what our eyes can’t see!
> This article is about how to decode a JPEG image. In other words, it’s about what it takes to convert the compressed data stored on your computer to the image that appears on the screen. It’s worth learning about not just because it’s important to understand the technology we all use everyday, but also because, as we unravel the layers of compression, we learn a bit about perception and vision, and about what details our eyes are most sensitive to. It’s also just a lot of fun to play with images this way.
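As a taste of one of those layers, here is a minimal sketch of the 8×8 DCT-II plus quantization step at the heart of JPEG. The flat quantization step used below is a simplification — real JPEG uses a perceptually tuned 8×8 quantization table, which is exactly where the “details our eyes can’t see” get thrown away.

```python
import math

N = 8  # JPEG operates on 8x8 pixel blocks

def dct_2d(block):
    """Forward 2D DCT-II of an 8x8 block (naive O(N^4) form for clarity)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    """Divide each coefficient by a step and round. Small high-frequency
    coefficients collapse to zero — that is where the compression comes from."""
    return [[round(c / q) for c in row] for row in coeffs]

# A flat mid-grey block: all the energy lands in the single DC coefficient
# and every AC coefficient quantizes to zero — maximal compressibility.
flat = [[64] * N for _ in range(N)]
coeffs = quantize(dct_2d(flat))
```

For a constant block the DC coefficient is `8 * 64 = 512` (quantized to 32) and all 63 AC coefficients are zero; a block with fine texture instead spreads energy into the high-frequency coefficients, which the quantizer then rounds away.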
Enter The Matrox
> A fast GPU is a must for video processing and other image-heavy productivity tasks, but GPUs are really known for their ability to push the limits on popular 3D-driven games. There’s an obsession with this kind of performance, in fact—hence why folks can’t stop talking about whatever Nvidia and AMD are putting into their graphics chips next. But two decades ago, these two companies had a competitor—and that competitor did something unusual and surprising: The graphics card maker Matrox decided to simply drop out of the market for consumer graphics entirely—a move that seemingly would have sunk any other company. So what happened to Matrox? Well, let me tell you! Today’s Tedium talks Matrox, the misunderstood graphics firm that could never quite get 3D right.
Anti-Ghosting with Temporal Anti-Aliasing
> We decided on TAA for The Grand Tour Game because it tends to produce a softer, more photorealistic image in both static and moving scenes. FXAA (Fast Approximate Anti-Aliasing) and SMAA (Subpixel Morphological Anti-Aliasing) work well for static scenes, but still produce artifacts for moving scenes. Lumberyard’s deferred lighting pipeline does not support MSAA (Multisample Anti-Aliasing).
>
> Like MSAA, TAA uses multiple samples per pixel to provide anti-aliasing. The difference is that with temporal anti-aliasing, the samples are spread across multiple frames. It uses a frame history buffer and a per-pixel velocity buffer to reproject each pixel to gather the additional sample. For each pixel, we use the per-pixel velocity as an offset, as well as the previous frame’s view projection matrix, to determine where to query the frame history buffer. Modifying the camera’s projection matrix with a sub-pixel jitter each frame allows us to produce anti-aliased results even in scenes where there is no camera motion.
>
> With fast rotation or linear motion, the history pixel (the sample retrieved from the frame history buffer after pixel reprojection) may correspond to a location with vastly different lighting conditions or to an entirely separate object. This history mismatch, if unaddressed, causes severe ghosting.
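The resolve step described above can be sketched in 1D greyscale for clarity. Clamping the reprojected history sample to the min/max of the current pixel’s neighbourhood is one common anti-ghosting remedy (not necessarily the exact one the article uses); the 0.9 blend weight and 1-pixel neighbourhood here are illustrative choices.

```python
# Minimal 1D TAA resolve sketch: reproject via the velocity buffer,
# clamp the history sample to the current neighbourhood, then blend.

def taa_resolve(current, history, velocity, history_weight=0.9):
    out = []
    n = len(current)
    for i in range(n):
        # Reproject: follow the per-pixel velocity back into the history buffer.
        j = min(n - 1, max(0, i - velocity[i]))
        hist = history[j]
        # Neighbourhood clamp: a history sample far outside the current
        # neighbourhood is probably a mismatch; clamping suppresses ghosts.
        neighbourhood = current[max(0, i - 1):i + 2]
        hist = min(max(neighbourhood), max(min(neighbourhood), hist))
        # Exponential blend of clamped history and current sample.
        out.append(history_weight * hist + (1 - history_weight) * current[i])
    return out

# A bright object that was at index 2 last frame and has moved to index 5:
history  = [0, 0, 255, 0, 0,   0, 0, 0]
current  = [0, 0,   0, 0, 0, 255, 0, 0]
velocity = [0, 0,   0, 0, 0,   3, 0, 0]  # pixel 5 moved +3 since last frame
resolved = taa_resolve(current, history, velocity)
```

Without the clamp, the stale bright sample at index 2 would leak 90% of its intensity into the output and trail behind the object as a ghost; with it, index 2 resolves to 0 while the reprojected sample at index 5 still contributes its full history.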
Dacein
> Dacein is an experimental creative coding IDE combining a few different ideas that I’ve been thinking about: functional creative coding library, time travel abilities, livecoding editor, direct manipulation.
This is How Photorealistic Video Game Engines Are Now
> To prepare for the project, Quixel spent a month in cold and wet locations in Iceland, scanning all kinds of objects found in the natural environment. The team returned with over 1,000 scans that captured the details of the landscape.
This looks great, but no mention of hardware or frame rates? Not sure when you’ll be seeing this on your desktop.
What every coder should know about gamma
> Given that vision is arguably the most important sensory input channel for human-computer interaction, it is quite surprising that gamma correction is one of the least talked about subjects among programmers and it’s mentioned in technical literature rather infrequently, including computer graphics texts. The fact that most computer graphics textbooks don’t explicitly mention the importance of correct gamma handling, or discuss it in practical terms, does not help matters at all (my CG textbook from uni falls squarely into this category, I’ve just checked). Some books mention gamma correction in passing in somewhat vague and abstract terms, but then provide neither concrete real-world examples on how to do it properly, nor explain what the implications of not doing it properly are, nor show image examples of incorrect gamma handling.
> I came across the need for correct gamma handling while writing my ray tracer, and I had to admit that my understanding of the topic was rather superficial and incomplete. So I spent a few days reading up on it online, but it turned out that many articles about gamma are not much help either: many of them are too abstract and confusing, some contain too many interesting but otherwise irrelevant details, and others lack image examples or are simply incorrect or hard to understand. Gamma is not a terribly difficult concept to begin with, but for some mysterious reason it’s not that trivial to find articles on it that are correct, complete, and explain the topic in clear language.
Includes a link to the classic, Gamma error in picture scaling.
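The scaling error that classic describes can be reproduced in a few lines: averaging pixel values in encoded sRGB versus averaging in linear light. The transfer functions below are the standard sRGB ones; only the sample values are made up.

```python
# Gamma-incorrect vs gamma-correct averaging, using the standard
# sRGB transfer functions (values are in the [0, 1] range).

def srgb_to_linear(c):
    """Decode an encoded sRGB value to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light value back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Naive: average the encoded values directly. This is what incorrect
# scaling code does, and the resulting mid-grey is far too dark.
naive = (black + white) / 2

# Correct: decode to linear light, average, re-encode.
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
```

Downscaling an image of alternating black and white pixels makes the bug visible: the naive average gives encoded 0.5, while the physically correct average of the light emitted is roughly 0.735 encoded — a plainly brighter grey.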
Fyne - Cross platform GUI in Go based on Material Design
> Fyne is an easy to use UI toolkit and app API written in Go. We use OpenGL (through the go-gl and go-glfw projects) to provide cross platform graphics.
> The 1.0 release is now out and we encourage feedback and requests for the next major release :).