SAT solver on top of regex matcher

https://yurichev.com/news/20200621_regex_SAT/ [yurichev.com]

2020-07-08 00:05

tags:
compsci
programming
text

> A SAT problem is an NP-problem, while regex matching is not. However, a quite popular regex ‘backreferences’ extension extends regex matching to a (hard) NP-problem.
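
A classic, easy-to-run illustration of that extra power (my example, not from the linked post): a single backreference already lets a pattern recognize a non-regular language.

```python
import re

# A taste of why backreferences break the "regex matching is easy" rule
# (classic folklore example, not from the linked post): with n written in
# unary, this pattern matches exactly the composite numbers.  The group
# guesses a divisor d >= 2 and `\1+` demands the rest of the string be one
# or more further copies of it -- the matcher is effectively searching for
# a factorization by backtracking.
COMPOSITE = re.compile(r"^(11+?)\1+$")

def is_prime(n: int) -> bool:
    """Primality by regex backtracking: cute, exponential, not for real use."""
    return n >= 2 and not COMPOSITE.match("1" * n)
```

The SAT encoding in the post works the same backtracking muscle much harder.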

source: trivium

xi-editor retrospective

https://raphlinus.github.io/xi/2020/06/27/xi-retrospective.html [raphlinus.github.io]

2020-07-01 00:55

tags:
compsci
concurrency
development
programming
rust
swtools
text

> I still believe it would be possible to build a high quality editor based on the original design. But I also believe that this would be quite a complex system, and require significantly more work than necessary.

A few good ideas and observations could be mined out of this post.

source: L

Discovering Dennis Ritchie’s Lost Dissertation

https://computerhistory.org/blog/discovering-dennis-ritchies-lost-dissertation/ [computerhistory.org]

2020-06-20 23:21

tags:
academia
compsci
retro

> It may come as some surprise to learn that until just this moment, despite Ritchie’s much-deserved computing fame, his dissertation—the intellectual and biographical fork-in-the-road separating an academic career in computer science from the one at Bell Labs leading to C and Unix—was lost. Lost? Yes, very much so in being both unpublished and absent from any public collection; not even an entry for it can be found in Harvard’s library catalog nor in dissertation databases.

How Much of a Genius-Level Move Was Using Binary Space Partitioning in Doom?

https://twobithistory.org/2019/11/06/doom-bsp.html [twobithistory.org]

2020-03-21 20:52

tags:
article
compsci
graphics
programming

> A decade after Doom’s release, in 2003, journalist David Kushner published a book about id Software called Masters of Doom, which has since become the canonical account of Doom’s creation. I read Masters of Doom a few years ago and don’t remember much of it now, but there was one story in the book about lead programmer John Carmack that has stuck with me. This is a loose gloss of the story (see below for the full details), but essentially, early in the development of Doom, Carmack realized that the 3D renderer he had written for the game slowed to a crawl when trying to render certain levels. This was unacceptable, because Doom was supposed to be action-packed and frenetic. So Carmack, realizing the problem with his renderer was fundamental enough that he would need to find a better rendering algorithm, started reading research papers. He eventually implemented a technique called “binary space partitioning,” never before used in a video game, that dramatically sped up the Doom engine.

History of research into BSP.
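
For a feel of what the technique buys you, here is a minimal sketch of BSP back-to-front traversal (my own toy version, nothing like Doom's actual engine): because every node's split line partitions space, one precomputed tree yields correct painter's-algorithm order for any viewer position, with no per-frame sorting.

```python
from dataclasses import dataclass

@dataclass
class Node:
    # Split line a*x + b*y = c; walls lying on that line live at this node.
    a: float
    b: float
    c: float
    walls: list
    front: "Node | None" = None   # half-space where a*x + b*y > c
    back: "Node | None" = None    # half-space where a*x + b*y < c

def draw_back_to_front(node, viewer, draw):
    """Visit the half-space far from the viewer first, then the node's own
    walls, then the near half-space: farthest geometry is drawn first."""
    if node is None:
        return
    vx, vy = viewer
    viewer_in_front = node.a * vx + node.b * vy > node.c
    far, near = (node.back, node.front) if viewer_in_front else (node.front, node.back)
    draw_back_to_front(far, viewer, draw)
    for wall in node.walls:
        draw(wall)
    draw_back_to_front(near, viewer, draw)
```

Moving the viewer to the other side of a split simply swaps which subtree is visited first.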

Landmark Computer Science Proof Cascades Through Physics and Math

https://www.quantamagazine.org/landmark-computer-science-proof-cascades-through-physics-and-math-20200304/ [www.quantamagazine.org]

2020-03-08 04:00

tags:
compsci
math
paper
quantum

> Computer scientists established a new boundary on computationally verifiable knowledge. In doing so, they solved major open problems in quantum mechanics and pure mathematics.

source: green

What is the random oracle model and why should you care?

https://blog.cryptographyengineering.com/2020/01/05/what-is-the-random-oracle-model-and-why-should-you-care-part-5/ [blog.cryptographyengineering.com]

2020-01-16 04:14

tags:
compsci
crypto
hash
security

> About eight years ago I set out to write a very informal piece on a specific cryptographic modeling technique called the “random oracle model”. This was way back in the good old days of 2011, which was a more innocent and gentle era of cryptography. Back then nobody foresaw that all of our standard cryptography would turn out to be riddled with bugs; you didn’t have to be reminded that “crypto means cryptography”. People even used Bitcoin to actually buy things.

> That first random oracle post somehow sprouted three sequels, each more ridiculous than the last. I guess at some point I got embarrassed about the whole thing — it’s pretty cheesy, to be honest — so I kind of abandoned it unfinished. And that’s been a major source of regret for me, since I had always planned a fifth, and final post, to cap the whole messy thing off. This was going to be the best of the bunch: the one I wanted to write all along.

source: green

A brief history of liquid computers

https://royalsocietypublishing.org/doi/full/10.1098/rstb.2018.0372 [royalsocietypublishing.org]

2020-01-01 23:39

tags:
compsci
hardware
paper
retro

> A substrate does not have to be solid to compute. It is possible to make a computer purely from a liquid. I demonstrate this using a variety of experimental prototypes where a liquid carries signals, actuates mechanical computing devices and hosts chemical reactions. We show hydraulic mathematical machines that compute functions based on mass transfer analogies. I discuss several prototypes of computing devices that employ fluid flows and jets. They are fluid mappers, where the fluid flow explores a geometrically constrained space to find an optimal way around, e.g. the shortest path in a maze, and fluid logic devices where fluid jet streams interact at the junctions of inlets and results of the computation are represented by fluid jets at selected outlets. Fluid mappers and fluidic logic devices compute continuously valued functions albeit discretized. There is also an opportunity to do discrete operation directly by representing information by droplets and liquid marbles (droplets coated by hydrophobic powder). There, computation is implemented at the sites, in time and space, where droplets collide one with another. The liquid computers mentioned above use liquid as signal carrier or actuator: the exact nature of the liquid is not that important. What is inside the liquid becomes crucial when reaction–diffusion liquid-phase computing devices come into play: there, the liquid hosts families of chemical species that interact with each other in a massive-parallel fashion. I shall illustrate a range of computational tasks, including computational geometry, implementable by excitation wave fronts in nonlinear active chemical medium. The overview will enable scientists and engineers to understand how vast is the variety of liquid computers and will inspire them to design their own experimental laboratory prototypes.

source: L

Xor Filters: Faster and Smaller Than Bloom Filters

https://lemire.me/blog/2019/12/19/xor-filters-faster-and-smaller-than-bloom-filters/ [lemire.me]

2019-12-20 23:46

tags:
compsci
hash
paper
programming

> Among other alternatives, Fan et al. introduced Cuckoo filters which use less space and are faster than Bloom filters. While implementing a Bloom filter is a relatively simple exercise, Cuckoo filters require a bit more engineering.

> Could we do even better while limiting the code to something you can hold in your head?

> It turns out that you can with xor filters. We just published a paper called Xor Filters: Faster and Smaller Than Bloom and Cuckoo Filters that will appear in the Journal of Experimental Algorithmics.
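
As a point of reference for that comparison, here is the "relatively simple exercise" end of the spectrum, a minimal Bloom filter (my sketch; the xor filter itself needs a peeling-based construction, which is in the paper).

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k bit positions per key, carved out of one
    blake2b digest.  False positives are possible; false negatives are not."""

    def __init__(self, m_bits: int = 1 << 16, k: int = 7):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, key: str):
        # Derive k independent-ish positions from slices of one digest.
        d = hashlib.blake2b(key.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(d[4 * i:4 * i + 4], "little") % self.m

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))
```

The xor filter replaces the k set bits per key with three table slots whose xor reconstructs a fingerprint, which is where the space saving comes from.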

source: L

Introducing Glush: a robust, human readable, top-down parser compiler

https://www.sanity.io/blog/why-we-wrote-yet-another-parser-compiler [www.sanity.io]

2019-12-18 17:54

tags:
compiler
compsci
programming
release
swtools
text

> It’s been 45 years since Stephen Johnson wrote Yacc (Yet another compiler-compiler), a parser generator that made it possible for anyone to write fast, efficient parsers. Yacc, and its many derivatives, quickly became popular and were included in many Unix distributions. You would imagine that in 45 years we would have further perfected the art of creating parsers and would have standardized on a single tool. A lot of progress has been made, but there are still annoyances and problems affecting every tool out there.

This is great, even just for the overview of parsing.

> The CYK algorithm (named after Cocke–Younger–Kasami) is in my opinion of great theoretical importance when it comes to parsing context-free grammars. CYK will parse all context-free grammars in O(n³), including the “simple” grammars that LL/LR can parse in linear time. It accomplishes this by converting parsing into a different problem: CYK shows that parsing context-free languages is equivalent to doing a boolean matrix multiplication. Matrix multiplication can be done naively in cubic time, and as such parsing context-free languages can be done in cubic time. It’s a very satisfying theoretical result, and the actual algorithm is small and easy to understand.
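
The algorithm really is small; a minimal CYK sketch for a grammar in Chomsky normal form (toy grammar and code are mine, not the post's):

```python
def cyk(word, terminal_rules, binary_rules, start="S"):
    """table[i][j] = set of nonterminals deriving the substring of
    length j+1 starting at position i.  Three nested loops -> O(n^3)."""
    n = len(word)
    if n == 0:
        return False
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):                 # length-1 substrings
        table[i][0] = {lhs for lhs, t in terminal_rules if t == ch}
    for length in range(2, n + 1):                # substring length
        for i in range(n - length + 1):           # start position
            for split in range(1, length):        # left-part length
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for lhs, (b, c) in binary_rules:
                    if b in left and c in right:
                        table[i][length - 1].add(lhs)
    return start in table[0][n - 1]

# Toy CNF grammar for { a^n b^n : n >= 1 }:
TERMINALS = [("A", "a"), ("B", "b")]
BINARIES = [("S", ("A", "B")), ("S", ("A", "T")), ("T", ("S", "B"))]
```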

source: trivium

Isogeny crypto

https://ellipticnews.wordpress.com/2019/11/09/isogeny-crypto/ [ellipticnews.wordpress.com]

2019-11-11 05:24

tags:
bugfix
compsci
crypto
math
security

> Rolling forward 15 years, isogeny-based cryptography is another area with many technical subtleties, but is moving into the mainstream of cryptography. Once again, not everything that can be done with discrete logarithms can necessarily be done with isogenies. It is therefore not surprising to find papers that have issues with their security.

> It is probably time for an Isogenies for Cryptographers paper, but I don’t have time to write it. Instead, in this blog post I will mention several recent examples of incorrect papers. My hope is that these examples are instructional and will help prevent future mistakes. My intention is not to bring shame upon the authors.

source: green

History of Information

http://www.historyofinformation.com/ [www.historyofinformation.com]

2019-11-08 20:51

tags:
compsci
history
language
reference
tech

Lots of little facts organized in various ways.

source: grugq

Research based on the .NET Runtime

http://www.mattwarren.org/2019/10/25/Research-based-on-the-.NET-Runtime/ [www.mattwarren.org]

2019-10-27 04:41

tags:
compsci
dotnet
jit
links
paper

> Over the last few years, I’ve come across more and more research papers based, in some way, on the ‘Common Language Runtime’ (CLR). So armed with Google Scholar and ably assisted by Semantic Scholar, I put together the list below.

Factorio: New pathfinding algorithm

https://factorio.com/blog/post/fff-317 [factorio.com]

2019-10-19 02:50

tags:
compsci
gaming
visualization

> A simple choice for this function is simply the straight-line distance from the node to the goal position – this is what we have been using in Factorio since forever, and it’s what makes the algorithm initially go straight. It’s not the only choice, however – if the heuristic function knew about some of the obstacles, it could steer the algorithm around them, which would result in a faster search, since it wouldn’t have to explore extra nodes. Obviously, the smarter the heuristic, the more difficult it is to implement.

> The simple straight-line heuristic function is fine for pathfinding over relatively short distances. This was okay in past versions of Factorio – about the only long distance pathfinding was done by biters made angry by pollution, and that doesn’t happen very often, relatively speaking. These days, however, we have artillery. Artillery can easily shoot – and aggro – massive numbers of biters on the far end of a large lake, who will then all try to pathfind around the lake. The video below shows what it looks like when the simple A* algorithm we’ve been using until now tries to go around a lake.
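
The baseline the post starts from looks roughly like this (an illustrative grid A* with the straight-line heuristic, not Factorio's code); the heuristic is admissible but blind to obstacles, which is exactly the around-the-lake failure mode described above.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells equal to 1 are impassable."""
    def h(p):  # straight-line (Euclidean) distance to the goal
        return ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5

    tie = count()  # tie-breaker so the heap never compares node tuples with None
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}                      # node -> predecessor, set on first pop
    best_g = {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                    # already expanded via a cheaper path
        came_from[node] = parent
        if node == goal:                # walk predecessors back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, node))
    return None                         # goal unreachable
```

A smarter heuristic only changes `h`; the search loop stays identical.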

source: L

A Couple of (Probabilistic) Worst-case Bounds for Robin Hood Linear Probing

https://www.pvk.ca/Blog/2019/09/29/a-couple-of-probabilistic-worst-case-bounds-for-robin-hood-linear-probing/ [www.pvk.ca]

2019-09-30 03:59

tags:
compsci
hash
perf

> I like to think of Robin Hood hash tables with linear probing as arrays sorted on uniformly distributed keys, with gaps. That makes it clearer that we can use these tables to implement algorithms based on merging sorted streams in bulk, as well as ones that rely on fast point lookups. A key question with randomised data structures is how badly we can expect them to perform.
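
The "sorted array with gaps" picture corresponds to one small insertion rule (a sketch of Robin Hood linear probing, mine rather than the post's): whenever the resident of a slot is closer to its home bucket than the key being inserted, the two swap, which keeps entries ordered by home bucket.

```python
def rh_insert(table, key, hashfn):
    """Insert `key` into an open-addressed `table` (None = empty slot)
    using Robin Hood linear probing: the probing key evicts any resident
    that is 'richer' (displaced less far from its home slot)."""
    n = len(table)
    idx, dist = hashfn(key) % n, 0
    while True:
        resident = table[idx]
        if resident is None:
            table[idx] = key
            return
        resident_dist = (idx - hashfn(resident)) % n
        if resident_dist < dist:          # resident is richer: take its slot
            table[idx], key = key, resident
            dist = resident_dist
        idx = (idx + 1) % n
        dist += 1
```

With a hash that maps colliding keys to the same bucket, the table ends up looking like the sorted-with-gaps array the quote describes.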

source: L

Visual Information Theory

https://colah.github.io/posts/2015-09-Visual-Information/ [colah.github.io]

2019-09-24 23:09

tags:
article
compsci
ideas
math
random
visualization

> Information theory gives us precise language for describing a lot of things. How uncertain am I? How much does knowing the answer to question A tell me about the answer to question B? How similar is one set of beliefs to another? I’ve had informal versions of these ideas since I was a young child, but information theory crystallizes them into precise, powerful ideas. These ideas have an enormous variety of applications, from the compression of data, to quantum physics, to machine learning, and vast fields in between.

> Unfortunately, information theory can seem kind of intimidating. I don’t think there’s any reason it should be. In fact, many core ideas can be explained completely visually!
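
The "precise language" is compact enough to state in a few lines (standard definitions, nothing specific to the post): entropy answers "how uncertain am I?" and KL divergence answers "how similar is one set of beliefs to another?".

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D(p || q): expected extra bits paid for encoding p-distributed
    data with a code optimized for q."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```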

source: E

The secret-sharer: evaluating and testing unintended memorization in neural networks

https://blog.acolyer.org/2019/09/23/the-secret-sharer/ [blog.acolyer.org]

2019-09-24 02:04

tags:
ai
compsci
language
opsec
paper

> This is a really important paper for anyone working with language or generative models, and just in general for anyone interested in understanding some of the broader implications and possible unintended consequences of deep learning. There’s also a lovely sense of the human drama accompanying the discoveries that just creeps through around the edges.

> Disclosure of secrets is of particular concern in neural network models that classify or predict sequences of natural language text… even if sensitive or private training data text is very rare, one should assume that well-trained models have paid attention to its precise details…. The users of such models may discover— either by accident or on purpose— that entering certain text prefixes causes the models to output surprisingly revealing text completions.

The Synchronization of Periodic Routing Messages

https://www.icir.org/floyd/papers/sync_94.pdf [www.icir.org]

2019-09-06 22:05

tags:
compsci
networking
paper
pdf
perf
random
systems

> The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertently become synchronized. In particular, we study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization. Using simulations and analysis, we study the process of synchronization and show that the transition from unsynchronized to synchronized traffic is not one of gradual degradation but is instead a very abrupt ‘phase transition’: in general, the addition of a single router will convert a completely unsynchronized traffic stream into a completely synchronized one. We show that synchronization can be avoided by the addition of randomization to the traffic sources and quantify how much randomization is necessary. In addition, we argue that the inadvertent synchronization of periodic processes is likely to become an increasing problem in computer networks.
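
In practice the fix looks like timer jitter; a sketch of one common scheme (the interval bounds here are illustrative, while the paper's contribution is quantifying how much randomization is actually needed):

```python
import random

def next_interval(base_period: float, jitter: float = 0.5) -> float:
    """Instead of firing every `base_period` seconds exactly, draw each
    interval uniformly from [base*(1-jitter), base*(1+jitter)] so that
    independent periodic senders cannot phase-lock."""
    return base_period * random.uniform(1.0 - jitter, 1.0 + jitter)
```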

Chopping substrings

http://blogs.perl.org/users/damian_conway/2019/07/chopping-substrings.html [blogs.perl.org]

2019-07-30 19:20

tags:
compsci
perf
perl
programming

> The “best practice” solution is a technique known as suffix trees, which requires some moderately complex coding. However, we can get very reasonable performance for strings up to hundreds of thousands of characters long using a much simpler approach.

source: HN

Decades-Old Computer Science Conjecture Solved in Two Pages

https://www.quantamagazine.org/mathematician-solves-computer-science-conjecture-in-two-pages-20190725/ [www.quantamagazine.org]

2019-07-26 03:04

tags:
compsci
math
paper

> “This conjecture has stood as one of the most frustrating and embarrassing open problems in all of combinatorics and theoretical computer science,” wrote Scott Aaronson of the University of Texas, Austin, in a blog post. “The list of people who tried to solve it and failed is like a who’s who of discrete math and theoretical computer science,” he added in an email.

> Over the years, computer scientists have developed many ways to measure the complexity of a given Boolean function. Each measure captures a different aspect of how the information in the input string determines the output bit. For instance, the “sensitivity” of a Boolean function tracks, roughly speaking, the likelihood that flipping a single input bit will alter the output bit. And “query complexity” calculates how many input bits you have to ask about before you can be sure of the output.

Also: https://www.scottaaronson.com/blog/?p=4229

Paper: https://arxiv.org/abs/1907.00847

> Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture
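
The "sensitivity" measure from the quote is simple enough to compute by brute force for small n (a toy illustration of the definition, obviously nothing to do with the proof):

```python
from itertools import product

def sensitivity(f, n):
    """Sensitivity of Boolean function f on n bits: the maximum, over all
    inputs x, of the number of single-bit flips that change f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for i in range(n)
        )
        best = max(best, flips)
    return best
```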

Models of Generics and Metaprogramming: Go, Rust, Swift, D and More

http://thume.ca/2019/07/14/a-tour-of-metaprogramming-models-for-generics/ [thume.ca]

2019-07-21 22:14

tags:
article
compiler
compsci
programming
type-system

> In some domains of programming it’s common to want to write a data structure or algorithm that can work with elements of many different types, such as a generic list or a sorting algorithm that only needs a comparison function. Different programming languages have come up with all sorts of solutions to this problem: From just pointing people to existing general features that can be useful for the purpose (e.g C, Go) to generics systems so powerful they become Turing-complete (e.g. Rust, C++). In this post I’m going to take you on a tour of the generics systems in many different languages and how they are implemented. I’ll start from how languages without a special generics system like C solve the problem and then I’ll show how gradually adding extensions in different directions leads to the systems found in other languages.
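
The starting point of that tour, "just use existing general features", amounts to passing the type-specific behavior in as a value; a one-function sketch of that model (mine, not the article's), in the spirit of C's qsort:

```python
def generic_max(items, less):
    """A 'generic' algorithm without a generics system: the element type
    never appears, and the caller supplies the one operation (a
    comparison) the algorithm needs."""
    best = items[0]
    for item in items[1:]:
        if less(best, item):
            best = item
    return best
```

The rest of the article is about what you gain (safety, speed, ergonomics) by moving that function argument into the type system.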

source: L