The cycle in which ideas turn into software is getting shorter and shorter. By and large, this is a good thing: new features are delivered to users faster than ever before. But one consequence is that software bugs are introduced and sometimes missed. I suspect part of the reason is that testing cycles are being squeezed, and I think this is part of the root cause of how a two-year-old bug found its way into Linux.
Researcher Chris Coulson found the bug, which can allow a malicious actor to write to memory on a target system using a specially crafted TCP payload that exploits a flaw in systemd.
He traced the flaw back to a specific developer.
According to a report at ITWire, patches for Ubuntu have been issued while Debian may still be vulnerable. Red Hat says Red Hat Enterprise Linux 7 is not affected.
It’s a good thing this was detected and fixed. But I remain concerned about how these flaws get introduced and committed to public codebases. I get that software is complex and that testing is challenging.
Is there a way to solve this challenge? Does the way we create software need to change? Or are we stuck with these sorts of issues?