Dr. Dobb's is part of the Informa Tech Division of Informa PLC




Social Processes And Heartbleed, Part 2



rl3551

Performance, and low-level memory access for hardware, still matter; and as Heartbleed showed, buffer overruns are only one type of error, so managed strings don't fix everything. Bounds checking? I have one program that runs at one-third speed, a huge slowdown, from just one extra careless 'if' conditional in the code's tight loop. That said, liberal checking goes on in debug builds.

rl3551

No reason to suppose that. On one large project I worked on in the '80s, in "standard" Pascal, every string had to be followed by a separate length parameter; we couldn't use "records" (structs) because, as a matter of policy, every API had to be callable from Fortran. A friend wrote a Pascal parser and used the code base to test his work; then, on a whim, he added a heuristic check on all those string lengths. He found hundreds of inconsistencies and bugs, mostly previously unknown. Replicating information is just bad design in general, and it's easy to swap two length variables when you're juggling many of them. That's one reason I really liked Andrew's point about using unsigned for loop counters.

Andrew Koenig

If this particular operating system supported threads (which it didn't), it would have been easy to solve that problem by having a separate buffer per open file, because two threads reading from the same file without locking would cause chaos anyway.

Allen Holub

Then there's the meta question: why program at all in a language that allows string-related buffer overruns? :-). There are alternatives.

PaulBuis

Keeping a separate buffer per input stream should fix the race-condition problem, but it would exacerbate the performance issues.
My old non-thread-safe "readline" function used megabyte-sized buffers and punted by returning NULL for oversized lines, which avoided the denial-of-service problem. That was only acceptable because no legitimate input was that big for the programs we were working on back then.

lbenini

I think the phrase "Any part of a program that reads variable-length input should be responsible for allocating enough memory to contain that input." is quite misleading, especially for an exposed service. In the case of Heartbleed, growing the buffer to match the input would be an invitation to denial of service by eating up all available memory.
Edit: of course it's possible to avoid this situation with a "reading limit", but that brings the point back to the fixed-size buffer.

slkpg

A bit off the current subject, but in line with the title of the article: I wonder whether anyone has considered the possibility that Heartbleed was intentional. Either way, it shows just how easy it would be to maliciously corrupt open source code.

SilentInfidel

I guess this readline() function was written quite a few years ago, but even so, it is not a good idea to replace a fairly repeatable bug (a fixed-size buffer overflow) with a far less predictable one (race conditions caused by multiple threads calling readline() simultaneously).