hacker news with inline top comments    .. more ..    30 May 2016 News
The SQL filter clause: selective aggregates e.g., to pivot data modern-sql.com
36 points by MarkusWinand  1 hour ago   4 comments top 3
MichaelBurge 13 minutes ago 0 replies      
At an old job, another person and I wrote an in-house SQL-like dialect that compiled down to accesses of our C++ data structures. It was kind of nice just throwing whatever junk you wanted into your SQL without having to worry about any standards.

Want a filter clause? Got it.

Need a weighted distinct count? Now it's part of the language.

Also, you can rephrase a lot of these SQL features as subqueries. It's surprising how many database bugs you can find when you do it. Not so much in Postgres, but I probably found a dozen in Redshift. I mean rephrasing this:

 select sum(x) filter (where x < 5), sum(x) filter (where x < 7) from generate_series(1,10,1) s(x) ;
as this (where t is a permanent table, view, or CTE, as appropriate):

 with t as ( select * from generate_series(1,10,1) s(x) ) select (select sum(x) from t where x < 5), (select sum(x) from t where x < 7) ;
Though for Postgres the filter will likely have better performance.

codegeek 4 minutes ago 0 replies      
"with" is not supported by mysql but sqlite supports it ? wow, didn't know that.
Xophmeister 24 minutes ago 1 reply      
Syntactic sugar that's only natively supported in PostgreSQL? The site's banner says, "A lot has changed since SQL-92". Be that as it may, it seems no one has really bothered catching up. I wonder why that is...

My guess is that such extensions, while useful, are somewhat marginalised features in terms of usage. Thus, no one ever learns them formally; people just Google for what they need -- if it comes up -- and get the CASE solution, in this case (pun unintentional), hence perpetuating that pattern. Also, of course, the CASE solution is a lot more powerful, as the returned expression that gets fed into the aggregate function can be basically anything.
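As a concrete illustration of the CASE alternative discussed above, here is a sketch run through SQLite from Python (SQLite is used only because it's easy to embed, and the table and values mirror the generate_series example; this is not from the article itself):

```python
import sqlite3

# Portable CASE spelling of the FILTER query from the article:
#   SELECT SUM(x) FILTER (WHERE x < 5), SUM(x) FILTER (WHERE x < 7) ...
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE s(x INTEGER)")
con.executemany("INSERT INTO s VALUES (?)", [(i,) for i in range(1, 11)])

row = con.execute(
    "SELECT SUM(CASE WHEN x < 5 THEN x END), "
    "       SUM(CASE WHEN x < 7 THEN x END) "
    "FROM s"
).fetchone()
print(row)  # (10, 21): 1+2+3+4 = 10 and 1+...+6 = 21
```

CASE without an ELSE yields NULL for non-matching rows, and SUM ignores NULLs, which is exactly why this rewrite is equivalent to FILTER.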

GE's Walking Truck educatedearth.net
37 points by YeGoblynQueenne  1 hour ago   1 comment top
nxzero 0 minutes ago 0 replies      
Just a heads-up that a quick Google search returns more videos, Wikipedia entries, etc. for this robot:


Making an RPG in Clojure (2010) briancarper.net
11 points by arm  17 minutes ago   discuss
Visualize the orbits of 288 exoplanets nbremer.github.io
18 points by gkst  1 hour ago   2 comments top 2
tensafefrogs 4 minutes ago 0 replies      
Here's a different version with source code, not sure if it's the same planets:


philipov 35 minutes ago 0 replies      
I am using Chrome. All the planets with orbits longer than 20-60 days are outside the default zoom level. To see them, it's necessary to alt-scrollwheel to zoom out, reducing the UI to an unreadable size, find one of the planets at pixel size, mouse over it to create a tooltip, zoom back in, and then use the scroll bar to scroll back to the tooltip. Maybe it would be better to just make the entire field scrollable without resorting to stupid tricks?
How Crowdsourcing Turned on Me nautil.us
42 points by grkvlt  2 hours ago   17 comments top 5
biot 1 hour ago 1 reply      
It would have been interesting to add a validation layer, letting the crowd evaluate each other's moves and creating a virtual trust score for each player. Those with high trust scores get to meta-moderate others' moves before those moves can be applied to the board. You can also try to catch trolls here by presenting them with a known-good matching; the system blackholes those who reject it. Similarly, if their overall trust score dips below a certain threshold, whether due to poor skill or malicious activity, they get blackholed too, and any subsequent change they make to the board is only visible to them privately, though they think they're causing havoc.

Combine a system like that with a requirement that N (some reasonable number of randomly chosen people) must agree on combining or splitting pieces before the change is applied to the master board. And add the ability for the project admins to scan the board and lock in place obvious good solutions. That might work.
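A minimal sketch of that trust-score idea in Python — every name, threshold, and update rule here is invented purely for illustration; a real system would tune them empirically:

```python
# Hypothetical sketch of the scheme above: honeypot checks (known-good
# matchings) adjust trust, low-trust players are blackholed, and a move
# needs N valid approvals before reaching the master board.
TRUST_THRESHOLD = 0.25
REQUIRED_APPROVALS = 3

class Player:
    def __init__(self, name):
        self.name = name
        self.trust = 0.5          # everyone starts with middling trust
        self.blackholed = False

    def grade_honeypot(self, approved_known_good):
        # Approving a known-good matching earns trust; rejecting it costs more.
        self.trust += 0.1 if approved_known_good else -0.2
        self.trust = max(0.0, min(1.0, self.trust))
        if self.trust < TRUST_THRESHOLD:
            self.blackholed = True  # future moves become private no-ops

def move_accepted(approvers):
    # Only non-blackholed approvals count toward the N-agreement rule.
    return sum(not p.blackholed for p in approvers) >= REQUIRED_APPROVALS

troll = Player("troll")
troll.grade_honeypot(False)   # trust 0.5 -> 0.3, still above threshold
troll.grade_honeypot(False)   # trust 0.3 -> 0.1, now blackholed
honest = [Player(f"p{i}") for i in range(3)]
print(troll.blackholed, move_accepted(honest + [troll]))  # True True
```

The troll's later "approvals" silently stop counting, which is the shadow-ban effect the comment describes.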

DanBC 2 hours ago 2 replies      
This story is interesting because of the hate generated in a small number of people around the perceived cheating.

Or maybe anonymous crowds will always contain destructive assholes.

People might be interested in the previous discussion of this article (only 25 comments) https://news.ycombinator.com/item?id=8499452

Here's a different post about the 2011 challenge, with some interesting comments: https://news.ycombinator.com/item?id=9021383

And here's another one with 50 comments: https://news.ycombinator.com/item?id=3164466

metafunctor 1 hour ago 0 replies      
Most crowdsourcing systems hinge on trust. Too much trust, and exploitation and sabotage will occur. Too little trust, and nobody gets excited.

You cannot just give trust away like hugs. Trust must be earned. Trust is a social construct, and if that's not built into your system, your system is bound to fail.

Almost everyone will be honest and genuine, but that's not enough. It takes just one asshole in a thousand to burn down the building, if they have nothing to lose.

kriro 1 hour ago 1 reply      
The shredding sabotage seems very interesting. There is probably a lot of unexplored space in creating resilient (self-repairing) crowds and/or in the tradeoff between opening up the crowdsourcing process and security/safety concerns.
realworldview 1 hour ago 1 reply      
Unfortunately, I stopped reading after "...I swung open my laptop..." when it appeared to be heading in the direction of a script for Matthew Broderick or Keanu Reeves.
Deep API Learning arxiv.org
88 points by tonybeltramelli  5 hours ago   13 comments top 4
omarforgotpwd 4 hours ago 3 replies      
You may hate the tech industry for recklessly destroying other people's jobs, but just know that we're working hard to build a next generation AI that will destroy our jobs too.
simongray 43 minutes ago 1 reply      
This is a pretty cool research area, but isn't this implementation basically just a Javadoc search engine? A basic keyword search of the javadoc descriptions would return similar results in most cases, wouldn't it?
kilotaras 1 hour ago 0 replies      
Here's a direct link to demo[1]. Looks like it received a hug of death.

[1] http://bda-codehow.cloudapp.net:88/

daef 3 hours ago 1 reply      
.oO( we're good at writing code, but bad when it comes to reasoning about code, so why not write code to solve that problem... )
Using Neural Networks to Evaluate Handwritten Mathematical Expressions willforfang.com
89 points by strategy  5 hours ago   7 comments top 7
deGravity 2 hours ago 0 replies      
Check out Joseph LaViola's work at UCF (formerly from Brown). http://www.eecs.ucf.edu/isuelab/research/pen.php

The latest work, MathBoxes, uses the recognition engine from the starPAD SDK: http://graphics.cs.brown.edu/research/pcc/research.html#star...

It's a great toolkit for building pen-centric computing tools (especially math recognizers and tools), but unfortunately it is heavily tied to the old Windows 7 tablet APIs and so isn't easily generalized. I've been hoping to port it to newer hardware for the past few years, but have not yet found the time. If anyone wants to take on that project it would be incredibly useful (especially since there seems to be a resurgence of pen-centric computing on the near horizon).

You can find more pen-math work on Brown's website: http://cs.brown.edu/research/ptc/FluidMath.html

amelius 15 minutes ago 0 replies      
This looks interesting. But I was a bit disappointed by the amount of work needed to divide the job into subtasks (finding the characters, cropping them, etcetera). I was somehow hoping the ANN would take care of that as well.
nl 2 hours ago 0 replies      
Will someone please build this into a mobile (tablet!) first version of Jupyter?

Stop doing all these slightly-better-in-some-way-but-not-really things (Zeppelin, etc). You've lost.

But I'm so frustrated trying to use Jupyter on a tablet. The compute model is perfect for using my tablet, but the UI just doesn't work that well.

kjhljjg 3 hours ago 0 replies      
Superb! This is exactly the kind of thing computers should be doing for us. I would also like diagram recognition that turns hand-drawn graphs, networks, 3D surfaces, etc. into data structures that can be computed with. And can it recognize code and run it, please?
jamesk_au 2 hours ago 0 replies      
The free app Mathpix for iOS does this really well too: http://mathpix.com
plg 1 hour ago 0 replies      
github link to code?
pferdone 3 hours ago 0 replies      
Just phenomenally practical.
A New Threat Actor Targets UAE Dissidents citizenlab.org
117 points by subliminalpanda  6 hours ago   17 comments top 8
deanclatworthy 3 hours ago 0 replies      
This is an absolutely superb write-up. It's highly disconcerting to read the level of detail that malicious (and supposedly state-sponsored) actors go to when targeting specific individuals.
mouzogu 2 hours ago 1 reply      
Would it be fair to say that the Tor browser and Tails OS are being specifically targeted?

It seems to me that using these tools is enough to arouse suspicion, thereby having the opposite of the effect they are intended for.

So essentially, using Chrome on Windows, though perhaps less secure, makes you less likely to be targeted than using Tor on Windows or on Tails.

pjs_ 3 hours ago 0 replies      
The timing attacks on AV software are interesting. Didn't know that was possible. Why doesn't the cross-domain policy reject pings to localhost immediately?
nxzero 17 minutes ago 0 replies      
Anyone able to estimate the monetary value per person targeted to a state-level actor? Seems like there's no reason to believe a state-level attacker, if approached, would not buy intel from a criminal network of attackers, or that a foreign state-level attacker wouldn't leverage its own attack operations to gain and barter intel to other states.

Case in point: attribution based on the skill of these attacks doesn't dox the attacker, only the end result of their attacks. Meaning these may not have been sponsored attacks, but someone farming intel to capitalize on.

chinathrow 40 minutes ago 0 replies      
The table with the arrests is really troubling.
facepalm 3 hours ago 4 replies      
"When a user clicks on a URL shortened by Stealth Falcon operators, the site profiles the software on a users computer, perhaps for future exploitation, before redirecting the user to a benign website containing bait content."

But how?

iamsalman 4 hours ago 1 reply      
The link is down?
coderdude 1 hour ago 1 reply      
This reminds me of a recently declassified document from the CIA that discusses how to undermine organizations by being poor managers, employees, etc. It's difficult to search HN for this at this point because A) we always discuss that agency and B) our best search option gives us the option of the past month or the past year. Maybe someone has this bookmarked or downloaded. It came in PDF form.

This is happening to us here. I think they're doing it by using the "No true Scotsman" informal fallacy and other methods. We'd blame the hundreds of thousands of new users. I believe they've found a way to unwind this community. Nothing revolutionary will ever come from here (how could it?). I think the community has been compromised and that we won't find out until 50 years from now when most of us are dead.

This is how I feel after years of watching the community. I have no stake in it either way. I'm open to opposing views.

Hardiman I Exoskeleton cyberneticzoo.com
140 points by vmorgulis  10 hours ago   32 comments top 11
YeGoblynQueenne 1 hour ago 0 replies      
I see your powered exoskeleton, and raise you a walking truck:


fernly 9 hours ago 3 replies      
Made me think of Heinlein's story "Waldo" [1] which featured mechanical manipulators controlled by motions of the operator's hand and fingers. I thought, oho, this is where Heinlein got the idea! But no! "Waldo" was published in 1942. Perhaps inspiration ran the other way.

[1] https://en.wikipedia.org/wiki/Waldo_(short_story)

gene-h 9 hours ago 3 replies      
Part of the problem was (and still is) that it didn't have very many good use cases. For the problem of moving heavy stuff around, forklifts, cranes, and carts performed much better. Not to mention the potential for serious injury to the user if they make a mistake or fall over.
mooneater 9 hours ago 4 replies      
Why would they design it such that the human hand goes into the mechanized hand? If something causes a failure, say lifting something too heavy, the human hand is also injured. I would want to see a design that keeps the human operator safe. Same comment applies to many existing exoskeleton designs.
ekianjo 8 hours ago 0 replies      
Wait, was the Aliens exoskeleton used by Ripley and other Marines based on that prototype? It looks very similar in some of the concept art. See: http://www.mwctoys.com/images/review_loader_1.jpg
richforrester 8 hours ago 2 replies      
Looking at this, the one thing I see is a person "fighting" a machine that's encapsulating them.

If we're going to interpret and strengthen the moves we make, it might be smarter and safer to leave the interpretation of movement out of the suit. So wear a light, easy-to-change-into/out-of suit with sensors that controls a machine in front of you. This also saves space, as you won't have to make room for a pesky, fleshy, squishy human.

(Just thinking out loud here)

orjan 6 hours ago 0 replies      
That "Walking Truck" is very similar to Big Dog, at least in appearance.
Sami_Lehtinen 8 hours ago 0 replies      
[an error occurred while processing this directive] - Server getting overloaded? - A backup copy: https://archive.is/HhXbI
chx 4 hours ago 0 replies      
Exoskeletons will be huge, huge, huge as the baby boomer generation gets older and more frail. This is why Japan is already far ahead in this area.
kensai 8 hours ago 1 reply      
Would definitely not comb my hair or scratch my back with that thing... :p
robschia 5 hours ago 0 replies      
The HN hug of death strikes again
C++ for Games: Performance, Allocations and Data Locality ithare.com
109 points by ingve  11 hours ago   44 comments top 10
wmil 7 hours ago 7 replies      
> use ++it instead of it++

Has anyone actually benchmarked this recently? It's such a trivial optimization for a modern compiler.

I have trouble believing that they'll produce different code when the return value isn't used.

trimbo 7 hours ago 2 replies      
Mike Acton's "Data Oriented Design" talk from CppCon is great


Nican 6 hours ago 0 replies      
I also recommend watching https://www.youtube.com/watch?v=Nsf2_Au6KxU , made by an engineer at Valve.
lossolo 6 hours ago 0 replies      
For anyone interested in optimizing C++, I recommend Agner Fog's blog.


EDIT: typo

bogomipz 8 hours ago 3 replies      
Does anyone know who's behind this site/books? There's lots of great material here. I always look forward to an update. Does anyone know if these books are to be published?
qopp 9 hours ago 0 replies      
One comment: the dragging-a-slider-to-the-right math assumes all players that pay $25/month are playing the entire month non-stop. In reality it goes in waves. Using the "22.7 hours a week" figure from WoW, that's only about 0.13 utilization per user per month. Cloud computing capitalizes on this by renting per hour to capture the waves. This means a $25/month user is only costing around $2.50 for server usage, which seems right.
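For what it's worth, the 0.13 figure checks out as a utilization fraction (back-of-envelope only, using the 22.7 h/week number quoted above):

```python
hours_per_week = 22.7
played_per_month = hours_per_week * 52 / 12   # ~98.4 hours actually played
hours_in_month = 24 * 365 / 12                # ~730 wall-clock hours/month
utilization = played_per_month / hours_in_month
print(round(utilization, 2))  # 0.13
```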
tfigment 9 hours ago 0 replies      
I found this to be fairly interesting content. I knew most of it, so it was useful as a refresher, and one can argue it's not really game-specific, though it's certainly slanted in that direction. Anything with real-time performance considerations, like control/embedded-system programming, can also use the advice. I found Alexandrescu's Modern C++ Design valuable many years ago for similar advice. I used Loki and its allocators and smart-pointer strategies in a lot of code over the years due to the control over data locality.

I found the next posts/chapters on Marshalling and Encoding even more interesting, with comparisons and pros/cons of different encoding techniques. I was not aware of the FlatBuffers library; I see why the author likes it and may have to look into using it in future projects.

nice_byte 6 hours ago 4 replies      
Small question: I don't really see how switching to unordered_map solves the locality problem that the author mentioned... unordered_map most likely uses chaining, and even if we were to allocate a contiguous block of memory for each bucket (as opposed to having a linked list), we'd still have a locality issue.
shmerl 7 hours ago 1 reply      
Thanks, very interesting reading. It can be valuable not just for game development but for any case where performance is important.
partycoder 8 hours ago 0 replies      
I prefer this explanation from Herb Sutter, member of the ISO C++ committee: https://youtu.be/TJHgp1ugKGM?t=21m33s
What Happens to the Brain During Cognitive Dissonance? (2015) scientificamerican.com
61 points by brahmwg  8 hours ago   34 comments top 6
amasad 6 hours ago 3 replies      
Tolerating some amount of cognitive dissonance is one of the best things I've developed in myself recently. In coding, for example, I used to suffer mental anguish over seeing inconsistencies in codebases I was working on and had an OCD-like drive to fix them. Learning to live with those inconsistencies freed me up to work on what actually matters.

Furthermore, the unforgiving drive for consistency is a reason why people don't update their beliefs when new evidence comes to light. Consider Superforecasting[1] (a book about people with an unusual ability to forecast the future): the author says that one common trait among superforecasters is a larger capacity for tolerating cognitive dissonance. The drive to avoid cognitive dissonance shackles you to your existing beliefs (see confirmation bias).

[1]: http://www.amazon.com/Superforecasting-Science-Prediction-Ph...

ItWasAllWrong 3 hours ago 2 replies      
Sadly enough, cognitive dissonance is also one of the coping mechanisms that keeps people trapped in a cult. I was raised in a high-control religion/cult. I've been out for a while now, but I was in for 24 years. The cognitive dissonance was very high at times, and I'm not the only one [1]. Looking back at it, I almost can't believe it took so long to realize consciously what was happening to me.

It's a weird thing, alarm bells going off everywhere in your head, but you still tell yourself: It's alright, I'm ok this is what I have to do, it's the best life choice, it's not that bad, everyone else is wrong, ...

For me personally, the subconscious reason I acted that way is that I knew the repercussions if I tried to leave. My whole family, friends, everyone I cared for would start shunning me. I would have been kicked out of the house by my parents, completely on my own, no contact at all. That's a scary thought when you've been taught the world is a wicked and evil place. This year, the group has become even more aggressive when it comes to shunning, showing emotional propaganda videos at their conventions [2].

[1] https://www.reddit.com/r/exjw/search?q=cognitive+dissonance&...

[2] https://www.youtube.com/watch?v=qxDAY5lVwuI

ccvannorman 4 hours ago 0 replies      
A strikingly shallow and pointless article. TL;DR: we did some trivial, questionable, narrow tests, and it turns out cognitive dissonance is what you thought it was and can be thought of as OK / a survival instinct.
marlag 5 hours ago 1 reply      
I strive to have one idea in my head, because having one idea feels like being on a motorway, while visiting one of many ideas feels like being on a small road. When I'm coding in a new domain or field I am sometimes flooded with options and reach cognitive dissonance, and my pace comes to a halt, for minutes, hours, sometimes months, because of this dissonance, until one idea has become more like a highway and the journey proceeds. I feel that, because I'm a programmer, I have had to deal with cognitive dissonance and have achieved a quite healthy approach to it, a state that outside of work sometimes makes me feel a bit schizophrenic.
wmt 7 hours ago 2 replies      
Does someone really believe that cognitive dissonance is a bad thing, as the article suggests at the end? I've always imagined that feeling bad about poor decisions is how you learn not to make poor decisions.
How Not to Explain Success nytimes.com
159 points by rafaelc  12 hours ago   97 comments top 18
kartan 10 hours ago 4 replies      
"In this case, our studies affirmed that a persons intelligence and socioeconomic background were the most powerful factors in explaining his or her success"

If success is defined from an economic point of view in absolutes, it makes sense, as the best predictor of your wealth is your parents' wealth.

"Intelligence" is harder to add to the equation as it is more difficult to measure than parents wealth and there are known bias (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect) that makes people think that they are smarter/more competent that they really are. So it is even possible that a superiority complex or insecurity are relevant even when the participants in the survey don't think so. (https://en.wikipedia.org/wiki/The_Triple_Package)

These results show the consequence of diminishing socioeconomic mobility. The same traits and skills in a high-mobility society would carry a lot of weight in achieving "success". In a stagnant society where the system is rigged to keep the poor poor and the rich rich, "success" becomes an inherited trait, making "socioeconomic background" the single best predictor.

99_00 10 hours ago 5 replies      
I don't even understand the internal logic of "Battle Hymn of the Tiger Mother".

The reasoning seems to go like this: Asian-Americans make more money. Chinese-Americans are Asian-Americans. So raise your kids the way Chinese parents do. But if you look at the stats, Chinese-Americans don't do especially well.

You'd be better off raising your child in the "Filipino style" if such a thing exists.

Indian American : $127,591[2]

Taiwanese American : $85,566[2]

Filipino American : $82,389[2]

Australian American : $76,095[3]

Latvian American : $76,040[3]

British American : $75,788[3]

European American : $75,341[3]

Russian American : $75,305[3]

Lithuanian American : $73,678[3]

Austrian American : $72,284[3]

Scandinavian American : $72,075[3]

Serbian American : $71,394[3]

Croatian American : $71,047[3]

Japanese American : $70,261[2]

Swiss American : $69,941[3]

Slovene American : $69,842[3]

Bulgarian American : $69,758[3]

Romanian American : $69,598[3]

Chinese American: $69,586[2](including Taiwanese American)


vslira 53 minutes ago 0 replies      
Has anyone here played The Binding of Isaac?

Awesome game. Hard as hell, though.

Every time you start the game all the levels are randomly generated. Each level has a treasure room containing an item that may boost your stats or give you a special ability. Some items make the playthrough a lot easier, some can make it even harder. The items you get are basically down to luck.

The game is still hard anyway. There's no item that guarantees you'll win and, fortunately, there are strategies that help you make the most out of what you got. If you're good enough at the game the items may not even matter, you're just that good.

Unfortunately we don't have infinite tries at life like we do in a video game to learn how to get better. You have to learn while you play the only run you've got.

That's how I see the whole issue of luck vs. merit.

aaronharnly 11 hours ago 5 replies      
(I'm asking in all earnestness) Are online surveys now an accepted part of the social science repertoire? How do they compare to previous methods such as phone surveys, which I imagine had more uniform distribution to the population but had their own participation bias problems?
bake 11 hours ago 6 replies      
I wish someone would similarly test Malcolm Gladwell's body of work. It's encouraging to see the scientific method re-injected into the popular press.
erikb 4 hours ago 0 replies      
A high degree of impulse control is not linked to success? I thought that was one of the oldest and best-established success predictors, also sometimes called "delaying reward".

Also, if you say "my group succeeded because we believe in our group", then you need to analyse not one generation but several generations together, because that statement is not about a single generation. E.g. the question is not why the Jew John Doe is successful, but why Jews on average are more successful now despite having faced a lot of trouble in the past.

While this article seems logical in itself, to me it actually gave the idea that the book may very well have a point, exactly because of common sense.

marcus_holmes 9 hours ago 0 replies      
Studying the traits that successful people have is pointless without also studying the traits of unsuccessful people.

If you identify that 56% of successful people have Trait X, you have no idea whether Trait X is associated with success unless you also discover how many unsuccessful people have Trait X. Trait X can be "belonging to a given subculture".

Taking the entire population and subtracting the successful people does not leave you with the unsuccessful people, unless you are prepared to define someone who is in the process of attempting something as unsuccessful. That's apart from the pointlessness of defining "success" for a large population of individuals (happiness? wealth? freedom? connectedness?).

sbardle 1 hour ago 0 replies      
I used to know a "success". He owns a profitable company. He has a beautiful wife. He lives in a gated mansion and drives expensive cars. But he was also a cruel person, who liked to hang out with other cruel people. I decided that if that's what being a success in life means, I'd rather be a failure.
imakesoft 2 hours ago 1 reply      
I could write a book, "How NOT to Be Successful", and maybe in some reverse way people could learn what NOT to do. :)
forgotpwtomain 10 hours ago 0 replies      
> We conducted two online surveys of a total of 1,258 adults in the United States.

Given the self-selection bias that's involved, that doesn't sound very reassuring, though the paper probably has a better discussion of methodology. Still, as it stands, it's pretty much impossible to guess the significance of the result from the news article.

partycoder 8 hours ago 1 reply      
If you have a model that predicts success, then you can build a process that targets those metrics and reliably get success. We are not there yet.

Most of the time what I see is the Texas sharpshooter fallacy: "The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the tightest cluster of hits and claims to be a sharpshooter." https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy

verganileonardo 11 hours ago 0 replies      
LPT: always study failures before defining what makes people succeed!
dwmtm 10 hours ago 0 replies      
Please correct me: The Triple Package says successful individuals are in some sense drawn from populations that inculcate certain traits. The article discusses success as an individual trait, and not a group trait. Narrow reading, misreading...?
kriro 6 hours ago 0 replies      
My totally unscientific approach would be to build some sort of "curiosity index" and the higher that is for a child up to a certain age the more likely they are to be successful. For extra credit calculate a delta to see if the curiosity stays at the same level (or increases/decreases) with age.
the_cat_kittles 11 hours ago 0 replies      
I think "success" is a stupid and superficial term, but most damning is that it's vague as hell. It's one of those things that everyone defines in the way most beneficial to themselves. Seems like a bad thing to base a study on.
dpweb 11 hours ago 0 replies      
Only if you measure success in money. Saying personal insecurity and good impulse control are traits of successful people, with success defined by money, is like saying overeating is a trait of overweight people. Of course someone insecure/unsatisfied who does not overindulge is likely to end up with more money. But if you were to measure success in peace of mind, happiness, or positivity, maybe the insecurity makes that less likely. We tend to define success only as having money, however, since it's so critical to have more stuff. If you're talking achievements or non-monetary rewards, I'd expect insecurity. It fuels the person. Otherwise they would just live a common life; it would be enough for them.
c3534l 10 hours ago 0 replies      
This confirms all my anecdotal evidence about how lawyers think. They're professors and it never even occurred to them that their ideas could be verified, they thought their arguments were enough. Now if only someone would run a study confirming or denying that's how lawyers reason about the world.
kan1shka9 8 hours ago 0 replies      
Jane Fawcett, British Decoder Who Helped Doom the Bismarck, Dies at 95 nytimes.com
52 points by aaronbrethorst  9 hours ago   8 comments top 3
samirillian 7 hours ago 2 replies      
Sounds like that poor Luftwaffe general/concerned father "doomed" the Bismarck more directly than someone who happened to transcribe a broken code. I wonder what his obituary was like.
ENTP 6 hours ago 3 replies      
I guess we'll never know the monumental contribution women made during WW2 at Bletchley. Respect.
dankohn1 8 hours ago 0 replies      
Wonderful story.
Qualcomm KeyMaster keys extracted directly from TrustZone mobile.twitter.com
126 points by marksamman  11 hours ago   46 comments top 10
slimsag 9 hours ago 2 replies      
What would the implications of this be?

Does it just mean that people can root all devices using this chipset? Or something worse?

(sorry if this is obvious, I'm just not in the know)

robot 7 hours ago 1 reply      
TrustZone runs an RTOS-like kernel; he likely hacked Qualcomm's implementation of this kernel to gain access.

In particular this line in the screenshot hints at what he did: "Overwriting syscall_table_5 pointer"

The issue is likely applicable on particular qualcomm devices, and a software patch should be possible.

zimmerfrei 5 hours ago 1 reply      
Granted, the details aren't there yet (though from the author's previous work one can guess), but would a formally verified OS like seL4 have prevented this? seL4 is somewhat weak on hardware support, but the functionality required in the secure domain doesn't do much I/O anyway.
modeless 9 hours ago 4 replies      
What is TrustZone being used for in practice?
guimarin 9 hours ago 1 reply      
I hope there is a way to patch this remotely. TrustZone is on quite a few devices today.[1]

If it's not, well, uhh, yeah this is kind of a problem.

1. http://www.arm.com/products/processors/technologies/trustzon...

gruez 10 hours ago 2 replies      
How would this be patched? I'm assuming this will require microcode/hardware patching, as TrustZone is implemented in hardware?
happycube 9 hours ago 0 replies      
When someone asks if you're a (security) god, say NO!

(... unless you are one)

cloudjacker 8 hours ago 0 replies      
You don't post about this on Twitter; you wait.
mindcreek 8 hours ago 1 reply      
So now we know how the FBI got into the shooter's phone :)
mrweasel 5 hours ago 1 reply      
See, this is why Twitter is a terrible news medium. TrustZone is the company I buy SSL certificates from, so what do you think people like me assumed had happened?

A little context wouldn't have hurt anyone.

The real responsive design challenge is RSS begriffs.com
173 points by begriffs  13 hours ago   79 comments top 10
toyg 13 hours ago 5 replies      
Hey, a post about RSS content quirks! It makes me feel young again!

On a more serious note - RSS is the Great Web Leveller. It spites your fancy CSS hacks, it's disgusted by your insane javascript, and it will piss all over your "mobile-optimized" crap. No semantic markup == no party; because markup is for robots, and RSS parsers are very stubborn robots that can see through web-hipster bullshit like Superman through walls.

The only real sin of RSS (beyond the holy wars and format bikeshedding and committee madness and and and...) is that it's too honest a format. It's a format for stuff that matters, for content that deserves to be read; it's too pure to survive in a world of content silos, stalking analytics and inaccessible material-designs. Its innocence doomed it in a very poetic way.

dave2000 13 hours ago 3 replies      
RSS is odd. Unloved by many, killed by Google, but there's still no better way of getting notified that a rarely (or perhaps not so rarely if you have endless time to read "internet stuff") updated site has something new to read. So I have an Android app which checks once per day and if there's anything new that day (I can go days or weeks without an update) I then pass it on to my Pocket account. Sometimes it goes another step further and I send it on to my Kindle.

Perhaps there's a better way of handling this. I can't read it directly on my Kindle because the browser there - optimistically described as "experimental" - is shocking. Pocket is great because it does a good job of producing a page which consists of just the typing without the usual horrific web fluff (although sometimes it gets it wrong and the graphics go missing).

It seems a shame that, when most of what I'm interested in started life as someone else essentially entering text into a document, there's no way of obtaining it in that form but instead it has to be manipulated into something sensible. I don't want an "experience"; I just want to read what you've typed.

Would it help if I gave you my email address?

amatriain 4 hours ago 1 reply      
I develop and maintain an open source RSS reader. In my experience it's not so bad. I strip CSS and javascript from feeds, and most of them are displayed fine anyway. I don't think I've ever found a feed that needed javascript to load content, it seems even SPAs include plain entry content in their feeds, thankfully. I've never found a feed that became unreadable after stripping styling either.
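Stripping scripts and styles from entry markup, as described above, can be sketched with the Python standard library alone; this is an illustrative sketch, not the code any particular reader actually uses, and `FeedSanitizer`/`sanitize` are hypothetical names:

```python
from html.parser import HTMLParser

class FeedSanitizer(HTMLParser):
    """Drops <script>/<style> subtrees and inline style/onclick
    attributes, keeping the rest of the markup intact."""
    STRIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside a stripped subtree

    def handle_starttag(self, tag, attrs):
        if tag in self.STRIP:
            self.skip_depth += 1
        elif not self.skip_depth:
            kept = [(k, v) for k, v in attrs if k not in ("style", "onclick")]
            attr_text = "".join(' {}="{}"'.format(k, v) for k, v in kept)
            self.out.append("<{}{}>".format(tag, attr_text))

    def handle_endtag(self, tag):
        if tag in self.STRIP:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif not self.skip_depth:
            self.out.append("</{}>".format(tag))

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def sanitize(html):
    parser = FeedSanitizer()
    parser.feed(html)
    return "".join(parser.out)
```

A parser-based pass like this is more robust than regex stripping because it tracks nesting, so the text inside a `<script>` element is dropped along with its tags.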

I agree it's interesting to look at your content when loaded in an RSS reader. IMHO most feeds are actually more readable when loaded in a clean uncluttered RSS reader than in the original webpage. If the content is good, the reading experience should not be harmed by focusing just on its text and images and removing extra styling.

Shameless plug: the RSS reader I maintain is https://www.feedbunch.com , comments are welcome.

Animats 7 hours ago 0 replies      
For the last three days, we've had a Teletype Model 14 tape printer following the Reuters RSS feed [1] at a steampunk convention in San Jose, printing hundreds of feet of 8mm paper tape. Trying to condense RSS down to all-upper-case Baudot tape printing is harsh. All markup is deleted. All links are deleted. Most characters outside letters and numbers become "?".

For the Reuters feeds, this works out fine. The content is text, not markup. There are few or no HTML tags. The Reuters feeds are headlines and a sentence or two. The Associated Press also has RSS feeds, and it's very similar. The Voice of America's feeds are much wordier; they often have the whole article.

Space News has an RSS feed.[2] The Senate Democrats have an RSS feed covering what's happening on the Senate floor.[3] (The GOP discontinued their feed.[4]) The House Energy and Commerce Committee has a feed with markup in embedded JSON.[5] Not sure what's going on there. Even The Hollywood Reporter has an RSS feed.[6]

So for real news, RSS is in good shape. RSS seems to be doing fine for sources that have something important to say.

[1] http://feeds.reuters.com/reuters/topNews
[2] http://spacenews.com/feed/
[3] https://democrats.senate.gov/feed/
[4] http://www.gop.gov/static/index.php
[5] https://energycommerce.house.gov/rss.xml?GroupTypeID=1
[6] http://feeds.feedburner.com/thr/news
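The Baudot condensation described above is easy to approximate in a few lines; a hypothetical sketch (real ITA2 has a FIGS shift that recovers digits and some punctuation, ignored here for brevity):

```python
def to_tty(text):
    """Approximate what survives on an upper-case-only tape printer:
    letters, digits, spaces and newlines pass through (upper-cased);
    everything else, including markup leftovers, becomes '?'."""
    out = []
    for ch in text.upper():
        if ch.isascii() and (ch.isalnum() or ch in " \n"):
            out.append(ch)
        else:
            out.append("?")
    return "".join(out)
```

A headline-and-sentence feed comes through mostly intact; a URL degrades into a run of question marks, which matches the behavior described.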

colinthompson 12 hours ago 0 replies      
When Google Reader was in its heyday, I remember thinking the futuristic promise of the web from the early 90s had finally arrived. I so miss it. So much effort is put into unique "platforms" nowadays I get why Reader (and by extension RSS) can't survive in such an environment where exclusive attention of our eyeballs is monetized but I do sometimes wish I would wake up to an announcement that RSS is a priority for big companies once again. One can dream.
mmahemoff 13 hours ago 2 replies      
As the developer of an RSS parser, I spend a lot of time hitting View Source, surprisingly often on pages that appear empty in Chrome.

In a more general sense than RSS, I also have to install extensions to format JSON. Considering how much browsers are targeting developers these days, might they consider rendering JSON, XML, etc in some standard way that is useful to developers (as an option at least). I am talking about syntax highlighting as well as some basic interactive features like expanding/collapsing.

ruricolist 4 hours ago 0 replies      
I think the nastiest thing to parse in RSS feeds these days is code highlighting. To a surprising degree, people who should know better use blogging software that chops up code into tables, divs, and spans, and styles them with CSS that is not included in the feed. You either have to reconstruct the underlying plain-text code as best you can, or try to recognize and support a zoo of different highlighting libraries.
spdustin 9 hours ago 0 replies      
Yahoo! Weather's RSS feed has some handy additional data. Useful if you only have a US postal code. Includes things like lat/long coords, separate elements with weather forecasts and current conditions, sunrise/sunset times... Pretty handy bits of data just for requesting an RSS feed.


Bonus: the @code attribute can be substituted into the URL for an image to visually identify the weather condition referred to by the code:


Just replace the "26" part of "26.gif" with another value.

exolymph 13 hours ago 1 reply      
It depends on who your audience is, right? If most of the people who read your site come from social media on their phones instead of reading via RSS, optimizing for RSS readability is an activity with rapidly diminishing returns.
michaelmior 10 hours ago 1 reply      
I'm not sure what this has to do with responsive design, but this is a cool list of some things to be aware of :)
Apple took away Messages.app UI scripting capabilities in OS X 10.11 github.com
100 points by ivank  15 hours ago   71 comments top 13
INTPenis 0 minutes ago 0 replies      
It's still possible to script with tools like Sikuli for example.

Edit: I mean for the purposes of spam, not for the purposes of making an app like the one OP linked to.

freewizard 3 hours ago 1 reply      
I don't think it actually helps with spam issue, at least not in China.

Almost every iMessage user who activated with a Chinese mobile number has been receiving 3~10 spam iMessages every day about online gambling etc. since ~2012, when the iMessage service went live in China. The content of those messages has been pretty much the same, with some minor variants in wording.

With the scale of the spam, I believe it's likely not sent thru UI scripting, but the iMessage protocol might have been well reverse-engineered and exploited by spammers. So disabling UI scripting won't help anything but cause trouble for developers with legit usage.

sjwright 12 hours ago 6 replies      
I love open source and open protocols as much as anyone on HN, but honestly, secretly, I love the tied-down nature of iMessage because that keeps it (mostly) free from spam and abuse.

If you want something more open, that exists too, which is great. It doesn't make sense to complain that iMessage isn't what you want it to be. Choice means that open and closed solutions exist in the market.

camhenlin 7 hours ago 1 reply      
Funny to see someone besides me post a link to one of my git repos :) What I've done to get around some of these issues is run an OS X 10.10 VMware image to handle running iMessage-related code. This gets less UI interference, since the code is fairly AppleScript heavy. Another poster pointed out that disabling some of OS X's security features seems to make this work as well, but I didn't really feel comfortable recommending that anywhere. I just want to easily and programmatically send and receive iMessages!
aktau 13 hours ago 1 reply      
They didn't remove all OSAScript (AppleScript) functionality apparently. This tiny thing still works: https://gist.github.com/aktau/8958054.

Perhaps the OP really is talking about the Messages.app UI, but the screenshot on github implies that this is some sort of alternate UI (curses-based). Perhaps I'm misunderstanding.

NEDM64 10 hours ago 1 reply      

There were people selling apps that brought iMessage to other platforms, by using VMs in servers running the desktop app and using the scripting functionality.

comex 11 hours ago 2 replies      
It should be pretty easy to work around this, either by hacking around (e.g. with a debugger) whatever call was used to disable UI scripting, or by using the interface between Messages.app and the IMCore framework, which is a relatively high-level Objective-C API. On the other hand, for the record, reimplementing an iMessage client from scratch would be difficult due to DRM: while the protocol is generally sane, it includes an authentication layer based on FairPlay, involving heavily obfuscated code. Some information here, though I'm not sure if it's up to date:


kethinov 8 hours ago 0 replies      
I also built a Messages.app companion app that 10.11 largely broke: https://github.com/kethinov/BubblePainter

Still technically works if you disable SIP, but that's a big ask just to be able to use my little toy app.

sgt 7 hours ago 0 replies      
I've got a Mac at home which I use for my own personal iMessage bot to do certain tasks in my house. I specifically left this Mac at Mountain Lion, since I sort of anticipated problems further down the line with the newer versions of OS X. I can probably safely upgrade to 10.10 but it was finicky enough to set up the AppleScript to work with my scripts.
HappyTypist 3 hours ago 0 replies      
I don't think this change was intended to block this repo. In OS X 10.11, more system apps, including Messages.app, have been whitelisted for SIP protection. Everything works if you disable SIP protection.

It is understandable that Apple doesn't want malware to easily read people's messages.

ComputerGuru 7 hours ago 0 replies      
I just bought a jailbroken iPhone which I'll upgrade to a newer release supporting SMS handoff, etc; then use for running a message server so that I can send and receive synchronized messages on my Windows development PC the same way I can on my Mac.
draw_down 7 hours ago 1 reply      
That app is such a hunk of junk, on every platform. It's disgraceful.
beedogs 11 hours ago 0 replies      
Nothing new: Apple loves removing functionality from their products.
Let's talk openly about depression gustavoveloso.com
104 points by gjmveloso  13 hours ago   31 comments top 8
wapapaloobop 10 minutes ago 0 replies      
>Personalization - the belief that we are at fault

Yes, guilt is the means by which so much bad stuff gets installed in our minds. Ideas that get passed easily from mind to mind regardless of truth content are called memes. I propose the term remes for ideas that get recalled easily in an individual's mind. These get rehearsed more frequently than other ideas regardless of their truth content and so they persist; they achieve this by generating guilt.

amelius 1 hour ago 0 replies      
If you have depression combined with ADD, OCD, Autism or ME/CFS, then you might want to look up "methylation" on the internet. Genetic testing can show if you have a defect in your methylation pathways, which can lead to these disorders, and to depression.

Good starting point: http://geneticgenie.org/

They even have an implementation of methylation analysis on this website.

partycoder 8 hours ago 1 reply      
Check out this site: http://devpressed.com/

And this excellent talk: https://www.youtube.com/watch?v=yFIa-Mc2KSk

gc123 11 hours ago 5 replies      
As someone who does the tech 9-5, I've found it difficult to make small talk/form deeper relationships with people outside of tech. Conversations often tend to die off after the initial "What do you do? Oh that's cool..." or "About that weather we've been having..." Any tips on expanding your network outside of tech?
lowglow 11 hours ago 5 replies      
Living for your dream isn't a walk in the park, so it's important to build up a network of people you trust and can talk to. Anyone who wants to talk about entrepreneurship, building, whatever, and depression: just hit me up online, or I'm available for coffee whenever in SF. Contact info in my profile. Reach out. :)
Pulce 10 hours ago 0 replies      
Three Ps for me: Pasolini, Pertini, Pippo. EDIT: if you are not italian: knowledge, coherence and craziness.
bpchaps 10 hours ago 0 replies      
I agree with the sentiment, but this whole Change Through Piecemeal concept for these sorts of things can be discussed with so much more breadth. Let's talk about mental health with a focus on distancing the concept of 'othering' from our heuristics instead, please.
meeper16 6 hours ago 0 replies      
Be Chinese.
Fasting-like Diet Reduces Multiple Sclerosis Symptoms neuroscientistnews.com
55 points by brahmwg  12 hours ago   18 comments top 7
tkyjonathan 1 hour ago 0 replies      
The Swank diet and later the McDougall diet have been halting and sometimes slightly regressing MS symptoms in most cases, for decades now. The Swank diet in particular has kept people alive with the disease for 34 years.

https://www.youtube.com/watch?v=kZ5NGLM1k90
rikta 10 minutes ago 0 replies      
Ramadan is coming :) 20+ hours fasting for 30 days :)
ams6110 10 hours ago 3 replies      
What is a "fasting-like" diet? It is presumably not absolute fasting otherwise no need for the "like" qualifier.
tansey 9 hours ago 1 reply      
This seems pretty likely to be placebo effect, at least in the human trials which are the only parts of the paper I read. The paper cited [0] pre-registered on clinicaltrials.gov [1] -- which is great and everyone in the field should do so. However, looking at what they pre-registered, we have:

- Two different diet treatment regimes: fasting-like for 3 months followed by Mediterranean diet (FMD), and ketogenic diet (KD).

- Control diet was simply telling people to eat the same way as usual. So there's no real accounting for how well a placebo would do here.

- Measurements included a 54-question survey, adverse event counts, and various lab measurements. These measurements were taken at start, 1-month, 3-month, and 6-month intervals.

The problem then is that what was reported was:

- Results of the first half of the first treatment (fasting-like for 3 months) for a subset of the measurements. What if things only worked in the second half? Or if things worked only for KD? So many implicit comparisons here.

- Comparison against the control group at 3 months, with reported p-values. Even though one of the reported measurements was the overall survey results, all of the values reported are p-values without any mention, that I can find, of multiple hypothesis testing across all measurements. This comes despite the fact that for all of the mouse results, they explicitly state they used Bonferroni correction.

- Baseline performance which involved no placebo. How many people would have improved if simply given some bullshit diet? Or if they had simply been given a diet that was vegetarian, or something that gave them the impression it was a treatment? Especially in surveys, placebo effect is a huge thing to look out for. Their harder metrics like lab results show a more mixed bag, with WBC dropping for fasting subjects. Sure, it returns once the 3 months is over, but then the supposed quality of life scores drop; so you can't have it both ways, though their writing makes it sound that way.

I'm not saying it isn't a great result from a bio standpoint. I'm sure the Cell reviewers found the mouse model results compelling. I just don't see any way to conclude the broad sweeping title of the article from the actual content of the paper, and it's unethical to do so without strong evidence.

[0] http://www.cell.com/cell-reports/pdfExtended/S2211-1247(16)3...

[1] https://clinicaltrials.gov/ct2/show/NCT01538355?term=NCT0153...
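For readers unfamiliar with it, the Bonferroni correction mentioned above is just a division of the significance level by the number of comparisons; a minimal illustration (the p-values are made up):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """With m tests, each one must clear alpha/m instead of alpha,
    which bounds the family-wise error rate at alpha."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Ten measurements: a nominal p = 0.03 looks significant on its own,
# but not after correcting for ten comparisons (threshold 0.005).
flags = bonferroni_significant(
    [0.03, 0.001, 0.2, 0.04, 0.5, 0.06, 0.8, 0.01, 0.3, 0.004])
```

Reporting raw p-values across many survey and lab measures without such a correction is exactly the multiple-comparisons concern raised in the comment.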

orangegrain 10 hours ago 2 replies      
Isn't this obvious to anyone who follows the diet <-> autoimmune disease connection? Also adrenal hormones increase during fasting, as can be commonly seen with high cortisol levels in those who fast.

"All disease begins in the gut," as someone wise has said in the past.

horsecaptin 10 hours ago 1 reply      
What about ALS?
nikolay 7 hours ago 0 replies      
Food is so yesterday!
Pauli Effect wikipedia.org
84 points by MichaelAO  12 hours ago   21 comments top 7
femto 9 hours ago 3 replies      
Its opposite is the "technician effect", whereby your malfunctioning system always works when the technician (who is going to fix it) is present.

Incidentally, the antidote to the Pauli Effect is to tape a raw sausage to your circuit. Everyone knows that your circuit always works when you put your finger on it, and the sausage emulates your finger. (There is actually some truth to this joke, as the sausage/finger provides a little parasitic capacitance, which can make an unstable RF circuit become stable.)

sebastianconcpt 19 minutes ago 0 replies      
Would be interesting to see data on conditions with and without Pauli present to remove biases. Pity we don't have him anymore.

If we have a guy like that again to suspect of such effect, then we could try the isolation of parameters to see if there was in fact just a bias or it could lead to deeper issues. And interference from quantum biology might not be completely out of the question https://en.m.wikipedia.org/wiki/Quantum_biology

tvural 10 hours ago 0 replies      
Pauli was one of many physicists at the time who were totally disinterested in experiments and equipment. This put Heisenberg in some hot water during his doctoral exam:

"When an angry Wien asked how a storage battery works, the candidate was still lost. Wien saw no reason to pass the young man, no matter how brilliant he was in other fields"


bbcbasic 11 hours ago 2 replies      
This is probably a result of the same bias that makes you see your new car everywhere after you buy it (confirmation bias?): when Pauli is present and an experiment goes wrong, it's another point on the tally of Pauli Effect anecdotes. When he is present and it all works OK, no one cares. But obviously this is meant as a bit of fun.
jheriko 11 hours ago 3 replies      
so even the smartest of us are susceptible to magical thinking and superstition it seems... :)
kan1shka9 8 hours ago 0 replies      
nice !!
On the Wildness of Children carolblack.org
63 points by cardamomo  10 hours ago   22 comments top 9
appagrad 3 minutes ago 0 replies      
The parts on an open-mind always learning remind me of my childhood as a "homeschooler" and recent college experiences. While we spent plenty of time with the books, my mother emphasized the joy of learning when I was a child. Despite few tests and no grades, my brother and I scored above average on our year-end state tests. Such scores aren't a testament to our intelligence but instead the result of allowing a child's mind to absorb its environment as a process of life. When I entered college, I was astonished at how students viewed learning. It was a chore, a separate part of life they were obligated to endure. The vast majority of my peers and later students I taught simply couldn't learn on their own. They couldn't read the book and learn. To me, learning is enjoyable, a life-long process I will continue to old age.
sanoli 1 hour ago 0 replies      
There are literally dozens of alternative school models throughout the world, and I've done research on a lot of them when I had my kids. What I found was broad and varied. Different results, different opinions from parents and former students, different conclusions from researchers. Both on the positive and negative sides. I never got to make up my mind, and I ended up putting my kids in a public school in a small town where there was only one school per age segment. It's a pretty good public school, probably like a good private school, but there's a difference in that the student body is truly varied. My kids study with the son of the garbage collector and the daughter of the supermarket owner. There is a Waldorf school not too far from here, in a neighboring town, but it's as diverse as a very expensive school with everyone arriving in imported cars can be. So I chose not to put my kids there.

edit: Plus, I'm not too fond of Waldorf's philosophy, and even less of the Anthroposophy's.

eliben 1 hour ago 1 reply      
What bothers me about this piece is that the seven generations she almost describes as "lost" were the most productive generations in humanity's history. Think how the developed world looked in 1850 and how it looks now. This is the work of these generations of "dogs locked in a cage". Something doesn't compute.
wallacoloo 4 hours ago 2 replies      
Very insightful, and much less alienating than most articles I've read of a similar nature.

Come to think of it, this was very well written - it had a nice flow to it, building up to some profound observation, and then starting over with some different viewpoint. My only complaint is over those weird "repeat myself in giant text surrounded by quotation marks" things scattered throughout the article, but that seems to have become an acceptable/recommended practice?... (Really, when did this become a thing? I'm really curious on the history of it, but I don't know which term to search for).

I will say that I used to read only nonfiction texts because I enjoy learning about history and the sciences, and these subjects seemed more important than what can be found in fictional stories. But then I began reading more stories, especially slice-of-life type things, and I realized that I was wrong. Reading these stories lends me insight into my own social life - how to be a better friend, etc - and really makes me contemplate which values I want to live by and how I can uphold those in my day to day life.

I liken this to the distinction between classroom schooling and life learning. There's a decent-sized class of subjects that are more effectively taught through experience and self-discovery than via instruction. Interestingly, this class of subjects seems to be the most foundational, as they tend to lend insights into things like what makes an individual feel fulfilled, whereas the subjects taught in school are usually more along the lines of tools (that could potentially be applied to the former). But what use is it to learn a tool if you have no sense of what to apply it to? Engagement increases when a student is seeking knowledge of their own accord, usually to satisfy some goal, curiosity or creative drive - none of which are likely to be conceived within a classroom. Certainly some balance is needed.

jstewartmobile 1 hour ago 1 reply      
Public school kid here. From my experience, the writer is clearly coming from a place of elite Hollywood unreality: http://carolblack.org/about/

Most of the "wildness," or more to the point, "hatefulness" clearly came from the parents. The racism, the homophobia, the Disneyesque self-absorption, the intolerance for anything outside of the present iteration of pop culture and regional sport -- all seeded, fostered, and fed by the family. The kids were barely old enough to even understand any of this stuff, let alone hate someone for it, yet they did!

I doubt that the Thoreau experience is going to have much effect on damage inflicted outside of the classroom, and this is a big thing. It's why we have "good" school districts and "bad" school districts within the same county -- expenditure per pupil is the same, buildings are the same, teachers are mostly the same, but the upbringing is not.

bikamonki 9 hours ago 3 replies      
Profound and to the point! I don't want my kids to go to the factory but I don't have the time/skills to school them myself. What should I do?
noelwelsh 3 hours ago 1 reply      
This article reads to me like pastoral romance, with little other than middle class guilt motivating its conclusions. Schools may have been founded to prepare children for the factory (that wasn't the motivation in the UK, AFAIK), but that doesn't mean they cannot change in the intervening century, and they certainly have. Etc.

Can education be improved? Certainly! However, mass home schooling is not the answer if you want to keep any semblance to current society. Most people have to work to make enough $s, for instance.

Why Most Unit Testing is Waste (2014) [pdf] rbcs-us.com
214 points by pmoriarty  10 hours ago   179 comments top 37
avip 7 hours ago 12 replies      
Ok let's stop writing UT and see what happens. Wait... we've already tried that, and we know the result pretty well:

1. In dynamic languages, simple type mismatches, wrong variable names etc. are now caught in "top level system level test". Yes these are bugs that should have been caught by a compiler had we had one.

2. There's no documentation as to how something should work, or what functionality a module is trying to express.

3. No one dares to refactor anything ==> Code rots ==> maintenance hell.

4. Bugs are caught by costly human beings (often used to execute "system level tests") instead of pieces of code.

5. When something does break in those "top level system tests", no one has a clue where to look, as all the building blocks of our system are now considered equally broken.

6. It's scary to reuse existing modules, as no one knows if they work or not, outside the scope of the specific system with which they were previously tested. Hence re-invention, code duplication, and yet another maintenance hell.

Did I fail to mention something?

UT cannot assert the correctness of your code. But it will constructively assert its incorrectness.
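Point 1 above is easy to make concrete; a hypothetical Python sketch of the class of bug a unit test catches immediately, where a compiled language's type checker would have done the job:

```python
def total_price(items):
    """Sum of price * quantity over a list of line-item dicts.
    A typo'd key here ('quantiy') would only surface as a KeyError
    at runtime; without a unit test, that means in a system-level run."""
    return sum(item["price"] * item["quantity"] for item in items)

def test_total_price():
    items = [{"price": 2.0, "quantity": 3},
             {"price": 1.5, "quantity": 2}]
    assert total_price(items) == 9.0
```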

erikb 4 hours ago 1 reply      
The funny thing about unit tests is that it's actually possible to write unit tests that don't really help you at all, and that this is how many unit tests are written (a pessimistic me would say "most") - but nobody thought that would be possible.

The initial people who came up with the idea thought about writing down the execution of a use case, or a small part of one, as a test. Then they ran their code against it while developing it. That gave them insight into the use case as well as the API and the implementation. This insight could then be used to improve tests, API and implementation.

But most professionals aren't about making quality. They are about paying their rent. So when they started to learn unit tests, they just wrote their code as always, and then tried to write tests, no matter how weird or unreasonable, to increase the line coverage of their test suite. The proudest result for them is not a much more elegant implementation, but finding the weird test logic that moved them from 90% coverage to 91%.

I believe that's how you get a lot of clutter in your unit tests. However what is described in the document are sometimes example of people really trying, but that are just early in their development. Of course when you learn to do something by a new method you will first do crappy, inefficient stuff. The idea here is how much do you listen to feedback. If that team that broke their logic to get higher coverage learned that this was bad, then they probably adapted after some time, and then they did exactly what unit tests are there for.
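The coverage-chasing pattern described above is easy to illustrate; both hypothetical tests below execute the same lines of `clamp`, but only one would catch a regression:

```python
def clamp(value, low, high):
    """Clamp value into the closed interval [low, high]."""
    return max(low, min(value, high))

# Coverage-chasing: runs the code, asserts nothing about behavior.
def test_clamp_runs():
    clamp(5, 0, 10)

# Behavior-pinning: encodes the use case the function exists for.
def test_clamp_behavior():
    assert clamp(5, 0, 10) == 5    # in range: unchanged
    assert clamp(-3, 0, 10) == 0   # below: raised to low
    assert clamp(42, 0, 10) == 10  # above: lowered to high
```

Swap `max` and `min` by accident and the first test still passes with full line coverage intact.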

snarfy 6 minutes ago 0 replies      
With modern compilers and debuggers, a good set of integration tests is all you need. You can refactor large parts of the code base and if the integration tests break, thanks to debuggers etc you can easily identify the breaking changes within minutes. There is no need for 100% code coverage in unit tests to catch these same changes. If you do need it your tooling is lacking.

Writing code is expensive. If you have more test code than real code it means you value correctness over features. If I can skip unit tests entirely and have a 95% functioning system, I'm not sure that 5% is worth an extra 500% or so lines of code needed in unit tests for 100% code coverage.

Unit tests might seem more important in dynamic languages like JavaScript, but that really just points to poor tooling.

heisenbit 5 hours ago 1 reply      
James O. Coplien has a lot of experience and is steeped in theory and practice. His list of publications is about 200 entries long: https://sites.google.com/a/gertrudandcope.com/info/Publicati...

Unit tests are not free, as they are also code; that much is obvious. Coplien, however, also delves into less obvious aspects of the impact of unit tests on design, and into the organizational aspects. Ultimately coding patterns are going to reflect the incentives that govern the system.

Software development is a lot about trade-offs. There is plenty to be learned here about how to do it. An addendum by him can be found here: http://rbcs-us.com/documents/Segue.pdf but the meat is in the 2014 article.

vdnkh 9 hours ago 4 replies      
Abandoning unit tests was a thing many companies were proud to tell me about while I was interviewing earlier this year. I always thought that was ridiculous - but I suppose their products didn't have so many users, or they thought they could tolerate some bad behavior. I'm happy I ended up at a place where we're big on tests and even have SDETs embedded within our team. Besides being useful, I often use a unit test as my main method of completing a feature. I also believe it's a code smell when it's hard to write a test. Maybe it's a regional thing - I've heard it said that here in NYC we're much more strict with testing (a carryover from financial roots).
xahrepap 9 hours ago 0 replies      
I'm currently at a phase where I'm just not as worried about TDD or Unit testing. I've realized that most of my buggy code is in integration points.

For example, I'm working on an internal project that creates VMs with some provider (be it VirtualBox, AWS, etc) and then deploys a user-defined set of docker containers to it. I've found that I don't have bugs in situations I would typically test using mocking/stubbing/etc in traditional unit tests. I usually need to have the real AWS service with the docker service running to get any value out of the test. And at that point it's more work to mock anything else than it is to just start up the app embedded and do functional testing that way.

I'm becoming more of a fan of verifying my code with some good functional tests in areas that feel like high risk and then some contract testing for APIs other apps consume. Then if I find myself breaking areas or manually testing areas often I fill those in with automated tests.

n72 5 hours ago 1 reply      
Unit tests function as a kind of REPL for me and allow me to code considerably faster than without them. Without them, it takes me considerable time each time I want to test the smallest code change since in order to get my app to a testable state I have to click around in the UI, enter a few values in inputs, etc. This is just a waste of time. Moreover, there's a slightly costly context switch which happens when I go from coding the feature to setting up my app to test the feature. With judicious mocking, however, I save a ton of time getting my app to a state where I can actually test the functionality I'm coding and do away with that context switch.
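The test-as-REPL loop described above might look like this; `apply_discount` is a hypothetical function under active development:

```python
def apply_discount(subtotal, code):
    """Hypothetical feature being built: apply a discount code."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(subtotal * (1 - rates.get(code, 0.0)), 2)

# Re-run on every save: feedback in seconds, with no clicking
# through the UI to reach a testable state and no context switch.
def test_apply_discount():
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(100.0, "SAVE25") == 75.0
    assert apply_discount(100.0, "BOGUS") == 100.0
```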
partycoder 9 hours ago 1 reply      
There are things that a machine can do much better and more reliably than a human. Comparing lists of items is one of them (e.g., outputs of functions verified against expected results). And that's the entire point of using computers in the first place. Otherwise we can go back to pen and paper and process forms by hand.

Does it make more sense for a human to do all aspects of the testing by hand? Of course not. Nobody has the budget for that. It's much better to automate as much testing as possible so testers can focus on higher-level tasks, like the risk assessment involved in marking a build as releasable.

Then, unit testing encourages people to construct their software for verification. This software construction paradigm in itself is enough of a benefit even if unit tests are absent.

Construction for verification diminishes coupling, and encourages developers to separate deterministic logic from logic depending on unreliable processes that require error handling. Doing this frequently trains you to become a better developer.

Unreliable processes can be mocked and error handling can be tested in a deterministic way.
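A minimal sketch of that idea in Python (hypothetical names): the unreliable network call is mocked to fail on demand, so the error-handling path runs deterministically on every test run.

```python
from unittest.mock import Mock

def fetch_with_fallback(client, url, default):
    """Return the response body, or a default if the unreliable call fails."""
    try:
        return client.get(url)
    except ConnectionError:
        return default

# The unreliable process is mocked to raise, so the error path is
# exercised every time instead of only during real outages.
client = Mock()
client.get.side_effect = ConnectionError("simulated outage")

assert fetch_with_fallback(client, "http://example.com", "cached") == "cached"
```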

scotty79 3 hours ago 0 replies      
I think I know why I don't enjoy writing tests. Most of my enjoyment of programming comes from the feeling of power when I'm able to write concise code that does a lot for me.

Testing spoils the fun: now I need to write another piece of code for each single thing that my original piece of code does.

I am no longer a wizard casting fireball into a room. I'm also the guy that has to go over the corpses and poke each one with a stick strong enough and for long enough to check if they are absolutely totally dead.

dtheodor 4 hours ago 0 replies      
We often see criticism of unit testing based on claims that tests are badly written, unmaintainable, incomprehensible, and a drag on development.

Which may very well be true! But I am amazed at the conclusion: That because tests are badly written, writing tests is a bad thing. No! Any code can be badly written, it doesn't mean that writing code is a bad thing. Tests, like any other piece of code, also need to be designed and implemented well. And this is something you need to learn and get experience with.

As to whether well-written unit tests are worth it, I cannot imagine how someone could efficiently maintain a codebase of any size without unit tests. Every little code change is a candidate to break the whole system without them, especially in dynamic languages.

bluejekyll 9 hours ago 20 replies      
I generally try to keep my HN comments positive, but this is total Bullshit, yes, with a capital 'B'.

Unit tests are not albatrosses around the neck of your code; they are proof that the work you just did is correct and you can move on. After that they become proof that any refactor of your code was correct - or, if a test fails and doesn't make sense, that the expectations of your test were incorrect. When you go to connect things up after that and they don't work, you can look at the tests to verify that at least the units of code are working properly.

I am no TDD fan, but I do believe that writing your code in a way that makes it easy to test generally also improves the API and design of the entire system. If it's unit-testable, then it has decent separation of concerns; if not, then there may be something wrong (and yes, this applies to all situations). I use this methodology for client/server interactions as well, where I can run the client code in one thread and the server in another, with no sockets, to simulate their functioning together (thus abstracting out an entire area of potential fault that can be tested in isolation from network issues).

The article/paper raises good points about making sure that tests are not just being written for the sake of code coverage, but to say they are useless is just sloppy. Utilize the testing pyramid [1]; if you adhere to it properly, everything about your system will be better.

I have a serious question: given that this was written by a consultant, is it possible that tests get in the way of completing a project in a timely manner, thus creating a conflict of interest around testing?

[1] - http://martinfowler.com/bliki/TestPyramid.html

nuggien 9 hours ago 1 reply      
jacquesm 7 hours ago 0 replies      
Operative word 'most', and then only when done by someone who doesn't understand the goal of unit testing. Any tool can be abused, including testing.

I became a 'convert' after having to clean up a fairly large mess. Without first writing a bunch of test code there would have been no way whatsoever to refactor the original code. That doesn't mean I'm a religious test writer, or that there is 150% test code for each and every small program I write. But unit testing, when done properly, is certainly not wasteful - especially not in dynamic languages and in very low-level functions. The sooner you find out you've broken your code after making changes, the quicker you can fix the bug and close the black box again. It's all about mental overhead and trust.

Unit tests are like the guardrails on the highway: they let you drive faster, confident that there is another layer that will catch you if something goes wrong, rather than ending up in the abyss.

MichaelBurge 8 hours ago 1 reply      
If I'm writing Perl or something, I'll write unit tests just to verify the code runs in the basic cases.

I like Haskell because I can skip most of the unit tests. Integration tests are still good, and some unit tests like "confirm that test data are equal under serialization and then deserialization" help with development speed. But I can usually refactor vast swathes of code all I want without having to worry about breaking anything.

If you do write unit tests and your test passes on the first try, make sure you change the output a little bit to ensure it fails. It's more common than you'd think to accidentally not run a test.
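That sanity check might look like this (hypothetical Python example): write the test, watch it pass, then temporarily break the expectation and confirm the run actually fails before restoring it.

```python
def word_count(text):
    return len(text.split())

def test_word_count():
    assert word_count("unit tests are code too") == 5

# Sanity check: temporarily change the expected value (5 -> 6) and
# re-run; if the test still "passes", it never actually executed.
test_word_count()
```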

henrik_w 8 hours ago 0 replies      
There was also a follow-up article [1]. My take on the two articles is that he argues that integration tests should be able to replace unit tests in most cases. However, in my own experience, both kinds of tests have their places.

Why unit tests are good:

- You get well-tested parts that you can use in your integration tests, so that the integration tests truly catch the problems that couldn't be caught at a lower level. This makes troubleshooting easier.

- Decoupled design - one of the key advantages of TDD

- Rapid feedback. Not all integration tests can be run as quickly as unit tests.

- Easier to set up a specific context for the tests.

There are more details in the blog post I wrote as a response [2].

[1] http://rbcs-us.com/documents/Segue.pdf

[2] https://henrikwarne.com/2014/09/04/a-response-to-why-most-un...

kriro 7 hours ago 0 replies      
Just to add another perspective: there is a (pretty sad but real) business case for unit tests (they don't need to be good, they just need to exist :( ). In some B2B/enterprise fields it is an excellent sales pitch to be able to say we have X% test coverage, or 500 unit tests OMG. The people who make the buy decisions want to sleep well, whether or not that's based on sound reasoning. Test coverage is almost a "feature" (in the bad enterprise-IT sense). It sounds less risky to buy the product with more test coverage. Less risk I can understand; it's like buying insurance... which is great, since insurance turns risk into a budgetable item... yes, please sell me this awesome software with that high test coverage.


tux1968 1 hour ago 0 replies      
Not sure what this says about unit testing in general, but Perl 6 as a language is specified by a test suite, i.e. the only official language definition is the test suite:


jondubois 5 hours ago 0 replies      
I share a similar sentiment about unit tests. When you have a lot of code with fast changing requirements, unit testing can be a HUGE waste of time. In my previous job, we spent much more time writing and fixing unit tests than actually adding new features. I was operating at like 1/10th productivity.

It might make sense if you're working for a huge corporation with a LOT at stake. Unit tests then become a form of risk management - they force employees to think REALLY LONG AND HARD about each tiny change they make. Basically, it's good if the company doesn't trust its employees.

I MUCH prefer integration tests. I find that when you test a whole API/service end-to-end (covering all major use cases), you are more likely to uncover issues you didn't think about. Also, they're much easier to maintain, because you don't have to update integration tests every time you rename a method on a class or refactor the private parts of your code.
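An end-to-end test in that spirit might be sketched like this (hypothetical, with a toy in-process service standing in for a real one): the test drives only the public request/response surface, so internal renames can't break it.

```python
import json

class UserService:
    """Toy stand-in for a real service; tested only through its public API."""
    def __init__(self):
        self._users = {}

    def handle(self, method, path, body=None):
        if method == "POST" and path == "/users":
            user = json.loads(body)
            self._users[user["name"]] = user
            return 201, json.dumps(user)
        if method == "GET" and path.startswith("/users/"):
            name = path.rsplit("/", 1)[1]
            if name in self._users:
                return 200, json.dumps(self._users[name])
            return 404, ""
        return 405, ""

# End-to-end: create a user, then read it back. No internals are mocked,
# so refactoring private methods cannot break this test.
svc = UserService()
status, _ = svc.handle("POST", "/users", '{"name": "ada"}')
assert status == 201
status, body = svc.handle("GET", "/users/ada")
assert status == 200 and json.loads(body)["name"] == "ada"
```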

About the argument for using unit tests as a form of documentation: that makes sense, but in that case you should keep your unit tests really lightweight - only one test per method (no need to test unusual argument permutations). At that point I wouldn't even regard them as 'tests' anymore, but more like 'code-validated documentation', because their purpose is then not to uncover new issues but to let you know when the documentation has become out of date.

I think if you're a small startup and you have smart people on your team (and they all understand the framework/language really well and they follow the same coding conventions), then you shouldn't even need unit tests or documentation - Devs should be able to read the code and figure it out. Maybe if a particular feature is high-risk, then you can add unit tests for that one, but you shouldn't need 100% unit test coverage for every single class in your app.

shilman 9 hours ago 0 replies      

They informed me that they had written their tests in such a way that they didn't have to change the tests when the functionality changed.

keithnz 4 hours ago 0 replies      
I once had a talk with Kent Beck way back in the day (early 2000s) about how many unit tests there should be, etc., and I think it's nicely captured by his reply here


A lot of people seem to miss Kent's subtle but intentionally phrased advice. Unit tests are a liability, so use them responsibly and as little as possible - but not at the expense of losing confidence in your software.

Also, delete tests that aren't doing you any favors.

lokedhs 9 hours ago 2 replies      
The beginning of the essay contrasts traditional bottom-up programming with top-down programming (using object orientation).

I have written a very large amount of Java code in my career, but after having spent a lot of (personal) time on a Common Lisp project (web application) I can safely say it's still possible to build modern applications using a bottom-up approach. I recommend people try it, it can be quite refreshing.

lazyfunctor 6 hours ago 0 replies      
I've read this advice a couple of times: "convert your unit tests to assertions". What does it actually mean? Say, in the context of web dev, you add assertions to the code, and when they fail you log an exception and move on?

Any links related to it will be helpful.
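One common reading of that advice, sketched in Python (hypothetical example, not necessarily what the article's author meant): move the invariant out of the test suite and into the code itself, so it is checked on every real execution rather than only in CI.

```python
def allocate_seats(requested, available):
    seats = min(requested, available)
    # Invariant that once lived in a unit test, now checked on every call:
    assert 0 <= seats <= available, f"allocated {seats} of {available}"
    return seats

assert allocate_seats(3, 10) == 3
assert allocate_seats(12, 10) == 10
```

In a web context the usual variant is to catch the assertion error at a request boundary and log it rather than crash the whole process.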

ebbv 11 minutes ago 0 replies      
There's actually a section of this where he basically says that a test that always passes is not useful (provides no information) and a test that fails sometimes is useful (provides lots of information). I'm not sure exactly what failure of reasoning led to this conclusion, but it's totally bogus. They are both useful.

The ideal case is that your codebase is entirely made up of code that never fails and tests that always pass. Obviously sometimes you are going to have tests that fail and introduce bugs that cause tests that used to pass to fail. But that's the reason that you write those tests, to find those problems.

The author gives the silly example of a method that always sets x to 5, and a test that calls it and makes sure x is now 5. That seems like a bad test, but anyone who's actually done work as a developer understands why it isn't. If you skip the tests that are simple and straightforward and seem like a waste of time, and only write more complicated tests, then you will have a hard time reasoning about what failed when a complicated test fails. Was your x = 5 method faulty? You don't think so, but you don't have proof, since it wasn't tested. Having the test, as silly as it seems, lets you know that method is working.
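The test in question is roughly this (a hypothetical sketch of the article's example):

```python
class Widget:
    def reset(self):
        self.x = 5  # the method that "always sets x to 5"

def test_reset_sets_x_to_5():
    w = Widget()
    w.reset()
    assert w.x == 5  # trivial, but rules this method out when debugging

test_reset_sets_x_to_5()
```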

Anyone who has been on a team that skips easy/simple tests knows what a mistake it is. And if you don't, you will eventually.

testingnut 6 hours ago 1 reply      
People start writing tests the way they start coding - badly. They forget all the DRY principles and don't architect their tests to reduce their coupling to the code they are trying to test. The result is tests that are a drag on new functionality. Even these are better than no tests, since you can refactor the tests to reduce the coupling.

The debate about whether UT or system tests or something in the middle is better is missing the point. A test should be understandable at any level. 5+ mocks per test generally doesn't help the next guy understand what you are trying to test.

If you can abstract your system behind an API to drive and test it, you'll have much longer lasting tests that are more business focused and importantly are clearer for the next person to understand.

I can see great value in identifying the slow and rarely failing tests and running them after the quick, more information-producing tests. Is there any CI support for such things? I know TeamCity can run failing tests first...

dewiz 9 hours ago 3 replies      
Is it possible to do TDD without unit tests? I ask because a place where I interviewed had no concept of mocking or injecting, and would just test the whole chain of classes. But the dev leads would insist on TDD. Hmm?
planetjones 7 hours ago 0 replies      
"Throw away tests that haven't failed in a year." That's just a ridiculous point, IMO. Written properly, tests act as perfect documentation for the system. Just because a screen or a process hasn't changed in a year, and the tests have therefore been consistently passing, does not mean you throw the tests away! You never know when someone will need to maintain that piece of the system.
fsloth 7 hours ago 0 replies      
My kneejerk reaction is that this is BS but the text has good points and can function as an anecdote on how not to do things.

"... Large functions for which 80% coverage was impossible were broken down into many small functions for which 80% coverage was trivial. This raised the overall corporate measure of maturity of its teams in one year, because you will certainly get what you reward. Of course, this also meant that functions no longer encapsulated algorithms. It was no longer possible to reason about the execution context of a line of code in terms of the lines that precede and follow it in execution,"

Unit tests which break code apart are stupid. Refactoring is good, but just splitting a large function into smaller pieces does nothing to improve the value of the code unless it's done so that an understanding of the algorithm is available and communicated.

Everything can be abused if not used with craft.

bengalister 7 hours ago 0 replies      
At my company, we (developers) tend to write mainly automated functional tests, and we test our product in black-box mode. I also prefer favouring functional tests, and have found that many integration tests actually become useless. I keep unit tests for pieces of code that are used in many different contexts - i.e., code that could be turned into an external library - and/or code that is hard to exercise in integration/functional tests; it makes regression testing much faster.

For black-box mode, I am not convinced it is the proper strategy, especially when the product is being built incrementally. The typical example is when an entity's state is modified in a use case and the function to check that state has not been developed yet. In that case I'd prefer to have the test verify directly in the DB that the state has been properly updated.

ensiferum 3 hours ago 1 reply      
Wow, so much fail. Where do these Mickey Mouse programmers keep springing up from? "Oh, I'm a superman programmer and I can get it just right the first time because I use my brain!"

Sure I believe that there are people who can do this and get the code right by just thinking it through. But I have a question for even these people (and their organizations).

- What happens when these people leave and the new junior dev becomes the maintainer?
- What happens when the code is migrated/reused somewhere else?
- How do you make sure your code works at least the same as it did before after you:
  * modify it
  * update some dependent component somewhere, say some open source library that your code uses internally (these have bugs too, you know...)

Simply: you don't, unless you write the tests. The real value of testing (unit testing, regression testing, system testing) comes once you have that nice test suite and can automate it, making sure nothing breaks on every change. This is a beautiful thing to have, because no human can understand these software systems fully enough to be sure that some innocent-looking change doesn't break something. These things unfortunately happen.

Sure, testing is hard if you make it so. I've said before that I average a ratio of about 2/5 of actual code to unit-test code. This doesn't mean "this is so bothersome, let's not do it"; what it should mean is that you design your system with testability in mind. A primary architectural goal in any design should be testability: how easy it is to write unit tests for your classes and methods. The easier it is, the smoother the unit tests are to write and maintain, and the better code quality you will have in the end.
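One common shape for that kind of design, sketched in Python with hypothetical names: keep the deterministic logic pure (trivially testable with no mocks), and push the unreliable I/O to a thin shell.

```python
def total_price(items, tax_rate):
    """Pure, deterministic core: easy to unit-test exhaustively."""
    subtotal = sum(qty * price for qty, price in items)
    return round(subtotal * (1 + tax_rate), 2)

def checkout(cart_id, db, tax_rate=0.08):
    """Thin shell around the unreliable parts (DB, network)."""
    items = db.load_cart(cart_id)  # I/O: mocked or covered by integration tests
    return total_price(items, tax_rate)

# The core logic needs no mocks at all:
assert total_price([(2, 3.00), (1, 4.00)], 0.10) == 11.00
```

Separating the orthogonal concerns this way is also what makes the core reusable, as the comment notes.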

Also note that when you design your software to be testable you get a few other abilities for free such as reusability. One of the most important starting points is to find the right separation of truly orthogonal components and define their interfaces right. Way too often I see code where the engineer didn't understand this and clumped several unrelated concepts together. This really makes the testing hard and painful.

WalterBright 7 hours ago 0 replies      
I'm not giving up unit tests anytime soon. I've had excellent success using them - success being much faster development time and far fewer bugs found in the field.
yason 5 hours ago 0 replies      
The final points at the end of the article are gold. They match my own experiences exactly.
mokn80 4 hours ago 0 replies      
"Throw away tests that haven't failed in a year." What a load of crap. We have a test that asserts creating a new user doesn't fail; it hasn't failed for over a year, but the day it does...
powera 4 hours ago 0 replies      
Why are so many people saying that if some developers write bad unit tests, then all unit tests are pointless and a waste of time?

Yes, I've seen thousand line files of boilerplate unittests that don't actually say anything useful about the system. I've also written unit tests that tell me in 2 minutes rather than in 3 weeks that somebody has broken my code.

If your standard for a system of testing is that it guarantees that people can only write good code, you're insane.

V-2 3 hours ago 0 replies      
"Unit tests are unlikely to test more than one trillionth of the functionality of any given method in a reasonable testing cycle. Get over it. (Trillion is not used rhetorically here, but is based on the different possible states given that the average object size is four words, and the conservative estimate that you are using 16-bit words)."

It's the most bizarre reasoning I've seen for a while. Well, yes, of course, and my string trimming function will always have coverage of essentially 0% no matter what I do, since strings can be of any length, and there's way more possible strings than atoms in the universe... we clearly have a different concept of "functionality" though

"When developers write code they insert about three system-affecting bugs per thousand lines of code. If we randomly seed my client's code base - which includes the tests - with such bugs, we find that the tests will hold the code to an incorrect result more often than a genuine bug will cause the code to fail! Some people tell me that this doesn't apply to them since they take more care in writing tests than in writing the original code. First, that's just poppycock."

Of course, because in reality it's not about being more careful or attentive. It's about the fact that tests don't (or aren't supposed to) contain any logic! (Meaning conditional statements of any sort, loops, etc.) It's logic that breeds an overwhelming majority of bugs. If your tests contain any, it means they're written badly.

"even if it were true that the tests were higher quality than the code because of a better process or increased attentiveness, I would advise the team to improve their process so they take the smart pills when they write their code instead of when they write their tests"

No, it's not about "smart pills" (this condescending tone is so annoying), it's about the fact that - otherwise than with my tests - I can't keep logic out of my production code.

"Most programmers believe that source line coverage, or at least branch coverage, is enough. No. From the perspective of computing theory, worst-case coverage means investigating every possible combination of machine language sequences, ensuring that each instruction is reached..."

...oh God, enough.

I don't know the author, but it's quite clear he's some CS professor rather than a real-life full time software dev

Remember this article? http://blog.triplebyte.com/who-y-combinator-companies-want

https://phaven-prod.s3.amazonaws.com/files/image_part/asset/... - that's who YC startups want. When I saw it for the first time, I wondered why "academic programmers" rated the worst.

em3rgent0rdr 9 hours ago 2 replies      
Copied from the end. In summary:

- Keep regression tests around for up to a year - but most of those will be system-level tests rather than unit tests.
- Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is ascribable business value.
- Except for the preceding case, if X has business value and you can test X with either a system test or a unit test, use a system test - context is everything.
- Design a test with more care than you design the code.
- Turn most unit tests into assertions.
- Throw away tests that haven't failed in a year.
- Testing can't replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and make sure your architecture and design regimens have teeth.
- If you find that individual functions being tested are trivial, double-check the way you incentivize developers' performance. Rewarding coverage or other meaningless metrics can lead to rapid architecture decay.
- Be humble about what tests can achieve. Tests don't improve quality: developers do.
kmiroslav 6 hours ago 3 replies      
I'm really annoyed there is no date and no way to tell when this was written. Coplien has been around for a long time; this could have been written yesterday or thirty years ago.
csours 5 hours ago 0 replies      
Watching the Dragon Innovations videos on Design for Manufacturing [1] earlier, Bill Drislane said that anything you can afford to test must not fail.

I think the corollary is that if you cannot afford to test something you cannot rely on it.

1. https://www.youtube.com/watch?v=zCnTUOxMl_4&list=PLNTXUUIxHy...

Letters between Backus and Dijkstra (1979) medium.com
192 points by acidflask  21 hours ago   84 comments top 6
vanderZwan 19 hours ago 9 replies      
You know, I'm wondering: could part of Dijkstra's reputed arrogance be due to a cultural difference? I'm Dutch, and bluntly calling out flaws in each other's work is not considered all that rude over here; it's almost the opposite: not calling someone out on their flaws implies we either consider them a lost cause or not worth the hassle of educating.

I almost got fired from a teaching position in Sweden because I told my 3rd-year bachelor students that many did not bother to add their names or the assignment number, or to use paragraphs - and in some cases even basic punctuation - in their assignments; that this was well below the level required to pass secondary school; and that I expected better from them.

This was apparently too confrontational, and a few upset students later I got chewed out and almost fired. Meanwhile, from my point of view, I was just doing my job and already sugarcoating it by Dutch standards.

Having said that, yes, even by Dutch standards I would say that Dijkstra liked to troll people a bit.

PS: I really like the following insight from Dijkstra's review: But whereas machines must be able to execute programs (without understanding them), people must be able to understand them (without executing them).

justin66 20 hours ago 2 replies      
"I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras."

- Alan Kay

weinzierl 21 hours ago 3 replies      
This is a series of cell phone shots of an exchange of letters between Backus and Dijkstra. There is no transcript and some of the letters are hard to read, especially the handwritten ones.

I only skimmed it, but it is definitely an interesting read. I didn't interpret their stance as arrogant, though. In my book both are enthusiastic about their subject and think the other party is misguided. Both take some effort not to hurt the other's feelings while still bringing their point across.

Ericson2314 7 hours ago 1 reply      
I'm sad that in this thread and https://news.ycombinator.com/item?id=11786193, nobody is actually talking about the technical arguments at play. I've only read the EWD692 "opening salvo" but already there are interesting things to point out.


> The profound significance of Dekker's solution of 1959, however, was that it showed the role that mathematical proof could play in the process of program design. Now, more than 40 years later, this role is still hardly recognized in the world of discrete designs. If mathematics is allowed to play a role at all, it is in the form of a posteriori verification, i.e. by showing by usually mechanized mathematics that the design meets its specifications; the strong guidance that correctness concerns can provide to the design process is rarely exploited. On the whole it is still "Code first, debug later" instead of "Think first, code later", which is a pity, but Computing Science cannot complain: we are free to speak, we don't have a right to be heard. And in later years Computing Science has spoken: in connection with the calculational derivation of programs and then of mathematical proofs in general the names of R.W. Floyd, C.A.R. Hoare, D. Gries, C.C. Morgan, A.J.M. van Gasteren and W.H.J. Feijen are the first to come to my mind.

Backus as quoted in EWD692 (Djikstra attacks this):

> One advantage of this algebra over other proof techniques is that the programmer can use his programming language as the language for deriving proofs, rather than having to state proofs in a separate logical system that merely [sic!] talks about his programs.

From the perspective of a PL person like myself, these ideals are very compatible. Without correctness by construction there is not enough proof reuse, so formality will forever be doomed to niche applications (aka computers embedded in dangerous things). Likewise, after trying out intuitionistic-type-theory-based systems (e.g. Agda), separating the program and proof languages just seems clumsy. Overall, separating programming and proof - whether spatially (separate languages) or temporally (correctness proved after the fact) - is bad, and the reasons hardly depend on the dimension of separation.
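As a tiny illustration of "the programming language as the proof language" (a sketch in Lean 4, not tied to either author's system): the definition and the theorem about it live in the same source file and are checked by the same compiler.

```lean
-- The program:
def double (n : Nat) : Nat := n + n

-- The proof, stated and checked in the very same language:
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```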

OJFord 13 hours ago 0 replies      

 Dear John, ... But when you open your letter with: I am quite sure that you have never read any paper Ive sent you before it is my pleasure to inform you that - although quite sure - you were very wrong.

vog 16 hours ago 2 replies      
Could anybody explain why this comment by "internaut" was downvoted so heavily?

I found internaut's comment to be insightful, well-conceived and on-topic. What's wrong with this comment?

EDIT: Moreover, who downvoted me so quickly for asking this question? This happened almost immediately after I posted this. Are there some nasty bots at place here?

I defeated a long-broken fridge and became a household hero through 3D printing arstechnica.com
100 points by shawndumas  13 hours ago   48 comments top 13
dmritard96 11 hours ago 2 replies      
The part that I found so relatable here is that 3D printers in libraries are absolutely transformative. They empower people to fix things on their own, but also to do crazier things like launch a company. A couple of days ago I wrote a blog entry about how we started our company after doing some 3D printing at the library in downtown Chicago.


nfriedly 10 hours ago 6 replies      
This probably all boils down to my lack of experience, but I just recently got a 3D printer, and most of the things I have printed have failed to perform their desired function. This includes both things I designed (a replacement battery lid for a remote - the software skips small details like the notch that holds it in place) and most of the designs I've pulled off Thingiverse (e.g. a water bottle lid - it's too tight and it leaks water).

My only fully successful prints aside from the calibration were the "hello world" roctopus and a toy car.

I expect (hope) that this will improve with time, but so far it's been a little disappointing.

sgnelson 9 hours ago 4 replies      
My father needed to change the oil in his Toyota truck about 6 months ago, but unfortunately he couldn't find his oil-filter wrench. I too thought I could be a hero, and show him that the money I'd spent on this 3D printer wasn't just for a lark - it was a useful tool. I found a model of the exact tool needed on Thingiverse and printed it out (it took about three and a half hours, if memory serves, while my dad waited).

Finally, being incredibly proud, I handed my dad the tool, and expected success along with much congratulations from my father for solving his problem.

It was about a millimeter too small to fit around the filter. I had failed. My printer had failed. He went and bought a new oil filter wrench at a car parts store.

One of the problems with STLs is that they're unit-independent: they don't care whether the part is measured in millimeters or inches. (They also ignore the internal structure of the part, but that's a rant for another time.) And one problem with the 3D-printer community is that, with all these different printers, there is wide variety in the tolerances/accuracy of printed parts. Even with well-set-up printers, changing one setting, such as the amount of infill, may change the size of the final part. And even if you have a precisely calibrated printer, the person who made the model you're about to print may not.

In this particular case, I don't believe the model was off; rather, to make the tool incredibly strong, I printed in ABS plastic with extra support and extra shell layers, which I believe may have deposited too much material - effectively over-extruding plastic and making the part just a bit too small. But I took the part and slammed it into the driveway as hard as I could, and it didn't even think about breaking. It was quite strong, even if it was useless.

robbrown451 11 hours ago 2 replies      
Not to take anything away from what he did and what he learned doing it, but there are plenty of ways to fix that part that would be a good bit quicker.

I'm a big fan of Bondo, fiberglass/fiberglass resin, and my trusty dremel tool. Recently a plastic part to a dryer cracked, and those approaches didn't seem strong enough and would probably make it not fit any more, so I heated up some little nails on the gas stove and pressed them into the plastic, which melted around them, making a very strong part that was no bigger than the original.

And I say this as someone who has a long, long history with 3d CAD modeling.

elliotf 9 hours ago 0 replies      
I am, unlike the author, not a mechanical engineer. However, I've been able to teach myself enough about designing and printing in 3d to make household-useful items with my 3d printer. The interesting thing is that my wife and I now take it for granted that I can just print something to fix the problem.

It's not been easy, and I laugh at my earlier attempts, but it's been a fun learning experience. The fun thing is that I've had to re-learn a bit of trigonometry because of my choice in CAD tools (openscad).

I have a repository of the household items I've printed: https://github.com/elliotf/reprap-household-misc

hristov 10 hours ago 1 reply      
He was a mechanical engineer with some experience with autocad. I have to say this is still something the average person would not be able to do.

I never learned autocad or ME, but I do remember my ME buddies from college taking their required autocad class and taking many hours to design a simple part like a pulley, and then being oh so proud of their creation. But, hey, I am sure it was much more fun than their Fortran class which they also had to take.

My point is it takes special skills, training and a lot of patience to be able to design a part and get all the dimensions right.

It would have been more interesting if he was able to glue the broken part together, then scan it, and then print a copy of the scanned part. Is that something that is easily doable nowadays?

te0006 4 hours ago 2 replies      
Modeling needs to get easier, especially for simple replacement-part tasks like this.

Any suggestions regarding tools for _capturing_ 3D (or at least 2D) models from photos or videos of existing objects? I imagine if there are multiple reference objects (of known size and shape) visible in the scene together with the object-to-capture, the photogrammetry should be tractable.

A good start would be an app able to produce a 2D CAD file from a prototype shape (e. g. cut from cardboard) photographed on a background of squared paper. Does such a beast perhaps already exist?

tezza 5 hours ago 1 reply      
I appreciate the geek aspect to this.

However I do feel he could have achieved the same result a simpler way.

1. Repair the original part in a basic fashion

2. Make a silicone cast of the repaired original part. The original will have been injection molded, so try to use its mold separation lines as a guide for where your separation line should be

3. Split it out to make a mould

4. Pour in resin to make a very exact replica. The place where the original was broken will be replaced with functional resin

5. De-mold and put in place.

This guy makes his own Rubik's-cube-style puzzle via this method and documents it well: https://m.youtube.com/watch?v=i-HXU4cfvdc

aftbit 8 hours ago 1 reply      
3d printers are in a weird transitional state. We are long past the point where every prototype shop has a decent 3d printer, yet we're still (seemingly) years away from the point where every home and business has a 3d printer. A big part of this problem is that there really aren't that many obviously useful things to print.
olivermarks 7 hours ago 0 replies      
For something like this simple plastic non-load-bearing cap, you could just press the broken one into a wad of plasticine, then pour molten polystyrene or a similar plastic into the mould. A lot cheaper and simpler than 3d printing...
noonespecial 10 hours ago 1 reply      
The very first thing I did with my printer is go on a repair binge. Modelling a tiny gear in Blender (having done 0 autocad since college) and fixing a little toy felt like my first "hello world" on a C64. Magical.

I then had a moment of silence for all of the super-glue messes of my childhood that never fixed anything and a touch of envy for my own kids growing up in what to my 10yo self would seem a Star Trek future.

zakalwe2000 10 hours ago 1 reply      
Spoiler! Author is a mechanical engineer.
SexyCyborg 10 hours ago 1 reply      
TinkerCAD is really underrated. People who already have good spatial and mechanical skills tend to be CAD snobs and look down on it. But if you're not by nature a very handy person and just need simple things around the house, it will do everything you need. Its parts tend to have a distinct look. But they work just fine.
Lost at Sea on the Brink of the Second World War newyorker.com
44 points by stass  10 hours ago   18 comments top 2
NotSammyHagar 5 hours ago 0 replies      
That's a great yarn. Makes me think about what the refugees coming to Europe face.
cperciva 9 hours ago 4 replies      
Do Americans really consider 1941 to be the brink of WWII? I've always considered that it started on September 3, 1939, with the first declaration of war between major powers.
A Proposal for an Antifragile Software Manifesto sciencedirect.com
43 points by jhrobert  11 hours ago   27 comments top 8
danielvf 11 hours ago 2 replies      
"We value reliable software", is nice, but everyone values that.

But if you said "We value reliable software over meeting deadlines", then you'd actually have said something.

The power of the Agile Manifesto was that it clearly identified tradeoffs they were willing to make.

SkyMarshal 6 hours ago 1 reply      
Antifragile refers to a property in which a system does not just resist breaking down under disorder and stressors (robust), but gains/learns/becomes stronger. A good example is how bones heal back stronger where they were broken.

While we are currently able to build self-healing systems of a sort, fault-tolerant Erlang systems being a good example, it doesn't seem we can build systems that are truly antifragile on their own; that would require strong AI. Otherwise, human monitoring and intervention are required to improve systems under duress.

This manifesto does include "team" and "organization" to account for that, but a true autonomous antifragile system is a ways off.

keithnz 11 hours ago 1 reply      
I think a manifesto with a lot of buzzwords ends up being quite fragile.
jmspring 9 hours ago 1 reply      
As with any manifesto, words, interpretation, and practice are where they fail. I've been through the Agile/Scrum classes, yeah, great. Does management buy in, despite professing such? Yeah, not so great.

Unfortunately, for me, whenever I see the word "manifesto" in relation to computers, this is the one I immediately remember...


jcbrand 2 hours ago 0 replies      
Interesting that the concept of Antifragility is gaining traction with regards to software.

Back in 2014 I wrote a blog post "Antifragile Software Ecosystems" that discusses how IMO antifragility relates to how software is developed.


kapitalx 6 hours ago 0 replies      
Chaos engineering falls in this category: http://principlesofchaos.org/
jimjimjim 10 hours ago 4 replies      
This will be difficult to do in text (imagine a triangle):

         Quality
        /       \
     Time ------ Cost
Right, which one is more important? and more importantly who gets to decide which one is more important?

nikolay 6 hours ago 1 reply      
When in 2009 I was calling it "Fragile," not "Agile," people were laughing at me as yet again I wasn't politically correct!
Does Military Sonar Kill Marine Wildlife? (2009) scientificamerican.com
47 points by turrini  14 hours ago   7 comments top 2
andy_ppp 8 hours ago 1 reply      
Why can the military not just use a different frequency that animals can't hear? This whole thing sounds like a classic human failure; it's under water, we can't see it, who cares. The same goes for greenhouse gases and allowing oil companies to mess up countries like Nigeria.
cwal37 10 hours ago 0 replies      
You could reflexively post that, or you could read the article, which provides some evidence that the answer seems to range from "possibly" to "yes". It's tempting to just link to Betteridge's Law and move on, but it's not universal, and sometimes a piece is legitimately presented as reviewing the evidence on some topic.

Personally, I think this is actually a useful conceit when considering something like this (which I believe I've actually seen on HN multiple times in the past) where a connection is being made that perhaps one had never considered before.

The Pros and Cons of Taking Investment from Corporate VCs medium.com
44 points by joeyespo  12 hours ago   3 comments top 2
shin_lao 3 hours ago 0 replies      
Some corporate VC are also very detached from the parent company and are actually closer to "regular" VC than you might think.
hkmurakami 9 hours ago 1 reply      
Solid post. Echoes many of the things I've been told by very seasoned lawyers in the field.

Tag along clauses for corporate investors are advisable to prevent them from blocking acquisitions etc.

Who Really Found the Higgs Boson nautil.us
37 points by dnetesn  12 hours ago   6 comments top 3
adam_d 6 hours ago 2 replies      
I worked on the ATLAS experiment for 6 years. The article gives a reasonable description of the management structure and the unique upsides of that way of working. However it mostly skipped over the negative aspects.

Possibly due to the described desire for consensus, I found the organisation to be incredibly bureaucratic with incredibly lengthy processes. Releasing a paper usually involved around a dozen rounds of review with various groups, often arguing for days about linguistic style more than Physics content.

The lack of clear top-down control makes resource allocation very challenging. There were frequent complaints that Higgs analyses had too much manpower while less "sexy" tasks were chronically understaffed.

The lack of clear assignment of responsibilities also leads to lots of nasty internal politics between institutes. Especially the Higgs analysis where people were eternally engaged in attempts to land grab so they could claim responsibility for bits of the eventual discovery.

Overall, I enjoyed working there a lot. It is a unique structure and the sense of teamwork and lack of hierarchy is very nice. But this article is a bit of a whitewash. I don't think it should be lauded as some incredible model, it has at least as many problems as any other organisation of its size.

sytelus 7 hours ago 1 reply      
Very intriguing, but just like most other articles on alternative management structures, this leaves out all the critical information and focuses on praising the system without actually understanding it. I understand that at CERN most people are working out of pure passion at modest salaries while being very highly qualified, as opposed to someone in their 20s whose goal is to cash out stock options and retire ASAP; still, I think it's important to figure out whether that is the driving reason behind the success of a loose management structure with no chain of command.

Any alternative management system proposal needs to answer questions like:

1. How do people get hired? Who creates job postings, how are interviews conducted, who handles negotiations and approvals, and how is talent attracted and retained?

2. Are there differentiated performance reviews? If so, who exactly conducts and signs off on them? Is there a curve? Is there an expected distribution? Who approves promotions/pay raises? Who sets up these rules?

3. Are there minimum expectations for performance? Who determines firing, and how?

4. If there is no real manager and everything gets decided by committees, who sets up these committees? How is work assigned? Who is accountable for tracking progress, success, or failure? Who has the final say when conflicts end in a tie?

5. What options do employees have when they want a change? How do transfers happen? Who approves them, and what are the official rules?

chrispeel 8 hours ago 0 replies      
I like the loose management style described. My question: can such management be used in a commercial endeavor?
Palo Alto considers subsidized housing for salary up to $250K reuters.com
69 points by sampo  10 hours ago   89 comments top 11
natrius 7 hours ago 4 replies      
In most of Palo Alto, it's against the law to build anything but the most expensive kind of housing: single family homes with big yards. People who already bought in get to enjoy their walled-off neighborhoods at the expense of families with less money. It's nice that they feel bad enough about it to do something, but instead of legalizing cheaper housing on less land, they're trying to fund a lottery as a totem to make themselves feel better about the artificial affordability crisis they've created.
whyenot 7 hours ago 5 replies      
> The plan is a among a series of proposals being mulled by the Palo Alto City Council to provide affordable housing to those considered middle class in the area - families making between $150,000 to $250,000 annually.

As a point of reference, starting salary for a teacher in the Palo Alto Unified School District is $57.5k. Someone who has taught in the district for 30 years with significant post-credential grad school makes double that.

topspin 5 hours ago 1 reply      
Palo Alto property owners rejoice; subsidizing demand is a sure-fire way to drive up prices.
mc32 7 hours ago 1 reply      
Consolidate bay area government (ABAG) and come up with a comprehensive and cohesive growth plan (housing, transportation, taxes, zoning, etc.) Piecemeal, reactive, and NIMBY policies are antithetical to the needs of the area. Forget about the 1960s idyll. It's the 2010s, act like it. The area doesn't have to do growth Chinese-style, but we could do growth Singapore-style/Korea-style with North American values (i.e. less patronizing and more community involvement, up to a point)
eru 4 hours ago 1 reply      
Why won't they just implement a land tax, and reform zoning?

(Of course, because these two sensible policies would not be in favour of landowners. Subsidizing demand is.)

alexc05 50 minutes ago 1 reply      
This seems like a bad idea (gut check). Won't the availability of public money for private housing help to drive prices up?

Everything I see seems to indicate that the price of housing expands to fill the amount of money available.

Cheap interest rates managed to change the base price of homes such that the mortgage payment is roughly the same. Now it is individuals & condo developers collecting that difference rather than banks.
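The rate-to-price tradeoff the parent describes can be sanity-checked with the standard fixed-rate amortization formula, M = P·r(1+r)^n / ((1+r)^n − 1). The dollar figures below are purely illustrative, not from the article:

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Fixed-rate mortgage payment: M = P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def affordable_price(payment: float, annual_rate: float, years: int = 30) -> float:
    """Invert the formula: the principal a fixed monthly payment supports."""
    r = annual_rate / 12
    n = years * 12
    return payment * ((1 + r) ** n - 1) / (r * (1 + r) ** n)

# Illustrative: the payment on a $500k loan at 7% supports a much
# larger loan when rates fall to 3% -- the "base price" rises while
# the monthly payment stays the same.
m = monthly_payment(500_000, 0.07)   # roughly $3.3k/month
p = affordable_price(m, 0.03)        # roughly $790k of principal
```

So, at least mechanically, cheaper money does let the same monthly budget bid up prices by a large margin.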

Or am I totally off base here?

Increasing property taxes shifts the burden of ownership to the people who have money... though I suppose that would get passed through to renters as well.

VeejayRampay 2 hours ago 0 replies      
A noob question, because I feel like I don't understand the issue: why are companies not pushing for remote workers? Do the downsides outweigh the positives, or is it a cultural thing?
nharada 7 hours ago 3 replies      
The middle class STARTS at 150K?
mickrussom 4 hours ago 0 replies      
Good idea. Too little, too late. I suffer horribly with my small family and two good jobs. One job basically pays for the taxes. And housing is so expensive I can't get over the hump to get a mortgage to see any tax benefits. It's really crushing to not be landed gentry here. We rent in a good school district and have a kind landlord, but we cannot build housing equity despite having good jobs.
mtgx 2 hours ago 1 reply      
This will lead to sellers accounting for the subsidy and increasing their prices accordingly.
bettyx1138 7 hours ago 1 reply      
NYC needs this too.
GraphicsMagick and ImageMagick popen() shell vulnerability via filename gmane.org
132 points by marksamman  15 hours ago   47 comments top 10
_jomo 11 hours ago 2 replies      
Thanks HN, I have now lost my Twitter profile image and can't upload a new one:


voltagex_ 13 hours ago 0 replies      
The current situation reminds me of OpenSSL after Heartbleed - researchers start paying lots of attention and finding other issues after the "main" one. Sad that this is what it takes for a big, old project like ImageMagick to get some TLC.
zerocrates 14 hours ago 1 reply      
The particularly troublesome part of this issue (and the previous big one) is the interaction between all the "power" ImageMagick provides in its filename parsing and its MVG and SVG decoders.

You'd think that just having applications not use user-provided filenames during the conversion process would prevent much of what's been disclosed lately, but no: the MVG decoder exposes all those features to a user that merely controls the content of the input file.

The mitigations from the last big vulnerability will still work against this, but people who merely updated to an ImageMagick that fixed the curl command escaping would be in trouble.
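To illustrate the point about user-controlled file *content*: here is a hypothetical sketch of the kind of MVG input being described. The payload string and the `touch` command are invented, modeled on the publicly disclosed ImageTragick-style vectors, not taken from this advisory, and the checker is only a naive screen:

```python
import re

# Hypothetical MVG content: the "filename" of the embedded image lives
# inside the user-supplied file itself, and a leading '|' historically
# told ImageMagick's blob-opening code to popen() the rest as a command.
MVG_PAYLOAD = """\
push graphic-context
viewbox 0 0 640 480
image over 0,0 0,0 '|touch /tmp/owned'
pop graphic-context
"""

def content_looks_dangerous(mvg: str) -> bool:
    """Naive screen: flag any quoted pseudo-filename that starts with '|'.
    Real mitigations (policy.xml coder restrictions, disabling HAVE_POPEN)
    are far more robust than content sniffing like this."""
    return bool(re.search(r"['\"]\s*\|", mvg))
```

The takeaway matches the parent comment: sanitizing the filename your application passes in does nothing here, because the dangerous string arrives as file content.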

wfunction 14 hours ago 3 replies      
> The simple solution to the problem is to disable the popen support (HAVE_POPEN)

Is it just me or is this the wrong way to tackle this? The question to me is why the file name is being interpreted in some way in the first place, not why popen is being used on the result of the interpretation.

Also, why are pipes even being allowed in the file name in the first place? (I'm asking about POSIX/*nix here, not about ImageMagick.)
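In the spirit of this thread, an application-side defensive sketch: reject filenames that ImageMagick's historic filename parsing could treat specially before they ever reach the library. The function name is made up, and this is a belt-and-suspenders measure, not a substitute for the HAVE_POPEN fix or a policy.xml lockdown:

```python
import os.path

def safe_magick_filename(name: str) -> str:
    """
    Refuse filenames with ImageMagick-special syntax: a leading '|'
    (historically popen()ed), a leading '@' (indirect filename read),
    path traversal, or a 'fmt:' coder-selection prefix.
    """
    base = os.path.basename(name)
    if base != name or name.startswith(("|", "@")) or ":" in name:
        raise ValueError(f"refusing suspicious filename: {name!r}")
    return name
```

As the sibling comments note, this only helps against filename-based injection; content-based vectors (MVG/SVG) need coder restrictions instead.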

yxhuvud 5 hours ago 1 reply      
This is not the first time popen with | has led to security issues. At the same time it doesn't have any legitimate use cases that I can think of. Is there any good reason to not patch popen and remove the support for |s instead?
0942v8653 14 hours ago 1 reply      
Wow. This is really scary. Especially the SVG issue. Is it common to support xlink:href on servers that use ImageMagick? It seems like that would allow you to read any readable image on the FS anyway, which is a vulnerability in itself.
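A hypothetical example of the xlink:href concern (the path is invented): an uploaded SVG can point at a file already on the server, and a rasterizer that follows the link will bake that file's pixels into the output handed back to the uploader.

```python
import xml.etree.ElementTree as ET

# Hypothetical uploaded SVG referencing a *local* server path rather
# than anything the uploader owns.
SVG = """\
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="640" height="480">
  <image xlink:href="/var/app/uploads/other-user.png"
         width="640" height="480"/>
</svg>
"""

root = ET.fromstring(SVG)
# The href a naive converter would dereference:
href = root.find("{http://www.w3.org/2000/svg}image").attrib[
    "{http://www.w3.org/1999/xlink}href"
]
```

Which is why converters that process untrusted SVGs generally need to disable or whitelist external/local references.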
kyledrake 5 hours ago 0 replies      
Does this affect non-SVG images (and other weird formats I've never heard of)? (such as jpg, png, bmp, gif)

I'm trying to figure out a way to patch this that doesn't involve recompiling ImageMagick. Like most people I get it from the distro; compiling it is kind of a PITA.

Mojah 7 hours ago 0 replies      
In case anyone is interested, here's another mirror of the CVE: https://marc.ttias.be/oss-security/2016-05/msg00255.php
0x0 14 hours ago 1 reply      
So, just like Perl?
trollian 14 hours ago 3 replies      
Who on earth still uses ImageMagick? Security holes in it are old news. Like 15 year old news.