hacker news with inline top comments - 21 Dec 2014
1
EFI Firmware Security
47 points by SpaceInvader  3 hours ago   15 comments top 6
1
cnvogel 1 minute ago 0 replies      
Out of curiosity: Can anyone point me where to find how a recent x86-cpu actually boots? Where's the code that gets executed in the first few CPU cycles?

The bulk of the firmware, that much is clear, is nowadays fetched from a serially connected flash, and this initial code copies it to the (by then initialized) DRAM, probably in several stages. But where do the first few instructions hide? In a mask ROM on the CPU, or in the chipset?

I know how initial bootup works on my day-job default CPU (an m68k/ColdFire that basically just starts executing from a parallel-connected flash), on a few ARMs and some PPC, but I have no idea about a "typical" Intel Core i5/i7/... CPU.

2
userbinator 2 hours ago 4 replies      
Remember when BIOS flash ROMs were write-protected with a physical hardware switch/jumper? It was an extremely simple measure that basically made it impossible for the BIOS to be corrupted by software, malicious or otherwise.

It was certainly "inconvenient" to perform BIOS updates, but back in those days BIOS updates weren't all that common either. I don't think it should ever be "convenient" to do something like that with basic system firmware - by its very nature, it is supposed to be stable and rarely changed. Somehow this is making me terribly nostalgic... for the days when BIOSes seemed far less buggy and in need of constant change. Now, I hear stories of laptops with factory-installed crap that silently updates the BIOS in the background(!), bricking the machine when something else unfortunate happens coincidentally with it (e.g. a hard reset). I remember the ritual of "boot from a floppy to a plain DOS prompt, run the updater, and wait for a few tense seconds as it updated the BIOS".

The mention of "Thunderbolt Option ROM" makes it clear that Thunderbolt is basically an external version of PCI(e). In other words, even without being able to modify any firmware, plenty of other maliciousness is already possible - the same with any other device that has direct access to the system bus. In the same way that you probably wouldn't plug a random untrusted PCI(e) adapter into your system, you should exercise the same caution with Thunderbolt...

3
walterbell 1 hour ago 0 replies      
http://theinvisiblethings.blogspot.ca/2011/09/anti-evil-maid...

"Anti Evil Maid is an implementation of a TPM-based static trusted boot with a primary goal to prevent Evil Maid attacks.

The adjective trusted, in trusted boot, means that the goal of the mechanism is to somehow attest to a user that only desired (trusted) components have been loaded and executed during the system boot. It's a common mistake to confuse it with what is sometimes called secure boot, whose purpose is to prevent any unauthorized component from executing."

4
walterbell 1 hour ago 2 replies      
http://puri.sm

"The first high-end laptop that respects your freedom and privacy. The Purism Librem 15 is the first laptop in the world that ships without mystery software in the kernel, operating system, or any software applications."

5
amluto 1 hour ago 0 replies      
> the larger issue of Apple's EFI firmware security and secure booting with no trusted hardware is more difficult to fix.

IMO this shouldn't really be a problem. If the SPI payload disables writes before executing anything unsigned, then it's really quite hard to bypass.

Presumably the bug is a result of EFI capsule on disk support. The design is sh*t for exactly this reason.

The firmware could, though, lock the flash, detect the capsule after initializing option ROMs, copy it to RAM, do a full reset, then find the capsule in RAM and verify a signature prior to re-locking the flash.
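
To make that sequencing concrete, here is a sketch in C. Every function name below is a hypothetical stub invented for illustration (none of this quotes a real EFI codebase); the point is only the ordering: the flash gets locked before any unsigned code runs, and the capsule is verified on the fresh pass after the reset, before the lock is applied again.

  /* Hypothetical sketch of the boot ordering described above.
   * Every function is a made-up stub, not a real firmware API. */
  #include <stdbool.h>
  #include <stdio.h>

  static bool capsule_present_in_ram(void)  { return false; }
  static bool capsule_signature_valid(void) { return false; }
  static bool capsule_found_on_disk(void)   { return false; }
  static void apply_capsule_to_flash(void)  { puts("apply update"); }
  static void clear_capsule_from_ram(void)  {}
  static void spi_flash_lock(void)   { puts("flash locked"); }
  static void init_option_roms(void) { puts("option ROMs run (untrusted)"); }
  static void copy_capsule_to_ram(void) {}
  static void warm_reset(void)    { puts("reset, RAM preserved"); }
  static void continue_boot(void) { puts("continue boot"); }

  int main(void)
  {
      if (capsule_present_in_ram()) {
          /* Second pass, right after the reset: nothing unsigned has
           * run yet, so verify and apply before locking the flash. */
          if (capsule_signature_valid())
              apply_capsule_to_flash();
          clear_capsule_from_ram();
      }
      spi_flash_lock();        /* lock before executing anything unsigned */
      init_option_roms();      /* untrusted option ROM code may run here */
      if (capsule_found_on_disk()) {
          copy_capsule_to_ram();
          warm_reset();        /* flash stays locked across the reset */
      }
      continue_boot();
      return 0;
  }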

6
fubarred 2 hours ago 2 replies      
Wanted:

- "Tripwire" for firmware - host-based (not perfect) & bootable live cd/usb/image (still not perfect)... Perhaps some JTAG verifying device that could be hard-wired to all supported chips directly? (Very painful to set up, but potentially interesting.)

- Host-based peripheral firewall (not perfect, but more usable) - e.g.: selectively disable, ask user permission and/or limit rights to connecting devices from the various buses: USB, FW, PCI, SD card, SATA/SAS, BT, TB, SPI, FC, ... On OSX, it's doable considering VMware Fusion "patches" IOKit (check out IORegistryExplorer) selectively based on user preferences (whether to redirect a USB device to a guest or to the host).
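
As a sketch of the host-based half of that wish list: hash a dump of the firmware (obtained with something like flashrom) and compare it against a known-good digest. This assumes OpenSSL is available; and, as the comment itself concedes, it is "not perfect", since already-compromised firmware can lie to whatever produces the dump.

  /* firmware-tripwire.c: compare a firmware dump against a known-good
   * SHA-256 digest stored as a hex line in a file.
   * Build with: cc firmware-tripwire.c -lcrypto */
  #include <openssl/sha.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s dump.bin known-good.sha256\n", argv[0]);
          return 2;
      }
      FILE *f = fopen(argv[1], "rb");
      if (!f) { perror(argv[1]); return 2; }
      fseek(f, 0, SEEK_END);
      long size = ftell(f);
      rewind(f);
      unsigned char *buf = malloc(size);
      if (!buf || fread(buf, 1, size, f) != (size_t)size) {
          fprintf(stderr, "read failed\n");
          return 2;
      }
      fclose(f);

      unsigned char md[SHA256_DIGEST_LENGTH];
      SHA256(buf, size, md);                  /* hash the whole dump */

      char hex[2 * SHA256_DIGEST_LENGTH + 1];
      for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
          sprintf(hex + 2 * i, "%02x", md[i]);

      char expected[2 * SHA256_DIGEST_LENGTH + 2] = {0};
      FILE *g = fopen(argv[2], "r");
      if (!g || !fgets(expected, sizeof expected, g)) {
          perror(argv[2]);
          return 2;
      }
      expected[strcspn(expected, "\r\n")] = '\0';

      if (strcmp(hex, expected) == 0) {
          puts("OK: firmware matches known-good digest");
          return 0;
      }
      puts("MISMATCH: firmware differs from known-good digest");
      return 1;
  }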

2
I am not authorizing you to release a Ruby port of Metaphone 3
216 points by matthewmacleod  5 hours ago   98 comments top 27
1
dewitt 2 hours ago 4 replies      
Before anyone mocks this guy (sadly, partly too late), please try to keep in mind that (a) he does have a reasonable right to what he says he wants (to have some say in what he believes are his inventions), and (b) this is an illustration that the rules of intellectual property and open source are not always clear, even to those following well-established patterns.

While he's inarguably incorrect about quite a number of things, it's always better practice to assume the best of a fellow engineer and treat this as a teaching opportunity, not a cause for pitchforks and belittlement.

The package maintainer, threedaymonk, handled it perfectly, imo, by respecting the desires of the individual and not porting it, and closing the issue out quickly without further escalation. While it may have been technically "right" to copy the code, it wouldn't have been worth the damage it would have caused to the person behind it. Credit for handling an unfortunate situation with grace.

2
SwellJoe 4 hours ago 3 replies      
It's always surprising how little people understand the licenses they publish their code under. BSD does not make something public domain, but it does allow derivative works and inclusion in completely unrelated projects, without permission of the author. Rewriting it, from scratch, in another language generally completely bypasses even the requirement to reproduce the copyright notice (unless there are data structures being copied over, or similar, that copyright would apply to).

Algorithms are sometimes subject to patents, though I believe they shouldn't be, and if the author wanted this sort of control over the algorithm, a patent is the path he should have taken. As far as I can tell, the author has not patented this particular algorithm (and I doubt it is novel, as it sounds similar to quite a few pre-existing tools in related fields).

This is just a sort of weird conversation. Why Open Source something if you don't want people to use it in interesting ways?

3
anu_gupta 2 hours ago 1 reply      
IMO, @threedaymonk has handled this maturely and well.

https://github.com/threedaymonk/text/issues/21#issuecomment-...

> "I'm not going to port Metaphone 3 to Ruby, nor am I going to accept or merge any such ports at this time.

> Whilst the licensing of the Java code in question clearly and unambiguously permits such a port, @lphilips54's stated intentions for reuse of the code are unclear and contradictory. I can't see that any benefit would come from integrating something that is surrounded by such confusion."

4
sysk 4 hours ago 0 replies      
For others wondering what Metaphone is, here's the Wikipedia description:

"Metaphone is a phonetic algorithm, published by Lawrence Philips in 1990, for indexing words by their English pronunciation. It fundamentally improves on the Soundex algorithm by using information about variations and inconsistencies in English spelling and pronunciation to produce a more accurate encoding, which does a better job of matching words and names which sound similar. As with Soundex, similar sounding words should share the same keys. Metaphone is available as a built-in operator in a number of systems, including later versions of PHP."

Metaphone 3:

"A professional version was released in October 2009, developed by the same author, Lawrence Philips. It is a commercial product but is sold as source code. Metaphone 3 further improves phonetic encoding of words in the English language, non-English words familiar to Americans, and first names and family names commonly found in the United States."

http://en.wikipedia.org/wiki/Metaphone
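
To give a flavor of what this family of algorithms does, here is a simplified sketch of Soundex, the much older scheme that Metaphone improves on. It assumes a non-empty alphabetic input and skips Soundex's special rule for 'h' and 'w':

  /* Simplified American Soundex: first letter plus three digits. */
  #include <ctype.h>
  #include <stdio.h>

  /* Digit code for 'a'..'z'; '0' marks letters that are not coded. */
  static const char CODE[] = "01230120022455012623010202";

  static void soundex(const char *name, char out[5])
  {
      int n = 0;
      char prev = CODE[tolower((unsigned char)name[0]) - 'a'];
      out[n++] = toupper((unsigned char)name[0]);
      for (const char *p = name + 1; *p && n < 4; p++) {
          char c = CODE[tolower((unsigned char)*p) - 'a'];
          if (c != '0' && c != prev)
              out[n++] = c;          /* record a new consonant code */
          prev = c;                  /* vowels break duplicate runs */
      }
      while (n < 4)
          out[n++] = '0';            /* pad short names with zeros */
      out[4] = '\0';
  }

  int main(void)
  {
      char a[5], b[5];
      soundex("Robert", a);
      soundex("Rupert", b);
      printf("%s %s\n", a, b);       /* both encode to R163 */
      return 0;
  }

Metaphone and Metaphone 3 replace that one small lookup table with a large set of hand-written, context-sensitive rules for English spelling, which is where the "thousands of hours" mentioned elsewhere in this thread would have gone.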

5
chrisacky 4 hours ago 4 replies      
Unfortunately, the author of the Metaphone code (lphillips) keeps talking about public domain as if that were the point of contention. When you license something as BSD you aren't putting the work into the "public domain" (in the legal sense), but you are granting a very permissive license to it.

My guess is that Google requested use of the Metaphone 3 package, and Lawrence, though he perhaps didn't understand what BSD meant, was willing for Google to use it in their Google-Refine (now Open Refine) codebase, so he willingly let them pick the most permissive license.

I'm having a really difficult time understanding him from his GitHub comments. It's clear that he never intended his algorithm to be re-usable or modifiable in any way, but if he intends to restrict the future use of his Metaphone 3 code or even prevent a port, he's going to find it impossible....

Lesson learnt... it's clear that a work of this magnitude would have taken a thousand-plus hours. If you don't intend it to be re-usable, then you shouldn't pick a permissive, open source license such as BSD.

6
thegeomaster 1 hour ago 0 replies      
I think it would be interesting to point out a somewhat similar exchange between Richard Stallman and Bruno Haible, the original author of CLISP[1]. Bruno wanted to integrate GNU Readline into the then-proprietary CLISP, but RMS informed him that doing so would be a violation of the GPL. After a minor flame war of sorts, Bruno concurred, because he wanted very much to use Readline, and today the world is richer for another free software Common Lisp implementation. I think it very clearly illustrates how often people don't understand the implications of free software licenses, an issue that remains widespread even now, when there are loads of FAQs and websites dedicated to just that.

[1]: http://clisp.cvs.sourceforge.net/viewvc/clisp/clisp/doc/Why-...

7
lukeredpath 3 hours ago 2 replies      
I don't have the time to read it and verify but could this be a patent for the algorithm? If so, does it make any difference anyway?

http://www.google.com/patents/US20090043584

8
blcknight 4 hours ago 1 reply      
I don't see how his argument holds water. The BSD license is quite permissive, and porting software to another language sounds like a modification to me, assuming you keep all the copyright info intact. He still holds the copyright, but he's granted everyone a worldwide right to use the software with or without modification...

And apparently he doesn't know what the common meaning of "FOSS" even is.

"Metaphone3.java was released as part of the Open Refine Package under the BSD license" and then "It does not, contrary to popular belief, automatically declare the algorithm to be public domain, or the software to be FOSS. "

9
kerkeslager 2 hours ago 0 replies      
I'm not a lawyer, so I won't comment on who is right here, but I will say that if your business model relies on the law, you should have a lawyer on speed-dial.
10
Kiro 4 hours ago 2 replies      
I expect to see a Ruby port by the end of this day.
11
matheweis 1 hour ago 1 reply      
It's too bad Groklaw isn't still around; it would have been fun to try to get their take on this.
12
mbillie1 1 hour ago 0 replies      
All I can think of is this: http://steve-yegge.blogspot.co.uk/2010/07/wikileaks-to-leak-...

"They have no right to do this. Open Source does not mean the source is somehow 'open'. That's my code, not theirs. If I make something private, it means that no matter how desperately you need to call it, I should be able to prevent you from doing so, even long after I've gone to the grave."

13
mariuolo 4 hours ago 1 reply      
I think the author has no clue about copyright law.

If he wanted to legally protect his algorithm he should have patented it and given a licence to use it only with his BSD-licensed implementation.

Still, IANAL.

14
ddoolin 1 hour ago 0 replies      
15
payne92 1 hour ago 0 replies      
The legal issues here are relatively straightforward and were touched on by one comment in the GitHub issue thread.

The Java implementation is protected by copyright.

The algorithm itself would be protected by a patent, which he (generally) could file for within a year of publishing.

And the odds of getting that patent (in the US) would be fairly low, given recent Supreme Court rulings.

16
gburt 1 hour ago 0 replies      
He is wrong [0] in his interpretation of copyright and the BSD license, but I think his wishes for his code should count for more than his misunderstanding of the relevant law. Further, I think there is a patent (application?) on M3, which is what he is trying to say with the "Public Domain" stuff [1].

[0] Though, IANAL.

[1] http://www.google.com/patents/US20090043584

17
ris 4 hours ago 0 replies      
Nope. You can't copyright an algorithm. He would have to use patents, and software patents are on highly shaky ground.
18
directionless 2 hours ago 0 replies      
I'm a little disappointed to see the original PR locked due to lphilips54's inconsistent statements.

While I think the OSS community should be polite and inclusive, I also think that we are all poorer if we ignore contributions due to author behavior. I'm confident that many authors have abhorrent political views and actions. While we should not elevate them as role models, there are times it's reasonable to just use the code.

19
facorreia 4 hours ago 0 replies      
The license literally states that redistribution in source form with modification is permitted, provided a few conditions are met.

https://code.google.com/p/google-refine/source/browse/trunk/...

20
Morphling 4 hours ago 1 reply      
Am I missing something obvious here? What would change if this piece of software/algorithm were ported to Ruby? I mean, Rubyists would get to use it, but what else would change? This whole thing seems very strange, as if someone created a cure for a disease, gave it out for free, told everyone how it was made, but insisted on being the one who injects every single patient personally.
21
lotyrin 4 hours ago 0 replies      
He also, some months ago, opened an issue on a Python project to implement the algorithm... but with no text or comments. Just a title: "M3"

https://github.com/coolbutuseless/metaphone3/issues/1

22
meric 4 hours ago 2 replies      
I think the author of the algorithm implementation would have benefited more from a patent than a BSD copyright notice.
23
logfromblammo 2 hours ago 0 replies      
This sounds like a call for the FOSS community to produce a phonetic similarity algorithm that is not encumbered by patents. Metaphone 3 is patent-pending, and the author obviously intends to profit from it.

Porting the code released with BSD license would effectively just be donating the work to a private individual. Be glad that the guy was good enough to warn people ahead of time, instead of submerging a patent submarine and surfacing after someone creates a big payday for him.

The idea itself, to determine phonetic rules from spelling quirks in English, is non-patentable, but the specific rules he formulated may be. Anyone else could spend "thousands of hours" creating their own rules. We already have a few in the public domain, such as the "i before e" rule, where the "ei" in "neighbor and weigh" is phonetically an "a", which implies that "-eigh-" is the spelling pattern, which also holds in "eight" and "neigh".

See? Free head start for FOSS.
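
As a toy illustration of that mechanism (this is not Metaphone, just the general shape of a hand-built spelling-to-sound rule table):

  /* Toy spelling-to-sound rules in the spirit of the "-eigh-" example.
   * Real systems like Metaphone 3 have hundreds of context-sensitive
   * rules; this only shows the mechanism. */
  #include <stdio.h>
  #include <string.h>

  struct rule { const char *pattern; const char *sound; };

  static const struct rule RULES[] = {
      { "eigh", "A" },   /* neighbor, weigh, eight: long "a" */
      { "ph",   "F" },   /* phone: "f" sound */
      { "gh",   ""  },   /* night: silent */
  };

  static void encode(const char *word, char *out)
  {
      size_t i = 0, r;
      out[0] = '\0';
      while (word[i]) {
          for (r = 0; r < sizeof RULES / sizeof RULES[0]; r++) {
              size_t len = strlen(RULES[r].pattern);
              if (strncmp(word + i, RULES[r].pattern, len) == 0) {
                  strcat(out, RULES[r].sound);
                  i += len;
                  break;
              }
          }
          if (r == sizeof RULES / sizeof RULES[0]) {
              strncat(out, &word[i], 1);   /* no rule: copy the letter */
              i++;
          }
      }
  }

  int main(void)
  {
      char buf[64];
      encode("weigh", buf);
      printf("%s\n", buf);   /* prints "wA" */
      return 0;
  }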

24
jacques_chester 3 hours ago 0 replies      
This is why, in matters of law, you should consult a lawyer.

Stuff you remember from TV is not the law.

Half-remembered Slashdot debates are not the law.

The law is the law. It varies from place to place and from year to year. It is large, complicated and subtle because it covers everything humans do, which is a large, complicated and subtle problem domain.

Seriously. If you have a legal question: see a freaking lawyer. A few hundred bucks to change the entire course of your life is a bargain. It's less than some plumbers charge.

25
faragon 3 hours ago 1 reply      
26
XorNot 4 hours ago 0 replies      
At this point he's daring someone to do it, at which point the only question will be who can afford to fight it. But seeing as how algorithms aren't subject to copyright in the first place...
27
slapresta 4 hours ago 0 replies      
I'm pretty sure the algorithm for Metaphone is not theirs, just this particular implementation. Which means there's nothing they can do here.
3
Why switching jobs is almost always a good idea
57 points by alexpotato  2 hours ago   40 comments top 10
1
joelennon 1 hour ago 3 replies      
If you're not happy in your current role, make your employer aware of that fact. If you believe you are underpaid, say it. If you're overworked, say it. I'm not saying you'll necessarily get results but I think people all too often look for the door when what is making them unhappy can probably be resolved where they are. Of course if you're miserable and need a change that's a different story. But remember that if you're good, losing you is going to hit your employer hard. The cost of replacing good people is so high, any good employer will try to resolve any issues you may have in order to keep you.
2
goblin89 1 hour ago 1 reply      
> The first six months of a new job is taken up primarily by learning new systems, procedures, who to talk to etc. [...] in the beginning, you will probably feel a lot less stressed out.

Weird, for me it's the opposite: the most stressful time is when I don't know how things work. Battling inadequate or missing onboarding processes instead of working on the challenges I thought I was hired to solve can be demotivating.

3
steven2012 12 minutes ago 0 replies      
The problem with this mentality is that if you move around TOO much, then people won't want to hire you because they will think, rightfully so, that you won't stick around. I routinely reject candidates whose resumes show 3 or more jobs of 2 years or less.
4
S4M 1 hour ago 1 reply      
The post really makes the OP sound like a headhunter ("don't worry, the grass is always greener somewhere else..."), in which case I would be very wary of his advice.
5
d357r0y3r 1 hour ago 5 replies      
In my current role, I like the company, the product, and my co-workers, but I'm almost positive I could be making 20,000 more a year in the same area. My pay is (I feel) relatively low because I'm a junior software engineer, so I'm torn on whether I should just stick it out and ask for a large raise/promotion in a few months, or put my feelers out.
6
LukeB_UK 1 hour ago 1 reply      
My Dad always said to me that if you ever wake up and realise that you're not enjoying work anymore (or even worse, dreading it) then that's the day you start finding something or somewhere else.
7
lumberjack 1 hour ago 2 replies      
If every "five years of experience Java Swing developer" starts looking around for a better job position, isn't that a bit similar to a sector wide union asking for a raise?
8
jarjoura 49 minutes ago 0 replies      
This is why all the big tech companies give substantial raises in RSUs. It's that carrot on a stick, along with the promise of a promotion always just within reach, that makes job hopping difficult. At least plan to stay with a company for 2 years. It never looks good to have a resume with pages of jobs.
9
ishener 1 hour ago 2 replies      
There is one point that was missed in this post: promotion. Are you more likely to land a promotion in your current job, or are you more likely to find another job that is also a promotion, to a position that you have no experience in?
10
codazzo 1 hour ago 3 replies      
If somebody ever asks me "what does mansplaining mean?" I'm just going to say, "Well you see, it's quite easy. Just read this blog post"

In all seriousness, there was no need for the point in this post to be explained through such exemplary mansplaining.

4
How to Be an Expert in a Changing World
345 points by jayzalowitz  16 hours ago   94 comments top 34
1
sytelus 7 hours ago 7 replies      
I'm not exactly sure what the grand insight(s) from this essay are that warrant its rather link-baity, lavish title. Does the list below capture all the tips and techniques pg mentioned?

1. Don't marry your beliefs. This is always a delicate balancing act, because we do have to have some belief to make a decision; we just don't know if it's the right one.

2. Don't predict the future. Can be derived from #1, so I suppose redundant.

3. Bet on people instead of ideas. I think this is great advice for VCs, but it has been covered many times by pg.

4. Make friends / surround yourself with smart people. Not extremely practical unless you are a famous VC, and even then it's not trivial.

It would have been a great essay if pg had shared some stories and non-trivial insights that make us better at doing the above. BTW, I did get tripped up at this line:

Another trick I've found to protect myself against obsolete beliefs is to focus initially on people rather than ideas.

Ummm... what was the first trick?

2
kordless 15 minutes ago 0 replies      
> The winds of change originate in the unconscious minds of domain experts. If you're sufficiently expert in a field, any weird idea or apparently irrelevant question that occurs to you is ipso facto worth exploring.

I frequently wrangle with challenges related to finding other people who are open to listening and helping me explore these less-than-complete ideas in an unbiased way. It's always an enjoyable moment when someone listens and then adds to the conversation, instead of immediately trying to negate the kernel of the idea based on the fact it's in a chaotic state at the time.

3
KedarMhaswade 2 hours ago 1 reply      
Kahneman attributes a lot of what experts achieve to 'luck', especially when it comes to predicting something. I am a bit surprised that PG does not give 'sheer luck' its fair share.

I am assuming that many (most?) of the startup founders, when they are being interviewed by potential investors, are 'complete strangers'. By focusing subjectively or intuitively on the people rather than objectively on the ideas, investors believe that they mitigate the so-called risk. I am not convinced that by judging people rather than by judging ideas (in a supposedly short amount of time) the chances of succeeding go up, because if we are often wrong judging ideas, what makes us good at judging people, who are ever changing too?

Of course, there is an undeniable 'credit history' part wherein if a 'successful' founder comes back with another idea, many investors are ready to 'shower money' on her/him -- I don't find anything majorly wrong with that attitude, but that alone does not guarantee success, I believe.

4
blrgeek 5 hours ago 1 reply      
Coincidentally, I went to an effectual entrepreneurship workshop by Prof. Sarasvathy yesterday.

Her learnings from talking to 'expert entrepreneurs' are that

1. they do not believe the future can be predicted, esp in the early stages of a new venture

2. they prefer to create the future in areas with high uncertainty by forging partnerships with people they know already, and who are also investing in the same areas

3. they go to the customer at as low an 'acceptable loss' as they can, building as little of the product as they can, investing as little as they can, and getting the initial set of customers as partners.

4. they jump into a fast iteration of execution/sales that helps them discover/create the market, relying on sales as market research.

5. they find pleasant surprises along the way that shape their execution/market/venture and they change to fit the new reality - and they shape that reality along the way

Finding great parallels with PG's essays here. See http://effectuation.org/learn Read also http://www.inc.com/magazine/20110201/how-great-entrepreneurs... and the original paper at http://www.effectuation.org/sites/default/files/documents/wh...

5
nl 15 hours ago 1 reply      
One interesting thing about beliefs in a changing world is how you deal with contradictory beliefs.

I see "beliefs" as an optimisation that avoids having to return to first principles for every single decision.

That speeds things up, but is obviously dangerous.

Sometimes it's possible to have two beliefs that contradict each other. Usually this means they need reevaluating, but sometimes it means there is context in which each is true.

Reasoning in the face of contradictory beliefs is one of the most interesting things about "knowledge", and something that humans can do surprisingly well - at least until we realise we are doing it.

6
UhUhUhUh 23 minutes ago 0 replies      
Talking about non-conforming ideas, I would like to add that what should be of interest is not what changed but what doesn't. And I would like to offer that this idea of a world of accelerated change might in fact be a self-serving, Ptolemaic fantasy of omnipotence. What counts hasn't changed much in centuries. We're still using steam engines to produce electricity. And society is very similar to what it was in the Bronze Age.
7
asimjalis 8 hours ago 1 reply      
@pmarca has an interesting thread on this on Twitter [1]. In Peter Thiel's terminology [2], PG's view is optimistic nondeterminate, which might be fine for a VC but might not be the best choice for a founder.

[1] https://twitter.com/pmarca/status/546533509922160640

[2] http://blakemasters.com/post/23435743973/peter-thiels-cs183-...

8
akrymski 1 hour ago 1 reply      
And yet the truly successful people have gotten to where they are by sticking to their beliefs:

- Warren Buffet has always believed that companies should be valued based on their profits, not their share price performance. Thus he didn't invest in the tech bubble of the 90s, and was forced to under-perform the market significantly for many years whilst everyone thought he was getting too old for the new way of things. Yet he stuck to his beliefs, and it turned out that he was right - the Nasdaq bubble evaporated and he again out-performed other funds in the long term.

- Steve Jobs always stuck to his belief that computing devices for consumers should be fully integrated and beautifully designed. After getting kicked out of Apple and losing the OS war to MS, he could have easily told himself his approach was wrong. But whether he was right or wrong became irrelevant - his belief was so powerful that it was contagious.

- Zuck, Brin, and others have succeeded because they stuck to their beliefs instead of selling out early on.

- Darwin stuck to his unorthodox beliefs until his theories became accepted.

All of these people have/had a belief - a model of the universe which they passionately believed in. Having such a model means that predicting the future becomes possible, even though one can't predict exactly when that future will come, the same way Warren Buffet will never tell you what the market will do tomorrow, yet he certainly has a belief in where things are going long term. Jobs was too early with the Newton, yet his continued belief caused him to strike gold later on with the iPhone.

There is an interesting difference between fundamental beliefs and "expert knowledge". The later constantly needs to adapt to take new information into account, but it is our fundamental beliefs that determine how we interpret that expert knowledge.

For what it's worth, my startup investment belief has always been to invest in companies that are run by great people who have created the best product in a growing market that has profitable competitors. I'll leave it up to the "experts" to speculate on which markets will grow most, or which competitors will win out. If Warren Buffet can't do it, I won't bother either. All that we can really judge, to the best of our ability, is the quality of the product and the team.

9
sitkack 14 hours ago 2 replies      
This reminds me of Arthur C. Clarke's "Hazards of Prophecy"

> When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

and Asimov

> Those people who think they know everything are a great annoyance to those of us who do.

Beliefs, like stereotypes, are shortcuts in cognition. We should always be applying the most powerful tool we have developed in the history of humanity: the scientific method.

10
davemel37 15 hours ago 0 replies      
I am noticing a trend in Paul's essays. The last two were about changing paradigms. The first was about how a second read of a book leads to a different understanding, and this essay is about not letting your previous beliefs close your mind about things that can change.

I like though how he circles back to two fundamentals. One, human nature doesn't really change, and Two, betting on people is a better indicator of the future than betting on ideas.

Probably the biggest mistake people make is looking for confirming evidence instead of using evidence to determine the relative likelihood of competing hypotheses.

Even a large body of confirming evidence doesn't prove anything, because that same evidence could be consistent with other hypotheses you never considered. You need to evaluate the diagnosticity of that evidence. (I.e., a patient's high fever has almost no diagnostic value in determining what is wrong, because so many illnesses are consistent with that evidence.)

Whereas, one shred of evidence can disprove a hypothesis completely. Which is why good doctors rule out things to identify the most likely problem.

These ideas PG is talking about are, IMO, some of the most important things for founders to understand.

Don't look to confirm your theories, look to disprove them. Don't accept lore, challenge everything. Most importantly, always evaluate competing hypotheses to prevent yourself from just trying to confirm what you are looking for.

11
hayksaakian 15 hours ago 5 replies      
I hear many startup people talk about "domain expertise"

I wish someone would write about that.

What is an expert?

What is a domain?

How do I become a domain expert?

Am I an expert at something I take for granted?

I love Paul Graham's essays, but there's a lot of implied logic and jargon. Maybe he should have a few non-startup, non-business people proofread them too?

12
lyricalpolymath 3 hours ago 0 replies      
ASK PG: when you are saying "energetic" people, do you also mean "passionately convinced"?

It's interesting that on one hand you try to avoid the pitfalls of your own beliefs (by smartly adopting meta-beliefs and techniques that sidestep the problem, like believing in change and focusing on people), but at the same time you are suggesting that you somehow accept, or decide to trust, the worldview of "energetic" founders.

You might not mean it, but this connection is somehow untold in the classical narrative of the driven founder: there is a fine line between "energetic" and "passionately convinced", which also suggests that they might have and communicate strong beliefs in their success or their idea. Of course I understand that "strong beliefs" aren't necessarily connected with being an expert, which is your main argument.

PG, where do you stand on this? Do you believe in founders' beliefs? Do founders who passionately believe in something convince you?

I feel very sympathetic to the argument of your essay, for many reasons, and have grown to use the same meta-belief of believing in change; however, instead of sidestepping the problem, I tackle it head on, by incorporating Doubt and Relativism in my decision making. I feel, and strongly believe ;), that Relativism is the cure to many cognitive biases; if you doubt yourself you can't be affected by the Dunning-Kruger effect. I think everybody should train themselves in managing the unavoidable cognitive dissonances of day-to-day life; it would make the world a better place. Relativism is also an important tool for creativity: the moment you doubt your frame of reference, you start seeing the picture outside of the box, and new possibilities open up.

Of course Relativism has its shortcomings, and beliefs, which are rooted in emotions, command our behaviours more powerfully than the logical reasoning of Relativism. However, I believe there are ways for us to be both "passionately convinced" and relativistic at the same time; I try to be like Cezanne, who would ask himself: "is this what I see?"

13
fubarred 2 hours ago 0 replies      
Life is suffering, change, taxes and death.

The meta of Buddhist philosophy (sans religion, spirituality, etc.) appears to be one of the most honest of the major, religion-derived belief systems.

14
whiddershins 2 hours ago 0 replies      
I don't agree that this essay is arguing for indeterminate optimism. I think you can consciously and deliberately shape the future while simultaneously being open to the reality that things are constantly changing.
15
McUsr 2 hours ago 0 replies      
What I got out of this essay, which I found to be good, is that you should investigate your hunches more deliberately before dismissing them. That is, in hindsight (from my own hindsight :)), great wisdom from Paul Graham.
16
vinceguidry 16 hours ago 0 replies      
> If you're sufficiently expert in a field, any weird idea or apparently irrelevant question that occurs to you is ipso facto worth exploring.

My proudest creations all came from these weird ideas. Now I listen to them very carefully and nurture them. They're starting to cohere and turn from random ideas into an ideology. It will be fun to make that real.

17
holri 9 hours ago 1 reply      
Human nature is not static either. Humans are able to think and act very differently. If I look back at what I was thinking and doing 20 years ago, I have to shake my head in astonishment.
18
mbesto 12 hours ago 2 replies      
There's a subtle theme in this essay which describes the thought process of how YC (well, pg specifically) selects teams for YC. Perhaps in a way it brings clarity to what is generally seen as a non-scientific and highly complex line of reasoning for hiring and selecting talent with very limited context. However, I still don't feel like I've fully grokked it.

> Within Y Combinator, when an idea is described as crazy, it's a compliment -- in fact, on average probably a higher compliment than when an idea is described as good.

Yet, outside the walls of YC, sounding crazy means you don't get in.

> But we could tell the founders were earnest, energetic, and independent-minded. (Indeed, almost pathologically so.)

How is it possible to discern this in a YC application?

This is all reminiscent of the talk Gabe Newell, CEO and founder of Valve, gave on the hiring practices at Valve. https://www.youtube.com/watch?v=t8QEOBgLBQU#t=1291 Here he mentions the following (paraphrased): "One of the first programmers of the company was the manager of a waffle house. He was, and still is, one of the most creative people in the industry." So what made them take the risk? What makes Valve turn away every waffle house manager that applies today? What made YC take the risk on Chesky, Gebbia, and Blecharczyk?

19
evanwarfel 15 hours ago 3 replies      
Updating one's beliefs to get more accurate priors is something we should all strive to do. There is no such thing as having two contradictory beliefs of equal weight - that's an illusion created by either applying a different definition to each belief, or a refusal to admit that you believe one more than the other.

...

> "The winds of change originate in the unconscious minds of domain experts."

I would also add that domain expertise is when one's internal model / map matches the territory. When this happens, one's intuition is accurate and one can notice patterns that other people can't, or one can make predictions that turn out to be right.

20
gbog 15 hours ago 0 replies      
Great read. I would go a bit further and say that the goal is not to be an expert, and is not related to start-ups. The goal of being open-minded and sceptical as described in this essay is to make sure one does not sheepishly embrace ideas that will be considered obviously stupid twenty years in the future. For example, in the 50s in Europe most intellectuals were communist or pro-communist. Only a very few were neither reactionary nor communist. The goal, for me, is to be among these few. In US history, it would mean having been, e.g., among the earliest abolitionists.
21
zan2434 8 hours ago 0 replies      
I wonder if being open minded is enough. The "winds of change" aren't always very strong, and accordingly new opportunities aren't (at least initially) better alternatives to the status quo, they just happen to have incredible future potential. To fully take advantage of this, the expert would have to move backwards, quite possibly endangering himself, while a new entrant to the field has no risk to assume by doing so. To use the hill-climbing metaphor described by other commenters, a new climber can begin climbing the new potentially tall mountain immediately, while the "expert" will first have to descend his peak.
22
C2E2 2 hours ago 0 replies      
The more I read these VCs spreading their wisdom, the more I find Silicon Valley to be barely more than a caricature.
23
sidcool 13 hours ago 0 replies      
The first paragraph is a gem. It took me a few readings to understand it in its entirety. Great essay, Paul!

> protecting yourself against obsolete beliefs is exactly what you have to do to succeed as a startup investor

This statement is not only true for startups, but for life in general.

24
ExpiredLink 7 hours ago 0 replies      
"Anyway, the thing about progress is that it always seems greater than it really is." Wittgenstein
25
oldpond 11 hours ago 0 replies      
Beliefs like "JEE is a good thing"? Like "object-oriented development is a good thing"? Like "Portal Server is still relevant"? Yeah, I hit the wall with those all the time. Lots of ego in the way of changing those beliefs, because the big guys have been selling this for the last two decades. How would you like to be the guy trying to convince enterprises that loosely coupled is the way to go when they've been sold on "silos are bad, we need integrated systems"? I feel a little bit like Chicken Little at times. :)
26
DanielBMarkham 16 hours ago 2 replies      
I work on making technology teams deliver faster. It's nothing as complex and tricky as what Paul's doing, but I also get to see lots of teams and make correlations between people and performance.

One of the things I've noticed is that over a long period of time, the more arrogant and certain team members are, the less likely they are to deliver. Folks that are always learning have a tendency to kick ass. Not so much with folks who think they've already arrived. I read in an article on HN a few weeks back that Google calls this "intellectual humility" and looks for it as a hiring trait.

27
andyidsinga 15 hours ago 1 reply      
re this : "if you write about a topic in some fairly durable and public form, you'll find you worry much more about getting things right than most people would in a casual conversation."

i wonder what the bar here is - certainly more than blog posts / self published websites as those are fairly close to casual conversation imho.

glad to see pg picking up the pace on the writing!

28
bane 13 hours ago 3 replies      
Expertise is a weird thing. In today's world, one that changes so rapidly, it's very hard to become an expert in something and have that expertise stay valuable. It used to be that expertise in some field or area would last generations, and guild-like systems of master/apprentice developed in response. Now you can't even make it through a degree program before the things you started to learn about at the beginning become virtually obsolete by the end. Watch the front page of HN and you'll see entire frameworks and languages come and go virtually as quickly as the seasons. When I was starting out, it was fully possible and reasonable to consider becoming an expert in COBOL and one or two kinds of mainframes and make a career of it.

Expertise today requires that you start to think and operate at a higher level of abstraction in order to cover what you know, and where your knowledge might go. Or it might require you to make connections between old ideas that are still valid and new ideas as they appear. You might even be able to change career paths entirely, but things you learned as an expert in your old one might make you special and unique in your new one -- somebody with rare and unique talents. Like automating business processes in a non-technical career.

I've been thinking a lot about beliefs and belief systems recently. I'm not quite sure I have any meaningful or concrete conclusions yet, but I'm pretty sure I'm going down the path of thinking that beliefs should generally be treated and handled with extreme caution, that they're some kind of holdover from an earlier evolutionary stage -- a vestigial kind of proto-thought process wedged somewhere between pure animalistic instinct and enlightened reason.

They aren't inherently bad, just like instincts aren't inherently bad; the instinct to pull your hand off a hot stove is good, for example. Perhaps they're some kind of optimized way of thinking that evolved when we had less hardware to compute with. An estimation system that got us through hundreds of thousands of long cold nights, alone in the world of beasts... when we couldn't quite think through everything. But now we can, and this old wetware technical debt kind of gets in the way now and again.

It tends to make us rigid and lazy in our thinking...why think when believing is less work?

Maybe it's possible to hack our beliefs by believing in the right things: by believing in reason, or believing that things should be changing, believing that we should evaluate and challenge ourselves from time to time, believing that we should stay just on the edge of discomfort and force ourselves to use this new, better, and harder-to-use equipment we've been gifted with.

29
jsonmez 12 hours ago 0 replies      
Have strong convictions, but hold on to them loosely.
30
jcr 14 hours ago 1 reply      
I spotted the following in my usual reading this morning:

>"Until this point, I have been describing the stereotyping process as a negative force for individual and team functioning. However, this process actually stems from an adaptive, often functional psychological process. Mental heuristics and cognitive shortcuts enable us to process information without conscious deliberation: they fill in the informational gaps we often experience when making decisions. In other words, habits of mind help us to save brain power for more difficult tasks. Joseph Pieper's classic "Leisure, the Basis of Culture" sets out this theory well. In the 1980s, people thought we could create expert systems by interviewing experts like brain surgeons or oil exploration specialists, and creating a rule chaining prolog environment that would recreate their decision-making ability. The problem was the experts did not know how they knew what they knew. That is, experts are creating associations between disparate experiences and pieces of knowledge, using the subconscious brain." [1]

The quote above reminded me a bit of the previous "How You Know" essay by Paul Graham [2,3]. At times it's disheartening to realize the discrepancy between how much I read, and how much of it I can actually remember, but it's nice to know at least some of it manages to stay with me somehow.

In this new essay, pg says:

>"When experts are wrong, it's often because they're experts on an earlier version of the world."

Stereotyping and similar classification are essentially heuristic modeling based on past experience. Some substantial part of our mental models are based on "Unknown Knows," or better said, our mental models are based on experiences we cannot consciously recall but still somehow know unconsciously. Even if we keep the models updated to have a "current version of the world" through seeking new experiences, we are still often incapable of either anticipating or handling new surprises.

Maybe people who regularly succeed as startup investors (i.e. anticipating and handling surprising exceptions) have learned to retain some degree or form of suspended disbelief when applying their models to the world?

It's trivially easy to suspend disbelief for the sake of entertainment, like enjoying a fictional story. Intentionally escaping reality for a while is an established habit for most people. On the other hand, intentionally suspending a bias, belief or model in the real world seems to take more effort, but that's possibly due to my own lack of practice.

[1] http://cacm.acm.org/magazines/2014/11/179827-the-data-on-div...

[2] http://paulgraham.com/know.html

[3] https://news.ycombinator.com/item?id=8753526

31
pitchups 11 hours ago 4 replies      
There is an additional attribute that goes a long way toward staying an expert in a changing world: humility, or acknowledging that you do not or cannot know the future. PG hints at this when he says "...admit you have no idea what the right direction is, and try instead to be super sensitive to the winds of change...". It is also paraphrased quite well in the maxim "Stay hungry, stay foolish" popularized by Jobs.

But humility that stems from ignorance is at odds with popular expectations of an expert. The problem is that what worked in the past may not work in the future. Being sensitive to changes in the world around you requires you to momentarily stop being an expert and look at the world through the eyes of a novice or beginner. This is not only hard but sometimes impossible to do because of how we change as we acquire new knowledge. [2]

So the trick is to be somewhat schizophrenic - being able to simultaneously view the world as an expert with the fresh thinking of a beginner or a novice.

[2] https://news.ycombinator.com/item?id=8754334

32
theoh 16 hours ago 1 reply      
The idea that a static world implies monotonically increasing confidence in your beliefs is really shaky, logically. Hill-climbing in a landscape of belief-fitness could easily arrive at a local maximum. To further improve your beliefs you might need to become less certain.

Apart from that, how can something (confidence) increase in a static world? PG has clearly tossed this essay off without serious thought or discussion. A waste of time.

33
te_platt 13 hours ago 0 replies      
In the context of this article I think it's important to make a distinction between an expert and a specialist. I think of an expert as someone who almost always gets things right. For example, an expert pianist plays great music, but it seems a bit off to think of an expert meteorologist. Weather is so variable that it's just not realistic to expect anyone to get forecasts as accurate as a pianist at a keyboard. But it makes good sense that a specialist will be able to do better than most. I would change "When experts are wrong, it's often because they're experts on an earlier version of the world." to "When specialists get things wrong, it's usually because the world is so complex."

So while I agree that it's important to protect against obsolete beliefs, it's also important to consider when a belief is really wrong and when it was just overloaded with confounding events.

34
soup10 7 hours ago 2 replies      
What a dumb assertion that someone can be an expert on changing the world. Paul Graham has lived in the startup-world bubble for too long and is a bit off his rocker when it comes to having perspective on the larger picture of changing industries and global trends and dynamics. The next big thing is not some social media garbage app for sharing cat pictures. That much is clear. Take a look at the kind of low-impact garbage YC specializes in funding to get a clear look at how Paul Graham views successful startup ideas and teams. It needs no reminder that the vast majority of YC companies fail, have no impact, dissolve, get acquired by other teams, or if they are lucky get a talent acquisition by a bigger fish with a bloated mergers and acquisitions budget. My advice to Paul: write essays about what you KNOW, not what you feel like rambling on about. You have lots of experience in the world of seed staging and getting small teams to make the leap into legitimate, hard-working business entities with growth potential. You don't have a ton of wisdom to share outside of that domain, so stop acting like you do.
5
Realtime Embedded: LEON3, OpenRISC 1200, Nios II, MicroBlaze
13 points by noselasd  2 hours ago   discuss
6
Page load fail makes it difficult to cook cornbread in the woodstove
212 points by JoshTriplett  15 hours ago   55 comments top 14
1
geofft 14 hours ago 2 replies      
http://joey.hess.usesthis.com/

"This place is nicely remote, and off the grid, relying on solar power. I only get 50 amp-hours of juice on a sunny day, and often less than 15 amp-hours on a bad day. So the whole house runs on 12 volt DC power to avoid the overhead of an inverter; my laptop is powered through a succession of cheap vehicle power adapters, and my home server runs on 5 volt power provided by a USB adapter.

"When power is low, I often hack in the evenings by lantern light"

2
watt 4 hours ago 1 reply      
This is indeed the pathological case of the awful pattern of modal notifications stealing focus. Here the "notification" that the connection has failed actually destroys the whole page you have been working on (by replacing the content you care about with some message you couldn't care less about).

The browser (or application, or desktop environment) should be smart enough to enqueue such notifications and present them via some facility that respects the user's task and only shows the message via some non-focus-stealing approach, such as Growl.

The bug means loss of user information. We as developers should be treating the user's information as sacrosanct. For example, never present the question "Do you want to save?"; instead, save everything implicitly. Don't steal focus to ask for confirmation; instead offer unobtrusive undo. Allow the user to delete when he wishes to. (Like Gmail does.) Never lose user-entered information. Never destroy the user's flow. Never destroy the user's view (workspace).

3
knodi123 29 minutes ago 0 replies      
I've seen that problem in Chrome before with images. I was trying to read a massive flowchart; the part I was interested in was at the top, and I was busy tracing it with my finger and getting what I needed, when the entire image vanished and was replaced with a placeholder because it timed out while downloading.

This is not just a problem with HTML renderers; JPEG renderers do it too.

4
patronagezero 15 hours ago 7 replies      
Only crazy people put sugar in cornbread:

10" round (iron) skillet

2 eggs

1 cup milk

1/4 cup cooking oil

3/4 teaspoon salt

4 teaspoons baking powder (not baking soda)

1 cup yellow cornmeal

1 cup unbleached white flour (or 1/2 whole wheat + 1/2 white)

Preheat oven to 400 degrees: In a large bowl, beat together the eggs, milk, oil and salt until well blended. Sift in the baking powder and whisk until foamy. Quickly mix in the cornmeal and flour. Beat until the batter is smooth. Pour into an oiled 10" round cast iron skillet. Bake for 20-25 minutes, or until a knife inserted in the center comes out clean. (Okay to freeze the leftovers.)

Once cooled, crumble into bowl, add milk for a breakfast-type cereal.

5
dezgeg 14 hours ago 0 replies      
Another classic funny bug report, this time from Linus Torvalds to Fedora's flash plugin: https://bugzilla.redhat.com/show_bug.cgi?id=439858

  Description of problem:
  youtube no workee - fedora 9 not usable for wife

  How reproducible:
  I didn't try a lot of videos, but I couldn't find a single one
  that actually worked. And what's the internet without the rick-roll?

6
jerf 10 hours ago 1 reply      
Not to distract from the cornbread recipes (which, in all seriousness, actually led me down some interesting links to some things I intend to have a closer look at later), but what is the mechanism behind the described failure of the browser? Is it when the original page doesn't fully load? Is it a certain type of resource that half-loads? It's happened to me at a couple of very inconvenient times (though not quite to that level) and I'd be interested in any possible mitigation strategies.
7
weinzierl 11 hours ago 2 replies      
Mobile Safari on iOS behaves similarly. Trying to scroll, I sometimes click on a link accidentally. When the network is slow, going back to the original page does a reload, which takes ages or even fails.
8
Stratoscope 13 hours ago 0 replies      
While we're sharing cornbread recipes, here's mine.

This is a different kind of cornbread, not highly sweetened, low in fat and high in fiber, but very tasty.

Utensils:

9.5" Pyrex pie pan (a fairly deep one, not one of the really shallow ones)

Two large bowls

Wire whisk

Dry ingredients:

1 cup cornmeal, or half cornmeal and half polenta (a coarser cornmeal)

1 cup oat bran

1 Tbsp baking powder

1/4 to 1/2 tsp salt

Wet ingredients:

1/2 cup plain nonfat yogurt (I use Trader Joe's French Village, or Nancy's plain nonfat - it's the same yogurt under either name)

1/2 cup orange or apple juice

1/2 cup unsweetened applesauce

1 large egg or two small

Other ingredients:

Butter to grease the pie pan

Process:

Preheat oven to 400 degrees.

Generously grease the pie pan with butter.

Mix dry ingredients in a bowl with the wire whisk.

Mix wet ingredients in the other bowl (a large one) with the wire whisk.

Pour dry ingredients into wet and mix together.

Pour the batter into the pie pan.

Bake for 30 minutes.

Serve hot, with maple syrup or good quality (not too sweet) jam or fruit spread.

9
codefisher 4 hours ago 0 replies      
Besides all the jokes - which are funny - this is something I posted a bug about before. When working on slow, unreliable networks this can happen: you end up reading half the page, and then it tells you the page can't be displayed.
10
zhte415 10 hours ago 0 replies      
A cornbread recipe without eggs, milk or sugar, very wholesome:

400g cornflour

200g wheat flour

250ml water (add gradually, moisture depends on relative humidity)

Teaspoon of baking powder

Teaspoon of salt

Mix.

Form into balls.

Heat an iron pan, gently. Press balls into desired shape and place on pan.

Cook, covered. If uncovered, turn to roast both sides.

11
ikawe 10 hours ago 1 reply      
Ha! I knew this had to be Joeyh before reading the article.
12
thrownaway2424 13 hours ago 4 replies      
FWIW, the statement about the superiority of Chromium does not match my experience. I certainly recall reading a page on mobile Chrome (iOS) under poor radio coverage and having the whole page replaced after a few minutes with an error about the page not being available.
13
danellis 7 hours ago 0 replies      
Suggestion: turn off "call waiting".
14
andrewliebchen 8 hours ago 0 replies      
If you don't bake your cornbread in a cast iron skillet, you're Doing It Wrong.
7
Ask HN: Considering leaving my job at a startup? What should I consider?
10 points by mynewtossaway  55 minutes ago   6 comments top 6
1
thepoet 1 minute ago 0 replies      
I was in a similar position, but I was getting bored doing the same thing again and again. Also the job was intense, so I would end up thinking about the startup all the time, with no time left for personal projects. I have decided to take a break of a couple of years to learn new stuff, experiment with some personal projects I always wanted to create, and figure out what I would like to do later. Just ensure that you have a fair bit of runway so that if any of your projects gets serious you can pursue it to its end.
2
bane 6 minutes ago 0 replies      
If you can do it, take some time off and see if you can take a small project from inception to completion. It's harder than it seems, especially as a solo person, but you'll learn a lot from it. Spend some time trying to get users, to learn about marketing and sales. Even if you don't succeed, those skills (full product life-cycle, marketing and sales) become very marketable resume points.

I'm also of the opinion that it's good to spend some time in the trenches at a big tech company to better appreciate the differences between small startups and mega-corps (and, if you can, something in between). It's one thing to read about it and get an idea; it's another thing to experience it and understand better what the pros/cons of each environment are. If you end up at another growing startup, this experience will better help you understand what's happening to the organization as it grows. Too many founders end up with a growing organization and don't understand what a big shop looks like or how it functions, and end up dragging the potential of the company down around them.

Both of those experiences can become very valuable to any future career direction you choose: small startup, on your own, big software house. If you're missing one or two of those, it's hard to say you have a complete picture.

Alternately, 6 months is really not a lot of time to learn and observe. Startups always go through lots of rapid changes, but seeing some of those through to completion (or stagnation) can be very informative. You're young enough that another year or so there won't kill you and won't prevent you from pursuing anything you want after that. Only you'll have more perspective and experience to work off of.

3
websitescenes 13 minutes ago 0 replies      
If you lack the motivation to work on your project after work and you typically lose steam after a few months, I think leaving a steady job would be the worst thing you could do. The cited obstacles seem more like excuses than things legitimately standing in your way. Get excited about something, then take the plunge.
4
Feeble 6 minutes ago 0 replies      
Six months may feel like a long time right now, but it really is not. I would stay at the current position and get the equity (no matter how small). Use this time to prepare your startup; getting a startup to revenue is a lot more than just coding. You can start to educate yourself about how to best set up a company, and talk to potential customers (email or phone). If you think this is not very interesting to do, then I suggest not starting a company at all.

If you still feel the same after this time then take the plunge.

Reference: Worked with a lot of startups.

5
BillyParadise 14 minutes ago 0 replies      
The best time to take risks is when you don't have any responsibilities, so in that sense, you're at the right time of your life to do this. On the other hand, if you're good at starting things but not finishing them, then why not make a pledge to yourself before you strike out on your own: Get an MVP version of your new business up and going first.

If you don't try, you'll never have the possibility for success. But stack the odds in your favour first.

6
w1ntermute 13 minutes ago 0 replies      
This post, also currently on the HN front page, seems relevant: https://news.ycombinator.com/item?id=8779799
8
How to Design Programs, Second Edition
179 points by olalonde  16 hours ago   33 comments top 14
1
nixpulvis 14 hours ago 2 replies      
As a student of NEU, I personally swear by this resource. Having been a part of the staff of the course for 3 years now, I can say that students love to bitch about Racket at the start, but few curse its name by the end.

I wish more people were as excited by Racket as I am.

2
macco 6 hours ago 0 replies      
Great book. Especially the prelude is a great tutorial.

The book is best used in conjunction with Coursera's https://www.coursera.org/course/programdesign

3
mb_72 10 hours ago 3 replies      
Quote from that page: "Good programming requires thought, but everyone can do it and everyone can experience the extreme satisfaction that comes with it."

That contrasts directly with my opinion after 20 years of development experience working on applications across a number of industries and in teams of 1 to 100. Not 'all' programmers can be or are good, and this is definitely not true for 'all people' in general.

After such a start, why then should I read further? I'm genuinely interested to know the answer.

4
vog 5 hours ago 0 replies      
I especially like their introduction on how to create a web service, using continuations to handle sessions, a technique also used in "Beating the Averages":

http://docs.racket-lang.org/more/

5
arkx 2 hours ago 0 replies      
6
howardlykim 14 hours ago 0 replies      
This is an invaluable resource (next to McConnell's Code Complete) that has influenced the way I program even to this day. Happy to see that my old university is still using HtDP as its curriculum's primary guidance.
7
ExpiredLink 2 hours ago 1 reply      
If you know how to design real-world programs after having read this book you must know it from somewhere else.
8
ekr 8 hours ago 1 reply      
Does anyone know what software was used to generate the book? I assume, because of the CC license, the source might be in some GitHub repo, but I was unable to find it.

LE: Thanks.

9
Pewqazz 14 hours ago 0 replies      
UBC's introductory CS course (CPSC 110) uses HtDP/2e as its "textbook". Given the progress that the majority of students make throughout the term, I think it's quite successful in terms of teaching the fundamentals, despite not being based on a more "conventional" language like Python or Java.
10
Scarbutt 11 hours ago 2 replies      
Can someone from the Racket community share how this book compares to Realm of Racket?
11
swedev 7 hours ago 0 replies      
Sounds interesting. Does anyone know if it is available as pdf? I only found the first edition as pdf.
12
woah 12 hours ago 0 replies      
You're telling me there are no books on artistic techniques?
13
nstott 14 hours ago 0 replies      
I just stumbled on this tonight as well, I'm happy it's still active.
14
ExpiredLink 7 hours ago 1 reply      
> We assume few prerequisites: arithmetic, a tiny bit of middle school algebra

... and Lisp :(

9
Gallery of Processor Cache Effects
86 points by arunc  12 hours ago   7 comments top 4
1
joseraul 2 hours ago 0 replies      
Most examples are classical cache effects, but the last one is such a puzzle.

  A++; C++; E++; G++;   448 ms
  A++; C++;             518 ms
How can incrementing 2 variables be slower than incrementing 4 variables?
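
(If you want to poke at this yourself: a rough harness, assuming the counters sit near each other in memory as in the article. The loop count and layout are my guesses, not the article's actual code, and the effect is CPU-specific.)

    /* Sketch of a harness for the timings quoted above. */
    #include <stdio.h>
    #include <time.h>

    volatile int A, C, E, G; /* volatile so the increments aren't optimized away */

    int main(void) {
        const long N = 200000000L;
        clock_t t0 = clock();
        for (long i = 0; i < N; i++) { A++; C++; E++; G++; }
        clock_t t1 = clock();
        for (long i = 0; i < N; i++) { A++; C++; }
        clock_t t2 = clock();
        printf("A C E G: %ld ms\n", (long)(t1 - t0) * 1000 / CLOCKS_PER_SEC);
        printf("A C    : %ld ms\n", (long)(t2 - t1) * 1000 / CLOCKS_PER_SEC);
        return 0;
    }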

2
WayneS 4 hours ago 1 reply      
I like to combine the graphs into one picture, and then another processor structure appears in the picture: https://dl.dropboxusercontent.com/u/4893/mem_lat3.jpg

This graph shows the memory latency for a linked list walk that strides across a certain size buffer in memory. I used to use this picture for interviews and ask people to explain as much as they can about the processor from the picture. It even works for people who know nothing about processor architecture, as I can walk them through what it says and see how they think and react to new information.
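
(A minimal pointer-chasing sketch of the kind of measurement behind that graph; the buffer size and stride here are values I picked for illustration, not the actual setup used for the picture.)

    /* Walk a linked list laid out with a fixed stride through a buffer.
       Each hop is a dependent load, so loop time approximates memory latency. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        size_t bytes = 64 * 1024 * 1024;  /* sweep this to trace out the curve */
        size_t stride = 64;               /* one cache line */
        size_t n = bytes / sizeof(void *);
        size_t step = stride / sizeof(void *);
        void **buf = malloc(n * sizeof(void *));
        if (!buf) return 1;
        for (size_t i = 0; i < n; i++)    /* element i points `step` ahead */
            buf[i] = &buf[(i + step) % n];
        void **p = buf;
        long hops = 100000000L;
        clock_t t0 = clock();
        for (long i = 0; i < hops; i++)
            p = *p;
        double ns = (double)(clock() - t0) / CLOCKS_PER_SEC * 1e9 / hops;
        printf("avg latency: %.1f ns (end=%p)\n", ns, (void *)p);
        free(buf);
        return 0;
    }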

3
raverbashing 7 hours ago 2 replies      
Very interesting (but not exactly news)

But I suppose in this "modern world" most people forget how their processors work

4
hintss 9 hours ago 0 replies      
Not quite processor cache effects, but Duff's Device is pretty cool too
10
Scans of North Korean IP Space
339 points by djcapelis  23 hours ago   95 comments top 23
1
deanclatworthy 19 hours ago 0 replies      
Fascinating. Has anyone ever penetrated the NK intranet via an internet-facing machine, to do a thorough analysis? I've read a few articles [1] but never a detailed analysis of what's available.

[1] http://www.fastcolabs.com/3036049/what-its-like-to-use-north...

2
jedberg 19 hours ago 5 replies      
So they seem to have at least some commercial software and hardware made by American companies. Given that the US cannot trade with NK, that means either 1) the companies broke the embargo, or 2) they bought through a 3rd party, or 3) the software is pirated/stolen.

I'll assume #1 isn't true since it would be stupid for those companies to do that for so little money.

#2 has interesting implications about trade embargoes. Unless everyone in the world participates, it seems like all an embargo does is add complexity and middle-men to the transaction. For example, if they legally acquired the software and hardware through a Chinese or Russian reseller, then all that happened was the Chinese or Russians took a cut.

#3 interests me because what happens there? Ok, they are using clearly stolen software, now what? Are there any consequences?

3
gwern 18 hours ago 1 reply      
The generally old-school services available and minimal turnover suggest to me that the official IP space is entirely controlled by a few NK government entities (maybe one of the universities?) and the real NK IP space is dispersed among Chinese allocations/assignments. Is there any way to know how representative these results are of overall NK Internet usage?
4
totony 22 hours ago 1 reply      
Despite the controversial topic, I think it is interesting to see what one can conclude about a country from freely available information (even though the nmap'ing might have been illegal, I'm not sure about laws regarding nmap anymore).
5
kbuck 10 hours ago 0 replies      
Small correction: VMware authd runs on the host machine, not the guest. That's actually a Windows machine running VMware Workstation.
6
symlinkk 17 hours ago 1 reply      
I don't really see how this tells us anything interesting. You would see pretty similar results no matter where you scanned, with the exception of the Red Star OS stuff.
7
driverdan 20 hours ago 3 replies      
Anyone have an idea how much bandwidth NK has? How easy would it be for a large botnet to DDoS the whole country?
8
JonnieCache 22 hours ago 1 reply      
Kudos for resisting the temptation to login to that macbook's VNC server. Or at least, kudos for not telling us about it.
9
internetisthesh 19 hours ago 2 replies      
My webpage gets a few visits from NK every week. A bit curious whether this is common. Anyone else seeing this in their logs?
10
jmnicolas 21 hours ago 2 replies      
I was surprised they're using Cisco. Some Chinese hardware (Huawei?) would make more sense: both are back-doored, but at least the Chinese are kind of allies.
11
grobinson 18 hours ago 3 replies      
Seeing as North Korea only has 3 allocated address blocks, 175.45.176.0/22, 210.52.109.0/24 and 77.94.35.0/24, they only have approx. 1530 globally reachable IP addresses. However, North Korea must have more than ~1530 hosts. Does this mean that they use some kind of NAT, or is their number of internet connected hosts just that small?

Is there any information about the intranet in North Korea? Do they have a private class A network that everyone in the country is connected to with their own DNS servers, routers, etc which are unreachable from the rest of the internet?
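
(For reference, the subnet arithmetic behind that ~1530 figure:)

    2^(32-22) + 2 * 2^(32-24) = 1024 + 512 = 1536 addresses in total
    1536 - 3 * 2 (one network + one broadcast address per block) = 1530 usable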

12
chubot 21 hours ago 6 replies      
What are some good books/resources on things like "allocated" and "assigned" IP addresses? i.e. Internet governance, and IP in general? Where is he getting the data like: "inetnum: 175.45.176.0 - 175.45.179.255 ..."?

Also are there tools that take a list of services on ports and map it to likely hardware/OS?

I have been programming for a long time but somehow I missed out on this kind of networking knowledge. Are most people who know this stuff network engineers?

13
richardkeller 11 hours ago 0 replies      
The author notes that they picked up a MacBook Air during one of the scans. Probably unrelated, but interesting nevertheless, is that Kim has been seen using Apple products [1], specifically an iMac. Perhaps the author came across Kim's own notebook?

[1] http://www.telegraph.co.uk/technology/apple/10619703/North-K...

14
j2kun 19 hours ago 0 replies      
I was surprised to see a hit from the DPRK on my blog about math and programming. I wonder what the reasons were, though chances are it was an irrelevant search hit.
15
sysk 15 hours ago 0 replies      
As a side note, I recently learned that it was possible to scan the whole Internet in a few hours on a regular connection: https://www.youtube.com/watch?v=UOWexFaRylM
16
imperialdrive 21 hours ago 1 reply      
Fantastic read - an amazing amount of thought went into taking on this research, and it's fascinating to follow. I just started using nmap this year, and now I'm tempted to perform similar wide scans. I'm curious how you managed to keep your IP from being blocked? Or did you use a different EC2 instance each time?
17
alexivanovs 22 hours ago 1 reply      
It seems strange that the author invites us to do some searching through the findings, but really, he has already given away most of what you can find...

EDIT: where did I imply that this is about SONY? Have any of you who commented back on this actually checked the findings? They're yearly dated records; it seems very hard to believe that he only observed them prior to writing his piece.

P.S. - I do think it's a very good technical report, though I don't recall saying it's not.

18
dylanerichards 16 hours ago 0 replies      
Redirect
19
berberous 21 hours ago 3 replies      
The general population doesn't know or give a shit about the torture report. The educated don't really give a shit beyond shaking their head while reading the report in the Times, or posting a link on their FB saying that it is 'shameful.' Sad, but true.

We've known about these practices for years. The Abu Ghraib scandal was 11 fucking years ago. We've known about waterboarding and Guantanamo for years as well.

All of which is to say, I think if you believe that the U.S. government needs to create a false flag operation to bury the report, you are seriously out of touch with the political reality. Public apathy will bury it for them.

20
gojomo 19 hours ago 0 replies      
Be cybercareful! You may have just cyberstarted a cyberwar!

Also, note that when the next forensic analysis of some hack occurs, the scanning IPs have now "communicated with IPs associated with North Korea". So any future activity of your IPs may be attributed to NK, by the FBI/etc.

21
ll123 21 hours ago 0 replies      
Countdown until North Korea starts a nuclear war with us after a vigilante counter-hacks them
22
hspak 22 hours ago 1 reply      
Why is this Fishy? I suspect the author didn't feel comfortable dealing with this controversial topic on his main account so he made a throwaway.
23
billions 21 hours ago 4 replies      
There is no way North Korea had the sophistication to hack SONY. Hacking requires knowledge of the latest security vulnerabilities. It's impossible to develop good hackers on such a censored network.
11
Bad code isn't technical debt, it's an unhedged call option
190 points by yummyfajitas  23 hours ago   78 comments top 25
1
fcbrooklyn 19 hours ago 2 replies      
Technical debt is actually a really good analogy, precisely because non-technical finance people understand the tradeoff pretty intuitively. Sometimes incurring debt is the right thing to do, but you'll want to pay it down or you'll have to live with the interest. This is true of both monetary and technical debt. For extra credit, you might assign notional interest rates to your technical debt... if they're in the double digits I promise you they'll scare the finance guy.
2
angersock 19 hours ago 5 replies      
God, I want to short my codebase so hard.

What would the engineering equivalent of that be, I wonder?

Quitting and getting hired back as a consultant?

3
michaelochurch 3 hours ago 1 reply      
Bad code is only one kind of "technical debt" and it's the worst kind. Technical debt could also be choosing not to add a feature. If you document the shortfall (say, your system won't scale beyond a single-node database, because you aren't going to need that for at least a year) and why it is there, you can mitigate the damage and make it possible for future users and maintainers to figure out what's going on.

Code quality problems are, as expressed in the OP, an unknown unknown. It's rare that anyone has a handle on just how bad things are, until it's too late. Also, there are sociological issues with legacy rescue projects that businesspeople, who'd rather think of programmers as fairly interchangeable, don't like to contend with: maintenance is only effective if done by high-quality programmers, but it's expensive and often hard as hell to motivate good people to do it. A pat on the back and a 10% bonus won't do it, because it doesn't come close to offsetting the career costs of doing undesirable work.

There is an apocryphal story about a trader buying chocolate santa futures and forgetting to sell them on. Eventually a truckload turned up at the Wall Street headquarters.

This has little to do with the subject itself, but this has actually happened, sort-of. Futures contracts usually don't include Wall Street as a delivery location, so it's not like the trader ends up taking physical delivery at the workplace. The contract will specify where one must make or take delivery, and most delivery locations are in the Midwest or Plains.

When a trader fucks up and has to make or take delivery, there are services that handle it, but it's expensive. If you get stuck taking delivery of, say, $500,000 worth of butter, you'll probably have to pay the agency 50 cents on the dollar, or $250,000, to handle the task of getting your surplus butter to a wholesaler. It hurts pretty bad.

4
pavlov 19 hours ago 2 replies      
Maybe metaphors are like short-selling: unlimited future downside in exchange for potential quick gains.
5
meesterdude 18 hours ago 2 replies      
While I agree with the overall message in this article, "unhedged call option" is not a phrase most people will get. "technical debt" is much easier to understand. "technical gambling" might be better, though i realize not as accurate as "call option".

the term "technical debt" is fairly clear; "technical credit" or 'technical lending" is fuzzier but also more descriptive. Maybe the compromise is "technical debt with interest"

actually, "technical trading" might be a good contender too. You might make out in the long run, or crash and burn. It's all about trade-offs anyway; not writing tests now means future you will have to write them. Sometimes this is fine, sometimes this is a truly horrible idea, and sometimes it could go either way.

6
snarfy 1 hour ago 1 reply      
I still like shoddy construction/shanty village as an analogy. People know what it is and it is tangible unlike debt or any other financial reference.

You can build a shanty village without a plan that supports a million people, but the first fire, storm, earthquake, etc destroys the whole thing. Also, for some reason when Bob flushes his toilet, the power goes out briefly in the capitol building. Nobody knows why, but routing power through the sewer last sprint to save time probably wasn't a good idea.

7
gfodor 10 hours ago 0 replies      
Mapping other basic option trades:

Selling a naked put - Integrating a 3rd party library for a feature vs building it yourself. Immediate benefits, and probably unlikely to go sideways, but if the framework turns out to suck you suddenly incur a large unexpected body of pain. However, unlike a naked call, the downside is understood and limited to the functionality the library provides.

Buying a put - Building in any kind of protection from "very low probability but high damage" events, such as provisioning a completely separate infrastructure in the case of a massive DC or network outage, or having a completely separate monitoring system in case the first goes down in the middle of fighting a separate fire.

Buying a call - Basically anytime you make an engineering bet that costs a bit of time and is unlikely to pan out but if it does, you win big. Like a spike to try some state of the art algorithm or bringing on a short term consultant to solve a really hard problem. If your whole team is always doing these in the long run you lose, but strategically doing these when they make sense in the long run can result in huge gains.

Selling a covered call - Focusing on consulting services vs building a product. Steady income, but in the unlikely case you build something that strikes gold, it won't be you who becomes rich overnight, it's the person who paid you to build it.

8
brudgers 16 hours ago 0 replies      
The article ignores the risk of avoiding technical debt, YNGNI, where the `it` you're not gonna need is the implementation of some set of future-proof architectural features. Or as Knuth put it:

Premature optimization is the root of all evil (or at least most of it) in programming.

The problem of making something worth maintaining has priority...and the software upon which it depends is often a second or third order priority. Facebook was built on PHP. That using PHP created technical debt was a nice problem to have on the way to the bank. None of which is to say that writing bad software is ok. It is to say that bad software includes software that wastes time trying to anticipate and solve the wrong problems at the wrong time for the sake of an ideal rather than current business needs.

9
overgard 17 hours ago 4 replies      
The problem with the technical debt concept is that it implies there's this debt-free way of doing things. Like there's some sort of platonic form of "good" code that everything must aspire to, and good teams write code that doesn't have debt.

What I've actually seen in practice is that "technical debt" mostly means "things I wouldn't have written that way." It has very little to do with the code and more to do with the philosophical leanings of the commenter.

10
dropit_sphere 19 hours ago 0 replies      
This is actually a worthy terminology switch, capturing the uncertainty of software development.
11
markbnj 14 hours ago 0 replies      
While I appreciate the interesting explanation of a naked call, I'm sorry to say that the category of good metaphors pretty much excludes any that require a lengthy exposition to understand.
12
dragonwriter 18 hours ago 0 replies      
It may be a more accurate analogy, but it's a less useful metaphor for most audiences. Most people have a useful intuition about debt that the "technical debt" metaphor leverages.

Comparatively very few people have a useful intuition about "unhedged call options" that using it as a metaphor for poor code quality would leverage.

Also, I think predictable ongoing support cost is a big result of poor code quality in production systems, so that aspect of the debt metaphor isn't completely off-base (there are also unpredictable potential costs in the future as well, so it's not a perfect analogy).

13
gojomo 18 hours ago 0 replies      
There's something to this (the emphasis on the optionality of 'technical debt') but it's not quite 'unhedged'. There's not an 'unlimited downside', as there can be when writing a call on an asset that could appreciate boundlessly.

With tech projects there's almost always a de facto "abandon" option, if not for the firm then at least for the individual engineers. So whenever the cost of proceeding is higher than the expected value, you exit. That clips the downside, more like some sort of combined option position.

Or more like a specific kind of debt: nonrecourse loans secured by collateral. You get the money up front, but if it proves impossible to repay, you simply surrender the collateral. In this case, the collateral is the project itself: either the option to continue, or the IP rights, or the enclosing firm. And, in the event of failure, those may be worth nothing, so you aren't losing an unbounded amount.

For monetary debt, these fragmentary artifacts may in fact be surrendered to the creditors. In the case of metaphorical technical debt, you surrender up those hopes and dreams and mental (sunk) costs, to the reality that there won't ever be the time and budget to fix the system.

And so this leads to a different conclusion than the article, which ends with a near-religious stance against the inflexible evil of debt. Because the downside (project abandonment) is capped, sometimes technical debt is worth taking on, when acceleration-to-market (and thus market-feedback-learning) is of paramount importance. You're borrowing from the future, but you only pay it back (with interest) if there's wild success. If you fail for any other reason (perhaps things having nothing to do with the technical debt), you don't have to pay it off, you just surrender the (essentially worthless) collateral.

That's a hard thing for people with an aesthetic or craftsperson mentality to accept. And it still sucks when it's the technical debt itself (the cost of fixing old rushed choices) that occasionally makes a project no longer competitively viable. Your monetary credit report is unblemished, but your self-conception can take a hit.

14
ThisIBereave 18 hours ago 3 replies      
Maybe this works for people in finance, but I can't see it being useful otherwise. The concept you're using as an analogy just isn't widely known.
15
mattvanhorn 19 hours ago 0 replies      
I worked at a place where the VPE would talk about technical debt as a mortgage. Yet they had a culture that abhorred paying down the debt through refactoring, and so it was more like one of those interest-only payment mortgages. They paid higher interest in the form of reduced productivity and speed, and ended up with the same amount of debt as when they started.
16
grandalf 17 hours ago 1 reply      
Not all code debt contains massive downsides. The key to using technical debt wisely is understanding the difference, and architecting a system that is loosely coupled to allow strategic debt to be taken on in places where it makes sense.
17
radiowave 19 hours ago 0 replies      
I've always liked this analogy in preference to the standard notion of debt, but if we find ourselves having to explain the analogy, is it such a good analogy in the first place?

So I think it needs to be judged in context - who are we trying to communicate with?

18
jerguismi 19 hours ago 2 replies      
"Debt is predictable and can be managed, its just another tool."

Well, not really. For example, if you took USD-based debt in Russia, you are now quite royally fucked.

19
eCa 17 hours ago 0 replies      
Yeah, and if they worked for Lehman they're gonna love the unhedged call option. It's exciting.
20
abathur 9 hours ago 0 replies      
I wish the dominant metaphors weren't so business-y--they end up framing the discussion in a way that we're stuck talking about what other business-y metaphors would be better.

Explaining that doing things "bad" creates "technical debt" or "unhedged call options" I think sets us up for not being taken all that seriously because these things themselves are abstractions that we have to work hard to reason about effectively.

Is it more useful if we say something tangible--that programming is like working on a house? Imagine you just started working on updating a house last month. The owner comes to you with a request that they be able to see what's going on in the kitchen from the living room; you _could_ just knock a hole in the wall. That would satisfy the basic requirements, but it's going to make a mess, it's going to look like shit, and it may disrupt the electricity/plumbing/hvac or even the stability of the house itself. But even if we make a proper window or door between the rooms without disrupting other systems in the house and clean up our mess, it's still possible we make a door or window that is a little different than all of the other doors and windows the past builders have put in the house.

In 2 or 10 or 20 years, after several maintainers have come and gone, the owner discovers that they're being bled dry by the house's energy inefficiency. The new maintainer may go measure a window twice and get the dimensions down precisely and then count all of the windows in the house that look the same. Then they go place an order for N new energy-efficient windows. When the windows arrive and they're getting ready to replace them, they come to the terrifying realization that every window in the house is actually custom, and only a few of the windows will fit. They install the windows that fit and try to return the others only to find out the vendor charges a healthy restocking fee. Down 40% of our budget for replacing the windows but only having actually replaced 15% of them, we no longer have enough money to replace the rest. The owner decides to just use the remaining 60% to cover the increased energy costs instead of fixing the issues.

The new maintainer goes to put the diagonal blinds back up on the windows that did get replaced and discovers the custom mounting hardware has gone missing in the shuffle, so the blinds get left in the corner with a note that someone needs to find a way to put them back up when there is time and money. It's the middle of the winter anyways; the sun is always under the horizon when the owners are home. Because this is a _funhouse_, these were of course the blinds with the greatest role in reflecting summer heat, and despite the improved window technology, total energy efficiency is going to be in the shitter come summertime. For now, perhaps quixotically, energy efficiency is buoyed by the extra solar heat; the owner seems happy enough with the updates, and the metrics all show good performance.

Houses need maintenance. Things on or in them break and can't be fixed because either the parts aren't made that way anymore or the knowledge of how to fix it has eroded. They end up with empty telephone nooks and doors that were sheet-rocked over instead of being removed. A major appliance stops working and you discover the house needs infrastructure updates before a contemporary version of the same can be installed. The maintainers want to add a floor and some new rooms and they need an architect and an engineer to make sure they aren't going to ruin the roof line, slab or overload a wall. Down the road it will have different rooms built in the fashion of three decades over a span of 80 years. Further down the road a room will get sealed off when the roof caves in and the owner doesn't have the funds to fix it.

21
vegabook 13 hours ago 0 replies      
The key idea behind the colloquial usage of the unhedged call option metaphor (or "writing naked calls") in financial circles, is the small chance of a catastrophic loss in return for a guaranteed small payout now. It is the idea of unfavourable non-linearity (asymmetry) in the reward/risk profile of a decision. It is mostly used pejoratively for any scenario which is superficially tempting but ultimately unattractive. I am not certain this is a good fit for the often reasonable decision to take shortcuts in software development, usually because the competitive landscape makes it very important to ship before competitors and land grab before others.

Moreover, in a startup context, both metaphors break down: you can walk away from your technical debt by shutting down. You can't walk away from your financial debts (or call option shorts) so easily.
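
(For readers outside finance, the asymmetry these comments lean on is just the short call's payoff at expiry, with S the underlying's price and K the strike:)

    call writer's P&L = premium - max(S - K, 0)

The premium is a small, fixed credit collected up front, while the max term is unbounded as S rises; that lopsided shape is what the metaphor maps onto shortcut-taking.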

22
curiously 17 hours ago 0 replies      
This is such an excellent analogy and easy for me to understand as I am aware of how options work. Writing options is never a good act if it is naked (unlimited downside, limited profit). So what about buying a naked option (limited downside, unlimited profit) and what would that look like with code? A careful and well tested codebase to begin with that ends up paying off its investment?
23
lifeisstillgood 18 hours ago 1 reply      
I keep bumping into the notion of options in software development - there are (and I will try to hunt them down) a couple of well known papers on applying option pricing to different features. It is interesting because if applied it would spread through the whole organisation using it - once one part uses options as its decision-making mechanism, all parts have to respond.

But anyway, I agree - naked call options is a much more accurate term. But it is a difficult term for folks to get their heads around (incurring debt is something the post-Marx Brothers world has gotten used to; outside specialised areas of finance, not so much with options).

24
brandoncordell 15 hours ago 0 replies      
asdf
25
dinkumthinkum 18 hours ago 1 reply      
I don't like this kind of discussion. I don't believe technical debt is a thing, and I think this idea of technical debt is really straining the metaphor. This is like bike shedding.
12
Why Davis' TempleOS is better than Torvald's Linux
27 points by SwellJoe  2 hours ago   9 comments top 5
1
wrs 22 minutes ago 0 replies      
This takes me back to when computers had a lot fewer layers. He's inspired by the Apple ][ and C64, but it's also reminiscent of Smalltalk-80 and Lisp machines, with the live UI links, inspectors, first-class graphic objects... I also enjoyed the birds screeching mid-demo.
2
SwellJoe 2 hours ago 0 replies      
I find TempleOS endlessly fascinating. The author is troubled, but an impressive coder nonetheless. The C compiler REPL covered in this video is actually kinda awesome.
3
coding4all 10 minutes ago 0 replies      
Probably the most fascinating OS ever.
4
transfire 57 minutes ago 0 replies      
Is it essentially a C interpreter that runs on the metal? Or is there more to it? What about device drivers?
5
AdmiralAsshat 44 minutes ago 3 replies      
"So now we're gonna put colors in. North America should be red, like Native Americans, [...] Asia should be yellow, [...] and Africa should be black."
13
Openntpd is not vulnerable.
118 points by e12e  18 hours ago   54 comments top 11
1
joshbaptiste 17 hours ago 1 reply      
Around 10 years ago it was written by Henning, at my request because the ntpd source code scared the hell out of us

And this is what I like about the OpenBSD community.

2
comex 10 hours ago 3 replies      
Because it was a rewrite, the major benefit in openntpd is that it is privilege separated. If problems like these were found, they would not be realistically exploitable. Furthermore openntpd is a modern piece of code <5000 lines long, written using the best known practices of the time,

And it very probably has zero memory safety bugs (pretty easy in 4,000 lines of code, as measured in a tarball I just downloaded), and privsep provides further defense at the cost of adding a lot of complication to the code... but I'm getting bored of this approach to security. For new code - I know openntpd is not new code, but I don't think this approach is popular in general yet - why not switch to Go or (when it's stable) Rust, or even JavaScript or Lua, and end up with essentially zero chance of such bugs being possible, regardless of how carefully or not the code was written, without the need for any privilege separation code? For something like NTP or SSH or most of the other daemons written in C people have sitting around, with the possible exception of high performance HTTP servers, the CPU overhead of any those languages is very unlikely to be noticeable (and privsep has overhead anyway).

(Admittedly, this would not help with non-memory-safety attacks such as NTP amplification...)

In particular, I'm going to make a bold claim: for OpenBSD to continue to release new daemons written in C, as it does, is irresponsible for a security-focused distribution. As much as they like the language (I do too), as much as the whole expertise and design of the project is based on C, such that there is a naturally huge disincentive to switch languages, and as much as OpenBSD is able to minimize (but not eliminate) security advisories of this type, memory corruption is so dangerous that building a truly secure system requires making as much code as possible immune to the possibility of it.

(Hmm, I guess running AddressSanitizer in production would be nearly as good.)
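
(For readers unfamiliar with the privsep pattern discussed above, a minimal sketch of the idea, not openntpd's actual code: the "_ntp" account name and the single-double message format are assumptions for illustration, and the chroot/setuid calls need root to run.)

    /* Privilege separation sketch: an unprivileged, chrooted child parses
       untrusted network input and asks the privileged parent, over a
       socketpair, to adjust the clock. */
    #include <sys/socket.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
            return 1;
        if (fork() == 0) {                        /* child: shed root first */
            struct passwd *pw = getpwnam("_ntp"); /* assumed unprivileged account */
            if (!pw || chroot(pw->pw_dir) == -1 || chdir("/") == -1)
                _exit(1);
            if (setgid(pw->pw_gid) == -1 || setuid(pw->pw_uid) == -1)
                _exit(1);
            close(sv[0]);
            /* ...parse untrusted NTP packets here; a compromise is now
               stuck in an empty chroot with no privileges... */
            double adj = 0.001;                   /* placeholder offset */
            write(sv[1], &adj, sizeof adj);
            _exit(0);
        }
        close(sv[1]);                             /* parent: keeps privilege */
        double adj;
        if (read(sv[0], &adj, sizeof adj) == (ssize_t)sizeof adj)
            printf("child requested adjustment of %f s\n", adj);
        /* a real parent would sanity-check adj and call adjtime() here */
        return 0;
    }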

3
fubarred 2 hours ago 0 replies      
Excerpt from our several-year-old NetBSD build scripts, with annotations:

    install_openntpd() {
      install_from_source net/openntpd ### via pkgsrc
      cp /usr/pkg/share/examples/rc.d/openntpd /etc/rc.d/
      replace_service 'ntpd' 'openntpd' ### idempotently edits rc.conf to ensure ntpd=NO and openntpd=YES
    }
The openntpd default of pool.ntp.org is sane and also doesn't leak host OS information, as most ntpd-based default confs do.

4
w8rbt 2 hours ago 0 replies      
Code has bugs. The less code you write, the fewer bugs you have. It's pretty simple really. OpenBSD has always done this well and taken a lot of criticism for it.
5
finid 15 hours ago 1 reply      
I thought Linux distributions stopped using ntpd a couple of years ago.

On Fedora, which powers the computer I'm writing this from, Chrony is in use, not ntpd, so I guess my PC is immune from whatever is floating out there.

6
craftkiller 17 hours ago 1 reply      
For those interested in the srand comment, I believe he was referring to arc4random: https://www.youtube.com/watch?v=aWmLWx8ut20
7
gonzo 15 hours ago 1 reply      
Too bad openntpd isn't standards compliant.

Fortunately, phk is already fixing ntpd. http://phk.freebsd.dk/time/index.html

8
microcolonel 14 hours ago 4 replies      
You can also use timedated, an optional daemon packaged with systemd. Works really well for me, I just turned it on and it hasn't complained since.

The command you'll want to run is:

    # timedatectl set-ntp true

And it'll have you within a small fraction of a second of a decent timeserver in about ten seconds, and keep you there indefinitely.

NOTE: For the person who downvoted this because they don't like systemd, seriously, you need to find a better hobby.

9
stefantalpalaru 16 hours ago 1 reply      
"The portable version is outdated and in need of a maintainer." - http://www.openntpd.org/

The most recent version for OpenBSD is 4.6 while the "portable" one is 3.9 .

10
peterwwillis 14 hours ago 0 replies      
Theo de Raadt is proof that it doesn't matter what community you come from, you can still be a huge douche as long as you write code well.
11
jgwest 16 hours ago 1 reply      
Dudes... I don't really care... I just want a way to synchronize my VPS clock with that of other, established, secure clocks... because DigitalOcean (okay, go ahead and downvote me) is not quite synched, and neither is Linode, and my server!!! Oh good Lord my server don't know the time at all! Dudes... just agree on something that I can install SIMPLY... 'cuz the infighting between ntp and openntp ain't nothing that I care to be involved with... JUST MAKE IT EASY!!!!
14
The Tale of Studio Ghibli
158 points by aaronbrethorst  1 day ago   49 comments top 19
1
ggreer 18 hours ago 5 replies      
If you haven't seen any Studio Ghibli/Miyazaki films, I highly recommend them. The animation is unique and gorgeous. The stories involve conflict with nature or mistaken antagonists, not evil caricatures. While many of the films contain violence, it rarely resolves things. Also, most Studio Ghibli films have strong female leads. I think Grave of the Fireflies and The Wind Rises are the only ones with male protagonists.

In all, Studio Ghibli's works are a refreshing contrast to the romance-obsessed princesses and 3D talking animals so often put out by Disney and Dreamworks. Even if you're an adult, they're enjoyable to watch.

2
Isamu 16 hours ago 3 replies      
This article also mentions somebody's reaction to Princess Kaguya as looking "unfinished" - I saw the same comment elsewhere (on HN maybe?)

I am really taken aback by this - the watercolor/ink style may be a little bit unusual these days, but it's not as though animation has never been done in this style before, nor that artwork of this style is completely alien. There is a ton of this stuff.

Nor is this film extremely minimalist, and minimalism has been done before too.

I'm starting to feel that the overly-detailed, extremely complicated CG animation and action scenes in most movies are having an adverse effect on our expectations, so we are becoming less able to appreciate something simple.

I could tie this back to software - so much of what we deal with evolves over time to be rather complicated, with a million features, and we beg for more.

Geez, things can be simple. It's not "unfinished" to be simple.

3
saucymew 15 hours ago 0 replies      
As an 80's anime kid, I remember when I first saw Castle in the Sky and My Neighbor Totoro. They left lasting impressions on my world views of flying robots and bus-shaped giant cats.

But Princess Mononoke straight out floored me. The surprising heaviness and violence felt like they backdoored a complex film for adults. In Japan it was the highest grossing film of ALL TIME until Titanic came along.

My growing years have been enriched beyond measure by the Ghibli/Miyazaki films. May my curiosity and joy of trekking hidden paths be the legacy of Studio Ghibli.

4
DrPhish 11 hours ago 0 replies      
Possibly of interest to fans of Miyazaki Hayao's work is the series he did for Nippon Animation in '78: Future Boy Conan [1]. Despite the cheesy name, in my opinion it is his greatest work. I have watched the complete 26 episodes of this series multiple times with my kids, and they've watched it on their own many times. Anyone I turn onto this series is absolutely slackjawed at how good it is. Most of the character archetypes, themes and famous sequences from his subsequent movies are to be found in this series.

[1]http://en.wikipedia.org/wiki/Future_Boy_Conan

5
aidenn0 16 hours ago 2 replies      
They complain near the end that Disney dubs all the Ghibli films. This is an odd complaint since all the DVDs have the choice of sub/dub, and Disney does a good job with the dubs. Kiki's Delivery Service, for example, is one of the best dubs I've ever seen.
6
creamyhorror 10 hours ago 0 replies      
Miyazaki/Ghibli stand apart from the rest of the anime industry. Their sensibilities are both out of sync with the film and TV output of other companies[1], different from mainstream manga; yet when they put something out, the world watches. I think it stems from Miyazaki's unique outlook, which seems far less influenced by pop culture but instead hews to the dreams, whimsy, and imagination of children. It's less "modern", more "storybook", and that makes his films unique and widely appealing.

That's not to say there aren't other anime film directors who have put out impressive work:

- Satoshi Kon (a visionary sadly deceased, a true loss to the world)

- Makoto Shinkai (purveyor of the most beautiful animated scenes to exist[2])

- Mamoru Hosoda (works naturalistic yet fantastical, grounded in family and the real world)

All three have been labelled "the new Miyazaki" or compared to him.

Finally, since people are posting recommendations of Miyazaki's work:

On Your Mark, a 6-minute animated music video, is the sort of thing that you watch once and remember years down the road as a strange memory, a sci-fi short story from your teenage years. http://www.dailymotion.com/video/x1yx4sf_hayao-miyazaki-s-on...

=========

[1] In fact, Miyazaki has criticized the anime industry for being too full of people who don't observe other people, resulting in works that don't resonate with a wider audience; it's anime by anime geeks for anime geeks, full of stereotypes and a culture unto itself. That's why directors like Satoshi Kon and Shinichiro Watanabe are so needed, who paint rounded, real characters who're accessible to a wider audience. Discussion: http://www.reddit.com/r/anime/comments/1wltx5/miyazaki_the_p...

[2] Shinkai's most recent work is The Garden of Words, a lush feast for the eyes which might as well have been titled The Garden of Wallpapers: https://www.youtube.com/watch?v=HTTRweJ7jVs

7
tankerdude 15 hours ago 1 reply      
I still remember my first viewing of Grave of the Fireflies. It was gut wrenching, brutal, and to be honest, depressing and demoralizing to the point of wanting to slash your own wrists at the pain and suffering that people went through after the war (especially the orphans).

It's a great film, but getting myself to watch it is seriously tough each time.

I wish there was another animator out there who would/could do something remotely close to what he did for his studio.

8
sekasi 7 hours ago 0 replies      
The first Miyazaki film I watched was Spirited away. The moment I saw it I became a life long fan of his work.

I've been to the museum in Mitaka twice, seen all his movies and, despite pushing 35, I'm just as amazed and gleeful watching his work today with my three-year-old.

I can't think of any artist that had a greater influence on me. Much love.

9
kposehn 17 hours ago 0 replies      
I've been a Miyazaki/Ghibli fan for a long time. At this point I have so many Ghibli-themed shirts my wife jokes that I am nearing the Miyazaki-event-horizon and will collapse into the form of Totoro.

If you have not watched any of their films, I highly recommend you do so. Sheer joy of life is what they impart and it will stay with you for years.

10
sirmarksalot 7 hours ago 0 replies      
I hope the article is wrong about "When Marnie was There." I saw it in Japan just as it was about to leave the theaters, and it was amazing. The releases do tend to lag about a year behind, though, so I wouldn't be surprised if it just hasn't been announced yet.
11
toyg 18 hours ago 1 reply      
This is just a review, and as such it glosses over what has been reported elsewhere as the real reason Ghibli might not survive Miyazaki: the big man himself hoped to hand the reins to his own son, but the "young scion" fell short. Rather than look for alternatives, Miyazaki just gave up, which was a very Japanese thing to do.
12
kiliancs 12 hours ago 0 replies      
The article mentions the treatment these films receive in the West, but I would like to add that there is also a general aversion to anime. In Catalonia, where I live, only the young generations are (more than) open to anime, while the rest will miss beautiful artistic experiences like the Studio Ghibli films because of their prejudices against anime. In the US I've seen similar reactions, but mostly I have found people who will classify all anime (including films such as these) as "cartoons" for children.
13
hw 17 hours ago 2 replies      
Whether you like watching animation or not, Studio Ghibli ones will open your eyes. From the iconic Totoro, to the classic Spirited Away, each one with different themes, told via simple, animated scenes that aren't as fancy or pretty as the Pixar or Disney ones, but are works of art. They also provide good insight into Japanese culture.
14
cyanbane 14 hours ago 0 replies      
I am a huge fan of Studio Ghibli's work. I read a really great book on Miyazaki and Takahata this year that discusses their early competition and the birth of Studio Ghibli. It also does a pretty in-depth analysis of most of the Studio's films that came out before The Wind Rises.

[1] http://www.amazon.com/gp/product/B005E7AN1A/ref=oh_aui_searc...

15
new299 11 hours ago 1 reply      
For an Economist article I was expecting more coverage of the Studio's economic troubles, Miyazaki's periodic retirements, and the more contentious films [1].

[1] http://en.wikipedia.org/wiki/The_Wind_Rises#Controversy

16
monkmartinez 17 hours ago 0 replies      
My children, aged 4 and 6, and I have seen most of the Ghibli movies... they are amazing. Obviously, Totoro is well liked... but Ponyo and Kiki are the favorites here (we have re-watched them many, many times), and that was a surprise for me.
17
Decade 8 hours ago 2 replies      
The studio just can't survive after Hayao's retirement.

I was especially disappointed when Ursula Le Guin trusted her Earthsea series to him, and he gave it to his idiot son.

http://www.imdb.com/title/tt0495596/

18
normand1 13 hours ago 1 reply      
This film looks to be the death knell of Studio Ghibli. Not a single ounce of the originality, character or detail that makes Studio Ghibli films unique.
19
tl 17 hours ago 0 replies      
http://www.canistream.it/search/movie/grave%20of%20the%20fir...

I realize that copyright is eternal and the producers have the right to set any terms they want, but is the best set of options (for producers or consumers) really to pay $19 for a 26-year-old movie, or torrent it for $0, with no options for rentals, streaming, etc.?

15
U.S. ends TARP with $15.3B profit
100 points by prostoalex  16 hours ago   65 comments top 21
1
penrod 10 hours ago 4 replies      
It is no surprise that Treasury made a profit on its investments in troubled companies, since the fact of investing in them effectively rigged the market in their favor: if the government declares that it will not allow a company to fail, the company's borrowing costs are reduced and it now has a competitive advantage over companies that do not qualify for government intervention.

Of course the qualifications for this special treatment were: being very big, being politically well-connected, and having taken stupid risks. So every well-run, medium sized bank that didnt have an army of lobbyists got screwed. And now we see that our favorites have prospered and declare profit! while ignoring the red ink for everyone else in the economy.

2
protomyth 12 hours ago 0 replies      
"When Congress created TARP, it authorized up to $700 billion for the programs. That authority was later reduced to $475 billion. To date, a total of $426.3 billion has been disbursed under TARP. As of November 30, 2014, cumulative collections under TARP, together with Treasurys additional proceeds from the sale of non-TARP shares of AIG6, have exceeded total disbursements by $14.0 billion7. Treasury estimates that the combined overall cost of TARP will be approximately $37.5 billion. These estimates assume that the budget for TARP housing programs will be disbursed in full and do not include Treasurys additional proceeds from its non-TARP AIG shares."

http://www.treasury.gov/initiatives/financial-stability/repo...

Reading the press release is nice, but it does help to actually read the report to Congress.

3
cnntrolls 13 minutes ago 0 replies      
So what you're telling me is, adjusted for inflation, the government lost billions.

http://data.bls.gov/cgi-bin/cpicalc.pl?cost1=425.00&year1=20...

Nice spin though. Seems to have duped most of the 1337 h4x0rz here. Tell us CNN, how much of that QE* money are we getting back?

4
randomname2 8 minutes ago 0 replies      
The real windfall for the US has been the over $200 billion in bank settlements since 2008 in exchange for not jailing any bankers.
5
kolbe 11 hours ago 2 replies      
...because the Federal Reserve offered the real bailout. TARP's few hundred billion would be worth far less had the Fed not printed trillions, and used it to buy all of the banks' bad MBSs at above-market rates.
6
imaginenore 3 minutes ago 0 replies      
$15.3B profit on $426B investment is 3.59% over 6 years.

If they invested that money in S&P500, they would get 130% back.
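
(The annualization, treating the full amount as deployed for the whole six years, which is a rough simplification:)

    15.3 / 426       = 3.59% total return
    1.0359^(1/6) - 1 = ~0.59% per year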

7
acd 10 hours ago 3 replies      
This is simply not true: we as US citizens have not profited from this. The banks have been printing money like crazy, and the American middle class has not gotten richer while the top 1% has gotten really rich.

Here is a graph from CNN about it: http://money.cnn.com/2011/02/16/news/economy/middle_class/

8
prokes 11 hours ago 0 replies      
At the cost of creating moral hazard, the full impact of which has yet to be seen.
9
dodyg 7 hours ago 1 reply      
The US spends 500 billion dollars on defense every year but the biggest threat to its security is actually its banking system.
10
mikhailt 12 hours ago 2 replies      
Wait, WTF...

At the end:

> Overall, the auto bailout was the one big money loser for TARP. Even with the Ally sale, taxpayers lost about $9.2 billion.

Do they mean it could have been 15.3 + 9.2 billion in the end, for a total of $24.5 billion in profit?

I don't think taxpayers lost anything if the program broke even.

11
transfire 50 minutes ago 0 replies      
Give me a break. TARP was just cover for the real bailout.
12
littletimmy 10 hours ago 1 reply      
The banks too profited handsomely from TARP, thereby creating a huge moral hazard. The government gave the banks money, which they used to make profits, AND then returned money to the government.

Let that sink in for a second: the banks sank the economy, and the government helped them make huge profits.

13
_almosnow 12 hours ago 2 replies      
Does that profit take into account inflation and stuff? Because if not, the outcome could be somewhat different. However, even if they broke even, the potential negative effects that the bailout prevented are huge.
14
jokoon 6 hours ago 0 replies      
So if I understand it, TARP was not just money given to banks (as Michael Moore seemed to explain it in his documentary); it was a loan made by the government to the big banks.

How did the banks manage to pay back this loan? What conditions did the government impose?

15
percept 2 hours ago 0 replies      
Can haz dividend?
16
lurchpop 10 hours ago 0 replies      
As others have said, moral hazard. This is especially important as the next bailout is set up to be a "bail-in", where bank accounts, including FDIC-insured ones, will be debited directly. I doubt those losses will get reimbursed to savers.
17
higherpurpose 4 hours ago 0 replies      
I'm willing to bet this isn't actually true and is misleading. The companies that got the bailout got other tax breaks and I think they got more money through other programs as well, which they then used to "pay back TARP". I really, really doubt the government came out net-positive with the loan.
18
maaku 12 hours ago 0 replies      
If only all bailouts went like this...
19
erikpukinskis 9 hours ago 0 replies      
3% over six years? I would say that qualifies as losing money. It's not even keeping up with inflation (10% in the same period).
20
puppetmaster3 3 hours ago 0 replies      
lol. Welcome to a 1984 press release (for the stiff above the neck: wheat production, up 18%).
21
fleitz 11 hours ago 0 replies      
What they fail to note is all the good that could have come from not doing TARP, what innovative services might have sprung up instead?

Sure, Detroit might be dead (isn't it already?) but there would also be a huge infrastructure for Tesla to buy on the cheap.

Imagine a world in which Tesla and other startups are the only car manufacturers in the US? (Well, save for Ford, which didn't need a bailout)

What sort of opportunities would there be for FinTech companies sans Goldman, et al? How many small businesses could have been started with $475 billion?

16
Causal Inference Book
57 points by cottonseed  15 hours ago   1 comment top
1
nkurz 11 hours ago 0 replies      
Not from the book, but if you are checking the comments to find out what "causal inference" is and why you should care, here's a readable general overview that tries to explain: http://www.michaelnielsen.org/ddi/if-correlation-doesnt-impl...
17
How Deprogramming Kids from How to Do School Could Improve Learning
84 points by ColinWright  18 hours ago   51 comments top 4
1
jdmichal 13 hours ago 3 replies      
> And they rose to the challenge. I think the kids were just waiting to be let loose and to be treated like adults, Holman said.

This part struck me. It wasn't that long ago in human history that these "kids" would have been considered adults. Fourteen, which should correspond roughly to the beginning of American high school, would have been the year they started an apprenticeship and took on real adult responsibility. And that's assuming they were even going into a skilled trade; those in unskilled work, like farming, would already be working outright by that point. We see this transition point in cultures that have preserved it as tradition, such as bar/bat mitzvahs, quinceañeras, and débutante and cotillion balls.

2
coenhyde 11 hours ago 8 replies      
Our current school system is moronic, and this is not an exaggeration. It is the worst teaching system I could possibly imagine. Instead of trying to improve the system, it should be scrapped entirely.

Here is what is wrong with the current system:

- Grading is harmful. A pass should be 100%/A+. You should not be able to progress until you understand the subject matter 100% as future learning depends on you having a complete understanding of previous content.

- Grouping students by the year they were born is harmful. Some students progress faster than others. By grouping students this way, some students are held back while others are dragged forward, not understanding vital content.

- Grouping a student's progress for all subjects into a single metric is harmful. Eg. grade 1,2,3, etc. Every student will have a natural tendency to be good and bad at certain subjects. Maybe they are good at Physics but bad at English, or vice versa. By grouping a student's progress in all subjects into a single metric, a student might be held back in the subjects they are good at or not understand vital content in the subjects they are bad at.

Here is what I imagine a good education system would look like:

- Students progress through subjects independently of their peers and the other subjects they are studying.

- Subject assessment would happen in much smaller increments, and students would not progress until they understand that content block 100%.

- Students graduate school when they have reached a certain level of competency in all subjects. This could take an arbitrary amount of time, e.g. it might take one student 3 years but another 8. But once a student graduates they will be competent.

P.S. I used to run a company that developed learning management systems, so I have thought about this stuff a bit and developed a hatred for the current education system.

3
beagle3 11 hours ago 3 replies      
Anyone who finds this interesting and is not aware of the Montessori[0] approach should definitely look at it. It is basically the same underlying idea, consistently applied from toddler age. It seems to be effective, if you measure by "people famous for the right reasons"[1], such as Bezos, Brin, Page and Wales.

[0] http://en.wikipedia.org/wiki/Montessori_education

[1] http://www.mslf.org/famous-montessori-students/

4
Animats 8 hours ago 2 replies      
There's a delightful article which I can't find at the moment about a mother who did her middle school daughter's homework alongside her for a week. Her daughter advised her "memorize, don't understand". There's not enough time for understanding.
18
A Poor Imitation of Alan Turing
137 points by dave446  1 day ago   37 comments top 10
1
NathanKP 21 hours ago 1 reply      
I agree with the author's criticism of the movie, but I still personally enjoyed The Imitation Game.

It's good entertainment even if it is quite exaggerated and not 100% historically accurate. If it exposes more people to some of the history of computing and one of its great early engineers then I think that is positive.

2
Animats 19 hours ago 4 replies      
Enigma (2001) is a better film. It, too, has a gratuitous spy plot. Actually, not only were there no leaks from Bletchley Park, the secret was kept until the early 1970s, and full details didn't come out until the 1990s. Turing was a reasonably important figure at Bletchley Park, which had about 9000 people at peak. But there were lots of other smart people working on the problem. Dilly Knox was in charge of the cryptanalysis, and Gordon Welchman did much of the design on the improvements to the Polish Bombe. Welchman went on to teach the first computer course at MIT and worked on computers until the early 1970s. He overcame his early opposition to vacuum tubes; he thought they would be too unreliable.

The real secret of US and British cryptanalytic efforts was to approach it as an industrial problem. That was new. Cryptanalysis until WWII was someone at a desk with pencil and paper. Cryptanalytic units were tens of people. The WWII effort on the Allied side involved not only Bletchley Park, but a big operation at Arlington Hall in the US and another operation in Hawaii. Bombes, the electromechanical key-testers, were built by the British Tabulating Machine Company, National Cash Register, and Western Electric. (NCR's was the most useful and was produced in large quantities.) About 60,000 people were involved at peak. It wasn't clear until long after WWII how big the operation was. Few people were allowed to see more than a small part of it. This wasn't a "one lone genius" thing.

Some of the secrecy was to make Churchill look good. There were times during WWII when Churchill sent a message to a general facing heavy opposition, "Press on and you will be victorious", and took historical credit for his courage and decisiveness. Decades later we find out that Churchill had info such as "14th Panzer low on fuel and ammo, cannot fight for more than 2 hours" from intercepts. All German units sent in a strength return each day (all serious armies do this) which reads like "#1, 12000, #2, 450 ..." and is simply how many effective soldiers, how much ammo, and other basic numbers. It's dull, boring, and tells which units can fight effectively and how far they can move. Much of Bletchley Park's work was decrypting and tabulating that info, which told Allied commanders where the weak spots were on the German side.

3
benihana 3 hours ago 0 replies      
This seems to be the summary of the article:

>These errors are not random; there is a method to the muddle. The filmmakers see their hero above all as a martyr of a homophobic Establishment, and they are determined to lay emphasis on his victimhood.

4
rnovak 11 hours ago 1 reply      
Compared to Hackers (1995) and The Matrix (1999) (and countless others), I could watch this movie over and over again. I supremely enjoyed it.
5
panzi 12 hours ago 0 replies      
Here are James Grime's (Enigma and Alan Turing expert/fanboy) comments on The Imitation Game: https://www.youtube.com/watch?v=kCSp1RZLhkg and http://aperiodical.com/2014/11/an-alan-turing-expert-answers...
6
ape4 18 hours ago 0 replies      
I agree with the article on the idea that Turing didn't get jokes. We hackers like jokes and can very well understand them. How can you do tricky crosswords and not get double meanings, etc.?
7
geographomics 21 hours ago 1 reply      
I had the same feeling when watching this film. Instead of giving this portrayal the depth Turing deserved, Cumberbatch instead fell back on his usual typecast genius character with minor tweaks. Very disappointing.
8
methodover 16 hours ago 1 reply      
I've always been skeptical of the suicide issue. We don't know what happened to Alan Turing at the end of his life, but of all the available options, suicide by far makes the most sense.

I'm having trouble finding the sources now, but I could've sworn I remembered reading that no-warning, no-note suicide is not at all uncommon. And it's especially true with men, I thought.

9
Tycho 15 hours ago 0 replies      
Quite enjoyed the film, but I think its best scene (when they realise how to speed up the machine analysis and manage to decipher the latest transmission) is immediately followed by its worst (the melodramatic revelation that one of the characters' relatives was about to die).
10
revicon 21 hours ago 2 replies      
TLDR; Real life doesn't make for a movie anyone wants to watch, so the director added a bit of extra drama and rearranged a few things to make for a coherent story.
19
Bitcoin Promoter Gets 2 Years for Silk Road Money Laundering
87 points by prostoalex  22 hours ago   43 comments top 8
1
patrickg_zill 17 hours ago 7 replies      
Where are the criminal convictions for HSBC and Wachovia's money laundering for the Mexican drug cartels? They laundered over $300 Billion USD over the years, and no one went to jail?
2
downandout 17 hours ago 1 reply      
He's lucky he didn't also catch a case for ripping people off through Bitinstant. While no one knows the total value of the orders they failed to deliver, it's likely to be in the hundreds of thousands of dollars. A class action suit was filed against the company before it folded. I'm guessing a big chunk of the stolen money went into this guy's pocket.

Curiously, it looks like he's also asking for donations on Twitter: https://twitter.com/CharlieShrem/status/546367889625079808

3
consz 7 hours ago 0 replies      
This guy doesn't sound too bright. His best plea to the judge was, what, that he could be a motivational speaker about not flouting AML laws? He really couldn't come up with anything better than that?
4
ilamont 21 hours ago 2 replies      
"I screwed up," Shrem told U.S. District Judge Jed Rakoff. "The bitcoin community, they're scared and there is no money laundering going on any more."

Does this claim have any basis?

5
shiven 8 hours ago 0 replies      
For those who may have read 'The Billionaire's Apprentice'[1], Jed Rakoff is the same judge who was involved in sentencing Rajat Gupta.

[1]: http://www.nytimes.com/2013/06/30/books/review/the-billionai...

6
wyager 17 hours ago 2 replies      
So can someone clarify this for me? It sounds like he didn't actually do any money laundering, but sent money on behalf of someone who was accused of being a money launderer? 2 years seems pretty harsh for that.
7
marlinspire 13 hours ago 2 replies      
He was threatened with 30 years, so he took a plea deal.

It is a shame he was denied a trial.

May or may not have been guilty - taking a plea under such extreme duress is hardly an admission of guilt.

8
arto 17 hours ago 0 replies      
20
Emacs as the Ultimate LaTeX Editor
92 points by pmoriarty  21 hours ago   31 comments top 8
1
scorpion032 1 hour ago 1 reply      
The best LaTeX editor I have found is [Latexian](http://tacosw.com/latexian/)

In a discussion on LaTeX editors, I find it surprising that it hasn't even been mentioned so far.

2
mih 18 hours ago 3 replies      
Anyone seriously considering Emacs for LaTeX editing should also take a look at the yasnippet package. The customizable autocompletion shortcuts make it really easy to insert commands using just keywords. In my case this has proved to be very helpful, especially when inserting figures or inline equations. I can create a snippet so that when I type fig+<tab> it inserts a \begin{figure}..\end{figure} float and positions the cursor at the proper field, saving me a lot of keystrokes.
3
vsbuffalo 18 hours ago 4 replies      
I hate the "best editor" debate because I think it's distracting from what's more important: what both editors can learn from each other and what both need to do to improve. I used Emacs for years, switched to Vim because of RSI, then recently switched back to emacs+evil. Frankly, for what I do most (R, R+knitr, C++ with clang autocomplete), no single editor is great. First, there's too little ability to switch between modes within a single buffer in both Vim and Emacs. The feature's entirely lacking in Vim AFAIK, and poly-mode in R uses a high-level hack that (1) doesn't play well with other modes (including evil) and (2) has so thoroughly destroyed my documents in the past I refuse to use it now (mostly because it uses many buffers behind the scenes, which destroys undo history).

In general, if you want flawless R support in certain blocks of text (as in a .Rnw file) in between LaTeX blocks that are fully connected to AucTeX, well... you're out of luck. And Vim... Vim-R-Plugin is useful, but it's sort of a painful hack to use tmux just to get R and Vim to talk (and I'm saying this even though I love Tmux).

Vim has YouCompleteMe, which is smooth as silk compared to Emacs's options (which are painful and poorly integrated, especially with clang). But some lower-level issue in Vim causes a constant error message whenever YouCompleteMe uses clang, which is bloody annoying. So overall, both editors have huge issues that would require serious overhauls or tedious bug fixing in various modes. Sure, Emacs does AucTeX better, but until it does everything better (or Vim does everything better) it's a flawed editor. Both are flawed editors. But sadly everyone thinks the best course of action is to start fresh, which usually creates a feature-poor, flawed editor on a new shiny foundation that fails to attract developers because it's feature-poor. (apologies for ranting -- jetlag).

4
JoshTriplett 10 hours ago 2 replies      
I switched to vim years ago for everything else, but I still use emacs specifically for auctex. I have yet to find a vim equivalent nearly as convenient. In particular, C-c C-c as a "do what I mean" compiler (checking timestamps to figure out whether to run LaTeX, BibTeX, etc.), as well as C-c C-e to insert an environment, and alt-enter to insert a newline and \item.
5
kgabis 7 hours ago 1 reply      
The best LaTeX editor I've used so far is TexPad (https://www.texpadapp.com), but it's only available on OSX and iOS...
6
grayclhn 12 hours ago 0 replies      
AUCTeX mode is great; less well known is CDLaTeX mode, which has a lot of quick abbreviations for mathematical symbols, etc. I'm a huge fan: https://staff.fnwi.uva.nl/c.dominik/Tools/cdlatex/cdlatex.el
7
kyrre 9 hours ago 0 replies      
I feel like the secret to being productive with LaTeX is not the editor, but the build script (latexmk).
8
alceta 18 hours ago 0 replies      
I'm tired of these superlatives like 'THE best editor'; they add nothing to the quality of a blog entry.
21
ShellCheck: a static analysis and linting tool for sh/bash scripts
83 points by pmoriarty  21 hours ago   18 comments top 5
1
SwellJoe 14 hours ago 2 replies      
I still write and maintain a surprisingly large amount of sh code (it used to be bash, but when Ubuntu switched the default /bin/sh to dash, I had to remove the bashisms). Sometimes a shell script is still the shortest line from point A to point B, and it's always there.

But, whenever I work on that code after a long time of not reading it, I'm always afraid I'm breaking something, introducing new bashisms, etc., and have to test it thoroughly across multiple systems to be sure I didn't break something. There is checkbashisms, which catches most bashisms, but there's still a lot of weird stuff in shell scripts, and without block or function scope it can be challenging to stay on top of a large shell codebase.

In short, I've been working on more automatic testing (i.e. tests without explicitly written unit tests) of my code lately, and the shell stuff is pretty hard to do that for, compared to Perl and JavaScript. So, this is probably awesome.

2
Chmouel 1 hour ago 0 replies      
There is this tool as well, developed in OpenStack: https://github.com/openstack-dev/bashate
3
0942v8653 16 hours ago 0 replies      
This looks very useful, and there are some things on there I didn't know you had to do in bash. Here's a SublimeLinter plugin: http://github.com/SublimeLinter/SublimeLinter-shellcheck
4
nodesocket 16 hours ago 0 replies      
Awesome, thanks for this.
5
anonfunction 15 hours ago 2 replies      
Cool tool, but I'm really confused by the name choice. Sure, choosing names is hard, but using the name of a common tool for an unrelated tool will just cause problems for people looking for bash spellcheck tools.
22
The Insurance Market Mystifies an Airbnb Host
83 points by johnny99  21 hours ago   41 comments top 9
1
tptacek 19 hours ago 3 replies      
Ms. Pfeffer eventually found a solution, but it wasn't easy. And this is mostly the fault of the insurance industry, which doesn't always want to answer questions about this sort of activity, whose agents aren't always as knowledgeable as they should be and whose own policy language can be incredibly confusing.

This isn't the "fault" of the insurance industry. Homeowners insurance policies mitigate the standard risks of residences. They don't cover hotels, which have wildly different risks due in part to the radically different incentives and risk tolerances of hotel room occupants.

This whole article is premised on the idea that it shouldn't be hard to figure out which homeowner's policies can be abused to cover ad-hoc hotel businesses. I'm alarmed that there are policies that do work that way; if I were a Liberty Mutual customer, I'd be painfully aware that my premiums took into account the idea that I might rent out my own house that way.

I sort of adore Airbnb and have never had a bad experience with it, but everyone in the Airbnb ecosystem appears to be relying on denial in one way or another.

2
_delirium 20 hours ago 1 reply      
This is not too surprising for people who are in the short-term / vacation rental market. If you regularly rent out your property short-term (when I lived in Santa Cruz, CA, a lot of people did), whether it's as a B&B, whole-property rental, boarding house, or anything similar, you typically need a completely different kind of insurance from regular homeowner's insurance. They exist, but are more expensive (unsurprisingly, since the risk of payout on the insurer's part is also higher).

The tricky thing with AirBnB, I think, is that a lot of people are in that market, but don't view themselves as "really" being in it. Even some people who are for all practical purposes running a full-time B&B don't see themselves as doing so, so they don't know about some of the standard stuff you have to do to safely operate in such a market.

3
patio11 10 hours ago 1 reply      
It seems like an enterprising insurance agent who understood this issue could do brisk business in writing new insurance policies by papering the town with postcards saying some variant of "Don't want your homeowner's insurance policy cancelled for using Airbnb? Give us a call -- we'll introduce you to a more appropriate policy."

If you wanted to pitch a company on something you could build for them, the above activity suggests a fairly straightforward one-day consulting engagement which you could credibly promise as being worth thousands of dollars.

4
datashovel 7 hours ago 0 replies      
This pattern seems to be quite common today:

Entrenched interests refuse to budge on issues related to evolving markets. Smaller, more nimble, more open-minded companies see opportunity and take market share. Eventually entrenched interests either lose market share (no longer as entrenched as they once were) or they catch up.

5
logicallee 20 hours ago 0 replies      
I bet the reason is that most airbnb hosts haven't been checking or asking for riders, since they're violating local ordinances anyway and operating in a rather grey area. The risk is small, so most have just been eating it. So who has been checking?

People who have had repeated problems that have some specific ongoing source or reason due to something about their property or the way they conduct themselves. If so, these people are certainly about to file a claim. This means this is a terribly toxic pool of people, i.e. not the whole airbnb community. If this is true, airbnb has a really easy solution: just offer its entire customer base to an insurer, which solves the problem that most of its customers haven't been signing up for special coverage.

This is actually close to what it's doing, though the difference is that it sounds like it's being white-labelled. Maybe there is no national insurance company that can accept every airbnb in every market, which is why airbnb doesn't use the name of one for its own program.

6
mrgriscom 6 hours ago 1 reply      
My insurer (Amica) was very easy to work with on this matter.
7
hnnewguy 19 hours ago 2 replies      
>bypassing these centralized dinosaurs

The reason they are centralized dinosaurs is to ensure they are large enough to cover claimants in the case of disasters. They need to be large to mitigate risk. Even as large as they are, they sometimes still fail.

Insurance is pretty basic, unglamourous work. Do you think a system of insurance startups, going bankrupt every time there's a large series of claims, is the way to go?

8
jqm 14 hours ago 0 replies      
So insurance companies don't like it when someone carries out commercial activities (with increased risk) on a residential policy...

Makes sense. To be fair, (in theory at least), the increased risk is spread among the remaining residential insurance payers who are not engaging in commercial activity. So once again, (as with taxation), borderline legal, er, I mean "innovative sharing" activity is imposing a financial cost on the rest of society.

9
pbreit 20 hours ago 2 replies      
Home and car sharing are with us now so the insurance companies will need to adapt or get steamrolled.
23
Study: Chicago red light cameras provide few safety benefits
76 points by greenburger  21 hours ago   33 comments top 10
1
GoodIntentions 19 hours ago 2 replies      
>get rid of the red light cameras because they increase rear crashes.

A better solution would be enforcing a sane minimum yellow-light timing at camera intersections. Shortening the yellow to the point where normal traffic sometimes brakes hard to avoid entering the now-yellow, soon-to-be-red camera intersection is the reason for this increase in rear-end collisions.

Per the article, this is something Chicago was guilty of, in addition to placing the cameras in areas that had no problematic history of accidents.

The real problem with these things is that they are used as another way to milk money out of the populace, rather than as a way to modify behaviour and improve safety.

These cameras (and short lights) are something that really, really annoys me as a motorcyclist. I can stop fast any time, but the SUV-clad soccer mom texting behind me probably won't. I sometimes find myself dropping a gear and hammering through a yellow I could easily stop for because of this.

/rant

2
w1ntermute 20 hours ago 3 replies      
Was it ever actually about safety? The whole point of it seems to be just to create another revenue stream for thousands of corrupt local governments all across the country.
3
bbarn 16 hours ago 0 replies      
The other massively understated thing about Chicago's red light cameras is that Chicago, compared to other areas with similar or even much lower traffic volumes, has very few left turn lanes, and even fewer left turn arrows when they are there. As a result, the general population is habituated to only being able to turn left at the last second on a yellow signal, which complicates the short yellow timings even more.
4
bsder 14 hours ago 0 replies      
The problem is that there are lots of other choices to make an intersection safer than cameras.

Generally, the psychology of the timing matters. Most of the intersections that I see people crash aggressively are busy intersections and have lights that take forever to cycle. So, there is a huge time penalty for missing the light (something like 4-5 minutes).

If you cycle the light faster, it may let fewer cars through, but it tamps down the aggressiveness with which people are willing to crash the light.

5
blisterpeanuts 3 hours ago 0 replies      
Phoenix has red light cameras as well, because of numerous scofflaws from south of the border. I have no idea how effective they are, but there are still a lot of horrible accidents.

To reduce if not halt such incidents, we should simply build crossing gates as at railroad crossings. When the light is turning yellow, a gate lowers, with flashing red lights and clanging bells. When the light cycles to green, the gate lifts. Problem solved.

Alternatively, hire more traffic cops to enforce the laws. It costs money, but the carnage on the roads calls for some kind of a real fix and not just a cosmetic patch like cameras that merely bring in more revenue.

6
talos 13 hours ago 0 replies      
In almost 3000 words, this article didn't see fit to mention the effect on pedestrian/vehicle collisions. I wonder if this is due to lack of data? I would check the Chicago Open Data Portal (https://data.cityofchicago.org/) but it's down right now for maintenance!
7
laoba 15 hours ago 2 replies      
I don't know how common these lights are around the world (I have never seen them in the US but have not been EVERYWHERE), but in China I regularly saw lights that also had a timer to the side counting down, so the change from yellow -> red was more easily anticipated. I think this would work rather well if we were all really concerned with safety.
8
jacalata 3 hours ago 1 reply      
Is the actual study available anywhere? I am wary of trusting a journalist's interpretation of data, especially when it comes in an article that gives the impression "this study our paper funded demonstrates exactly what we thought it would!"
9
revelation 20 hours ago 3 replies      
This article seems to make a point that we should get rid of the red light cameras because they increase rear crashes.

Ignoring for a moment that T-bone crashes are likely to produce much more severe injuries than someone crumpling up the rear of another car, this should suggest that we instead need ways to monitor motorists for keeping proper distance and attentiveness.

The solution can't be to stop monitoring for one offense because it causes incompetent motorists to commit more offenses of another sort.

10
dang 20 hours ago 1 reply      
24
Why Why Functional Programming Matters Matters (2007)
129 points by olalonde  1 day ago   17 comments top 8
1
westoncb 19 hours ago 0 replies      
While reading, it made me think:

It's important when approaching architecture (and probably software problems in general) that you do not start by reasoning in terms of the constructs of a particular language (e.g. classes and interfaces). The philosophy underlying this is: your range of potential conceptions becomes wider as abstraction increases; and, as you become more concrete, you inevitably introduce auxiliary problems that result purely from the features of the particular language you are expressing with, which means you are prematurely diverting thought from "what is the correct description of this thing?" to "what is the best wording for this description?"

Also:

Working at a higher level of abstraction has a lot in common with looking at a scene from far away, a mountain range for instance: the further away you get, the easier it becomes to change which mountain you are experiencing: you merely turn your head a quarter of an inch and now three new mountains have entered your consciousness. If you're trying to decide which mountain to climb, this is an extraordinary benefit; but at the same time, if you are too far away, you can't perceive the features of the mountain clearly enough for optimal decision making. A similar tradeoff is always present in working at one level of abstraction or another, and understanding these tradeoffs helps you figure out where to stand more effectively.

2
vorbote 22 hours ago 1 reply      
Better formatted version of the blog post:

http://raganwald.com/2014/12/20/why-why-functional-programmi...

The original paper by John Hughes mentioned in the posting:

Internet Archive: https://web.archive.org/web/20070323095313/http://www.math.c...

PostScript, PDF and the BibTeX reference.

3
thibauts 16 hours ago 0 replies      
To me what makes functional programming matter, and what differentiates it from OOP today, is less its essential properties than the simple fact that it pushes you to think in terms of functions that process collections of objects rather than functions that act on a single one. This may look like a simple statement, yet the implications are huge.

Now this doesn't reflect a property of FP as such; it only highlights one of the biggest shortcomings of OOP, alongside mutability, that we are slowly awakening to as a community.
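
To make that concrete, here is a toy sketch in Python (the data and function names are made up, purely for illustration): the same discount rule written as item-by-item mutation versus a whole-collection transformation.

  orders = [{"id": 1, "total": 40.0}, {"id": 2, "total": 120.0}]

  # Mutating style: walk the collection and change each object in place.
  def apply_discount_in_place(orders):
      for order in orders:
          if order["total"] > 100:
              order["total"] *= 0.9

  # Collection style: describe the new collection; the input stays untouched.
  def apply_discount(orders):
      return [
          {**o, "total": o["total"] * 0.9} if o["total"] > 100 else o
          for o in orders
      ]

  print(apply_discount(orders))  # [{'id': 1, 'total': 40.0}, {'id': 2, 'total': 108.0}]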

4
snarfy 2 hours ago 1 reply      
Pure functional programs either don't work or are bug-free. At a high level the entire app can be viewed as one continuous evaluation, which is kind of beautiful. It could transfer to an FPGA without needing a von Neumann simulation.
5
fjarlq 17 hours ago 1 reply      
As an additional exploration of these excellent ideas, I recommend the textbook How to Design Programs (HtDP):

https://en.wikipedia.org/wiki/How_to_Design_Programs

It's available online: http://htdp.org/

6
einhverfr 16 hours ago 2 replies      
It's a good paper, but what it hints at (but doesn't really come out and say) is that in separating concerns, functional programming actually eliminates concerns from the programmer. This isn't really just a question of the language -- you can do FP in any language.

But by thinking about programming differently, we can eliminate concerns entirely. This is what garbage collection does, and it is what FP does.

7
jonahx 22 hours ago 1 reply      
Related: Does anyone know of an html / epub / mobi link to the original paper (I could find only the pdf)?
8
icebraining 1 day ago 0 replies      
(2007)
25
A collection of disassembled and commented source of parts of MS-DOS 1.0
32 points by Audiophilip  14 hours ago   6 comments top 4
1
WalterBright 10 hours ago 0 replies      
DOS looks trivial to me these days. I kick myself for not going into business making a workalike clone back then.

On a related note, I've kept most of my old machines, except (sadly) my H11. They're just piled up in the basement. Last year I wondered what I had on those old drives, and tried to turn on the computers. None of them would boot. One made a popping sound and smoke came out.

My next attempt was to read the hard drives. My 286 drive wouldn't fit anything modern. The oldest drive I could hook up was in the 486 box, and I had to try plugging it into many machines before the old IDE drive was recognized. I was anxious to see what was on it.

Like opening old safes, it turns out there was nothing much on it. I was surprised at how simple my old programs were, and how small.

2
mseepgood 12 hours ago 2 replies      
Why disassemble when the original source code is available? http://www.computerhistory.org/atchm/microsoft-ms-dos-early-...
3
dezgeg 12 hours ago 0 replies      
If you're interested in this kind of stuff, be sure to check the author's (Michael Steil) blog: http://www.pagetable.com/ - he has written all kinds of juicy blog posts about low level programming.
4
yuhong 14 hours ago 0 replies      
PC-DOS 1.0.
26
Cryptography for Springboard: Hashlet
4 points by imrehg  6 hours ago   discuss
27
TOCC: A Tool for Obsessive Compulsive Classifiers
51 points by pmoriarty  20 hours ago   2 comments top
1
aragot 3 hours ago 1 reply      
Funny reference in the name: "TOC" in French stands for OCD - "Trouble Obsessionnel Compulsif". http://fr.m.wikipedia.org/wiki/Trouble_obsessionnel_compulsi...
28
Cause And Effect: A New Statistical Test That Can Tease Them Apart
112 points by mazsa  1 day ago   51 comments top 19
1
learnstats2 22 hours ago 3 replies      
This statistical test for causation (X->Y) is based on the idea that X and Y each contain noise - noise present in X flows causally to Y but noise present in Y won't flow back to X.

But, even if true, it isn't clear that this makes for a good test. For example, it's plausible that Y could have a damping effect and remove noise, which would reverse the results of the test.

"They say the additive noise model is up to 80 per cent accurate in correctly determining cause-and-effect." This has been exaggerated by Medium from "accuracies between 65% and 80%" in the original article.

But a coin-flip model should be 50% accurate. 65% accuracy is unconvincing. The journal article's conclusion admits that their results are not statistically significant in any sense. As such, the results do not even meet the weakest possible scientific standard. They couldn't reproduce earlier published results in this field (typical of publication bias).

Their final paragraph concludes that there is surely a method of doing this, but they just haven't found that method here.

In my opinion, the results do not support that conclusion.

2
panarky 22 hours ago 0 replies      
Here's a tool Google built called CausalImpact to go beyond correlation and get at cause and effect in time-series data.

http://google-opensource.blogspot.com/2014/09/causalimpact-n...

And their related research into using Bayesian structural time-series models to infer cause and effect.

http://research.google.com/pubs/pub41854.html

3
cafebeen 23 hours ago 2 replies      
This isn't as generally useful as the title suggests... due to these assumptions:

"that X and Y are dependent (i.e., PXY=PXPY), there is no confounding (common cause of X and Y), no selection bias (common effect of X and Y that is implicitly conditioned on), and no feedback between X and Y (a two-way causal relationship between X and Y)"

4
mazsa 1 day ago 3 replies      
"The key assumption is that the pattern of noise in the cause will be different to the pattern of noise in the effect. Thats because any noise in X can have an influence on Y but not vice versa."[...] "Thats a fascinating outcome. It means that statisticians have good reason to question the received wisdom that it is impossible to determine cause and effect from observational data alone." https://medium.com/the-physics-arxiv-blog/cause-and-effect-t...
5
righttoremember 21 hours ago 0 replies      
In econometrics this approach is called "identification through functional form" because it relies on assumptions about the exact distribution of some of the variables.

The main problem is that it requires making assumptions that are very hard or impossible to test. Nonetheless it's an interesting idea, but I doubt this method can replace randomized trials or instrumental variables except in a tiny fraction of cases.

6
Zak 11 hours ago 0 replies      
Ideally, upon finding correlated variables, one would perform an experiment, changing one to see if it causes the other(s) to change. Looking at noise enables the same principle to be applied when the researcher lacks the ability to perform such an experiment.
7
jsprogrammer 22 hours ago 4 replies      
>Obviously temperature is one of the causes of the total amount of snow rather than the other way round.

Can someone explain how this is 'obvious'?

How can this be a claimed scientific way to tell cause and effect and then drop a sentence like that in the middle of the explanation?

Even if you accept that it's true that temperature determines snowfall, it seems there is likely some feedback loop in there. The fallen snow doesn't just disappear, wouldn't it affect later measured temperatures? Remove a bunch of (cold) snow from an area and the average temperature of the area should increase faster than if you had left the snow, no?

8
fitshipit 22 hours ago 0 replies      
It's like all statistical tests -- it works really well (provably well) when the assumptions it requires hold. However, it's usually impossible to know whether those assumptions hold without already having the desired answer in the first place. That's why nonparametric tests are so popular (not saying they have much to do with the article at hand, but people are definitely willing to get less definitive results in exchange for making fewer assumptions).
9
raverbashing 21 hours ago 0 replies      
Would be interesting to test this in data such as these: http://www.tylervigen.com/
10
xtacy 23 hours ago 0 replies      
Nice article. I think the idea of testing whether "X caused Y" by exploiting the fact that the relationship is not symmetrical has also been used by the "pseudo-causality" Granger causality test: http://en.wikipedia.org/wiki/Granger_causality

Also, causality in reality can be quite complicated if there are feedback loops: X-causes-Y-causes-X.
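
For reference, statsmodels ships a Granger-causality test; a quick sketch on synthetic data (the function is as documented in statsmodels, while the data and lag choice are made up):

  import numpy as np
  from statsmodels.tsa.stattools import grangercausalitytests

  rng = np.random.default_rng(1)
  n = 300
  cause = rng.normal(size=n)
  noise = 0.1 * rng.normal(size=n)
  effect = np.empty(n)
  effect[:2] = noise[:2]
  effect[2:] = cause[:-2] + noise[2:]  # effect follows cause with a two-step lag

  # Column order matters: this asks whether the 2nd column Granger-causes the 1st.
  data = np.column_stack([effect, cause])
  grangercausalitytests(data, maxlag=3)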

11
streptomycin 22 hours ago 0 replies      
Reminds me of http://www.pnas.org/content/104/16/6533.full - interesting, but probably only applicable to very simple systems. If you have various complex interconnections between components, simple A -> B reasoning is not helpful.
12
Xcelerate 18 hours ago 1 reply      
Heh, so maybe we can finally figure out if the "random" correlations in quantum mechanics are really random or if there's a cause.

(I'm joking of course, but has anyone ever actually rigorously analyzed quantum random data?)

13
_almosnow 14 hours ago 0 replies      
Interesting: a year ago this was one of the challenges at Kaggle. Given a set of sample pairs, determine which one of them (if any) is causing the others.
14
RoboTeddy 20 hours ago 0 replies      
From the paper (http://arxiv.org/abs/1412.3773),

> Concluding, our results provide evidence that distinguishing cause from effect is indeed possible by exploiting certain statistical patterns in the observational data. However, the performance of current state-of-the-art bivariate causal methods still has to be improved further in order to enable practical applications.

15
keithpeter 22 hours ago 0 replies      
Has anyone zipped up the data sets referenced in the paper in a handy file at all? Just before I start right clicking...
16
NoMoreNicksLeft 16 hours ago 0 replies      
> Another dataset relates to the daily snowfall at Whistler in Canada and contains measurements of temperature and the total amount of snow. Obviously temperature is one of the causes of the total amount of snow rather than the other way round.

This isn't obvious to me at all.

It's true that rainfall causes trees (and that drought can kill them). But it's less obviously true that trees (in massive numbers) can affect the regional climate enough to cause rain. They do this by pumping water out of the ground and increasing humidity, by changing wind patterns, etc.

When trees cause rain, it's a lesser effect than when rain causes trees, but it's still there.

So when someone tells me that it's obvious that hundreds of thousands of tons of frozen, powdered water laying on the ground doesn't cause the temperature, I have to wonder if they're a retard.

17
snowwrestler 13 hours ago 0 replies      
This strikes me as a fairly useless test, because it only works in situations where you are sure there are only 2 variables, and you're trying to determine which one is dependent. Such a situation only happens in a carefully controlled experiment--and in those situations, you can easily determine causation by creating counterfactual tests.

What people really want to know is whether statistics alone can be used to exclude hidden shared causes from an uncontrolled data set. Even the article itself uses such an example: the impact of hormone replacement on heart disease.

This test does not further that goal. I remain convinced that it is impossible. In fact to my understanding, that is the origin of the scientific method: rather than accepting conclusions from the first data set, science constructs hypotheses and tests to exclude hidden causes.

18
yarrel 21 hours ago 0 replies      
Sonitus post hoc ergo sonitus propter hoc.
19
dang 23 hours ago 4 replies      
We changed the URL from http://arxiv.org/pdf/1412.3773v1.pdf because, with some exceptions (such as computing), HN tends to prefer the highest-quality general-interest article on a topic with the paper linked in comments.

This comes up often enough that it is a good case for linking related URLs together, which is something we intend to work on in the new year.

29
Reverse-engineering the Kayak app with mitmproxy
63 points by shbhrsaha  19 hours ago   29 comments top 9
1
101914 16 hours ago 3 replies      
Note that mitmproxy, which requires a Python install, is not necessary to monitor what is being sent out from your computing device.

The same results can be achieved using only socat and the openssl binary.

While I understand the terminology is popular, I would not call this "reverse-engineering"; to me this is simply viewing your own traffic.

I believe users have a right to see the traffic they (or the apps they use) are sending, and for security reasons alone they should monitor what is being sent. https plus third party CA usage complicates such transparency, making proxying techniques necessary.

I wish more users would view their own traffic.

Keep up the good work.

2
ketralnis 14 hours ago 2 replies      
I don't know about Kayak's economics, but at least at Hipmunk we pay our data providers per search and it's really quite expensive. If they aren't offering an API anymore, it's probably because it was too pricey to operate.

You could easily cost a travel search company thousands of dollars very very quickly using an API they don't want you to use. I don't know if it's illegal or not, but it's certainly immoral.

3
chrisan 15 hours ago 2 replies      
Another great mitm proxy is Charles Proxy http://www.charlesproxy.com/.

It has a really nice UI for looking at JSON responses such as Kayak's. Sometimes a collapsible tree is invaluable in looking through a response.

The easy filtering and formatting is primarily why I like it so much. Here is how it handles SSL in various ways: http://www.charlesproxy.com/documentation/using-charles/ssl-...

Here is a screen shot of my iPhone Kayak app request for comparison http://imgur.com/gvKB6fr

4
dthakur 15 hours ago 1 reply      
> From that folder, get the mitmproxy-ca-cert.pem file onto your mobile device by emailing it to yourself, for example. Then follow certificate installation steps for iOS or Android.

You can just go to http://mitm.it on the device. It's a 'magic domain' for the proxied host. See http://mitm.it/doc/certinstall/webapp.html
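
Once the certificate is trusted, the proxied traffic can also be logged programmatically with a mitmproxy addon. A minimal sketch, assuming a recent mitmproxy release (the inline-script API that existed in 2014 looked different), run with mitmdump -s intercept.py:

  # intercept.py: log the Kayak app's API traffic as it flows through the proxy.
  from mitmproxy import http

  def request(flow: http.HTTPFlow) -> None:
      # Print every request the proxied device makes to Kayak's servers.
      if "kayak" in flow.request.pretty_host:
          print(flow.request.method, flow.request.pretty_url)

  def response(flow: http.HTTPFlow) -> None:
      # Peek at the start of JSON responses, e.g. to spot fields like "searchid".
      if "kayak" in flow.request.pretty_host:
          if "json" in flow.response.headers.get("content-type", ""):
              print(flow.response.text[:200])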

5
javiercr 3 hours ago 0 replies      
Interesting! We've used a similar technique to reverse-engineer mobile apps from different banks in order to create a Ruby gem to fetch bank data (balance and transactions).

https://github.com/ismaGNU/bankScrap

6
vertak 15 hours ago 3 replies      
Does anyone know the possible legal repercussions of open-sourcing a web service's API when the company doesn't explicitly grant permission? This is really neat, but could also raise the ire of a service that doesn't offer an API for a reason.
7
nnd 4 hours ago 0 replies      
I wouldn't call this reverse-engineering.

How are the UUID and HASH generated? Are they unique to every installation?

8
xrjn 16 hours ago 1 reply      
Great writeup; there's definitely potential to do something cool, especially if it's possible to get around any tracking and the price manipulation that follows. I tried installing your demo client; however, on running it I got the following error:

  root@kayak:~/kayak-mobile-client# python client.py
  Departure airport code: LBG
  Destination airport code: HAM
  Departure date (MM/DD/YY): 12/26/14
  Traceback (most recent call last):
    File "client.py", line 56, in <module>
      searchid = json.loads(r.text)["searchid"]
  KeyError: 'searchid'
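
That KeyError means the response carried no "searchid" key, most likely an error payload. A hypothetical guard for the demo client (the field name comes from the traceback above; the server's actual error format is an assumption):

  import json

  def extract_searchid(response_text):
      # Fail loudly instead of raising a bare KeyError on unexpected payloads.
      data = json.loads(response_text)
      if "searchid" not in data:
          raise SystemExit("Unexpected response, no searchid: %r" % (data,))
      return data["searchid"]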

9
ianlevesque 16 hours ago 1 reply      
A useful technique for sure. The only technique I know to slow this down is to use certificate pinning, but it's probably pointless. Some of your users are probably extremely motivated (like ours [1]) and it's obvious to them that what they are doing is unsupported.

1. http://difm.eu/dox/

30
Openbay name clash
42 points by jontro  6 hours ago   11 comments top 5
1
mosburger 3 hours ago 1 reply      
Yeah, I'm one of the developers at www.openbay.com (I'm not Adam, I'm one of the other web developers).

We've spent a couple of years trying to get our startup off the ground, and this really sucks for us. And I'm a fan of TPB - I've used it, and I admire their work. This isn't a case of some big corporation trying to exercise its legal muscle to ward off bad PR; we're a few people running out of a co-working space in Cambridge.

Armchair lawyers will likely chime in how this isn't a "real" name clash and how they're different things in different lines of work (and they'd be right), but the thing that really sucks for us is SEO. Many users (especially older ones) have gotten in the habit of just typing "openbay" or "open bay" into google and not the URL bar - it's actually a pretty large percentage of our organic search hits. This happens even more if/when they hear about us in the press. What those users get for search results right now are a bunch of news sites about "piracy," which isn't a great thing, particularly for that demographic.

It'd be so awesome if the project could be renamed out of the kindness of the maintainers' hearts, but I know that's not likely. The cat's already out of the bag now. But we figured it wouldn't hurt to ask nicely.

2
eloisant 39 minutes ago 1 reply      
Beyond the name thing, I believe the source code for Pirate Bay is useless.

It was a crappy website with bare-bones features; any web developer worth their salt could build a better torrent site in a couple of weeks.

The value of Pirate Bay was their user base, uploader base and their resilience to legal threats and even a raid a few years ago. None of that has to do with the source code that powers the site.

3
unfunco 2 hours ago 0 replies      
Coincidentally, I had a failed project back in 2004 that was also called OpenBay (named after a Sharks Keep Moving song, in my case). I'm hoping your dignified request is listened to; it's refreshing to see something like this being handled politely.
4
vortico 4 hours ago 1 reply      
Name clashes are extremely common and almost inevitable, and unfortunately there's not much the first project's owner can do about it. The trademark office would likely consider "auto sales" and "media sharing site" to be as distant as possible, and there are multiple TLDs for a reason.
5
ibrahimcesar 5 hours ago 1 reply      
"Clash"?