hacker news with inline top comments  7 Sep 2014
Why I like Common Lisp
74 points by lispm  3 hours ago   41 comments top 9
brudgers 1 hour ago 4 replies      
This article illustrates one of the hurdles the Common Lisp community often sets before itself when it produces a piece of marketing. It points to resources that are not immediately obtained or that don't directly touch on the large subjects mentioned.

For example, one of the important capabilities mentioned in the article is getting close to the metal with profilers and hand tuning code and the runtime environment. Yet none of the links discuss this directly or provide tutorials.

The discussion about the wonders of CLOS suggests reading The Art of the Metaobject Protocol - a $35.00, 345-page dead-tree tome which Amazon will get to you in about a week if you opt for free shipping. Graham's ANSI Common Lisp is of course even harder to obtain - it's been out of print for years, and on Amazon you're in the realm of individual sellers [my anecdote is that it took about a month to get the copy I ordered - the seller was a nice person, but it was a couple of weeks before they read the email from Amazon letting them know someone had purchased their copy].

The author goes on to recommend LispWorks as a platform. It's $1500 for 32 bits in North America and $4500 if you want to use all 64 bits that your OS probably runs under. Sure, there's a free version, but it's 32-bit, not licensed for commercial work, and crippled. Don't get me wrong, the free version is probably fine for playing around with, but this article targets people for whom tuning the heap and GC are appealing, and that tends to be a professional rather than hobbyist market segment.

The "early Web 1.0" vibe is unfortunately common when the Common Lisp community attempts to promote Common Lisp. Let's face it, the linked Lisperati site looks a GeoCities page [a good one]. The LispWorks page requires four clicks to get to the price of non-crippleware. The author's own guide links to academic papers and black and blue on white websites.

The issue is that the Common Lisp community is not just sending signals that most of the potential market ignores; it's sending signals that turn the potential market off. I will unequivocally say that crippleware is no longer a viable approach to achieving significant developer mindshare. The landscape is open source, and $4500 a seat [while calling it Enterprise Edition] is only going to hit some [potentially profitable] edge cases.

Of all the places accessible from the author's post, the only one that looks in touch with the present is the original article on BlogSpot. Everything else screams "This ain't for you" at most people.

It makes Racket look hip. And that's hard to do.

TheMagicHorsey 27 minutes ago 0 replies      
I'm no expert, but it seems like it's easier to get started with Clojure than Common Lisp. There are tons of free Clojure resources and tutorials. There is a great IDE that works quite well (LightTable), and the command-line tools with Leiningen are pretty sweet too.

I realize that you can probably optimize your Lisp code a lot more than you can your Clojure code, since Clojure is on the JVM, but right out of the box Clojure performs as well as naive Common Lisp, so it's probably a more attractive Lisp for most beginners.

csdrane 1 hour ago 1 reply      
Not too long ago I was researching what Lisp to learn. I contemplated Common Lisp, but ultimately chose to learn Clojure. The latter won because the development community seems to have more life to it. Additionally, the fact that one can import existing Java libraries is a huge plus. Very much enjoying everything so far.
pankajdoharey 8 minutes ago 0 replies      
This same article was published on Planet Lisp (http://planet.lisp.org/) by Pascal Costanza.
pivo 2 hours ago 3 replies      
I have never been more productive than when I'm developing with Common Lisp in Emacs with Slime. It's a really fantastic environment.
hvd 21 minutes ago 0 replies      
Coincidentally, I am working through Land of Lisp and recommend it: http://www.amazon.com/Land-Lisp-Learn-Program-Game/dp/159327... Do I see myself using Lisp in production code? Not in the near future. I do think there is value in learning something that exposes you to thinking in different ways; what I've gained so far is that Python is pretty Lispy!
yodsanklai 1 hour ago 3 replies      
What are the advantages of Lisp over ML (more specifically, I know Scheme and OCaml)? My knowledge of Scheme is probably incomplete, but from what I remember, Scheme is OCaml without static typing, sum types, pattern matching, or a module system. Scheme is more dynamic (dynamic types, code as data, symbols), but I find type safety much more important (and in a language with type inference, it comes with little overhead).

Am I missing something about Common Lisp?

matrix_nad 2 hours ago 5 replies      
How long was the learning curve to CL, if you don't mind me asking?

I remember learning Lisp, and all that prefix (Polish) notation was very unorthodox to me.

Modernizing less
31 points by zdw  1 hour ago   3 comments top 2
_delirium 14 minutes ago 0 replies      
Interesting! From what I'm reading so far, this seems not to be Illumos-specific (despite being motivated by Illumos's needs), but rather a cleanup that'd be applicable to any POSIX-like system. A fork of less that assumes POSIX-like functionality and cleans up a lot of things accordingly does seem like a worthwhile project. A bit more "unixy" design that uses the system versions of available functionality (like globbing and UTF-8!) should also reduce the risk of weird bugs & inconsistencies with how the rest of the system operates.
JoshTriplett 30 minutes ago 1 reply      
> Make less use getopt() instead of its byzantine option parser (it needed that for PC operating systems. We don't need or want this complexity on POSIX.)

This is the kind of thing that always astonishes me to see in a codebase: why reinvent something rather than just finding and including a compatibility implementation? Just grab an appropriate getopt.c and compile it in if the platform doesn't have one, then let the rest of the code pretend every platform has one. (Preferably an implementation of getopt_long; a quick search turned up some licensed under 3-clause BSD.)
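The commenter's point translates outside C, too: Python's standard getopt module deliberately mirrors the C getopt() interface. As a rough sketch of the kind of tiny loop that replaces a hand-rolled parser (the flags here are invented for illustration, not less(1)'s real options):

```python
import getopt

# Minimal getopt-style option loop. The flags (-n/--lines, -q/--quiet)
# are made up for this example; the shape of the loop is the point.
def parse(argv):
    opts, args = getopt.getopt(argv, "n:q", ["lines=", "quiet"])
    config = {"lines": 10, "quiet": False}
    for flag, value in opts:
        if flag in ("-n", "--lines"):
            config["lines"] = int(value)
        elif flag in ("-q", "--quiet"):
            config["quiet"] = True
    return config, args

print(parse(["-n", "5", "--quiet", "file.txt"]))
# → ({'lines': 5, 'quiet': True}, ['file.txt'])
```

The whole parser is the loop plus the option spec strings; everything else in the program can then pretend the platform always had getopt.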

OkCupid's Unblushing Analyst of Attraction
38 points by boh  2 hours ago   10 comments top 6
burgers 4 minutes ago 0 replies      
The spookiest thing is not the data or the experiments, but the off-the-cuff conclusions.

For example:

> As a group, for instance, Latino men rated Latinas as 13 percent more attractive than the average for the site, while they rated African-American women 25 percent less attractive.

That is an insane generalization.

What metric are we using to determine this? Is it possible that people who tend to participate in rating the looks of potential mates are more inclined to align by race, and not purely Latino men? That is an enormous initial generalization to be making at the outset.

> Witness the actions of 35-year-old heterosexual men on OkCupid. These men typically search for women between the ages of 24 and 40, Mr. Rudder reports, yet in practice they rarely contact anyone over 29.

Again on this one, was age the only possible factor that caused the under-30s to be contacted more by 35-year-old men? Isn't there anything else that might be different about an under-30 profile that causes more communication to occur?

This is the actual scary stuff to be publicly releasing as real science. Just as the general public is ill-informed about the experiments going on, they are also not aware of what metrics are used to determine these results. In my experience, many of these metrics are not as concrete as they appear, and are full of pushing the data to fit a narrative.

Combine that with the fact that most of this data is proprietary and private with no way to be peer reviewed. Dangerous stuff.

cbhl 17 minutes ago 2 replies      
I really enjoyed reading OkTrends posts, but having near radio silence for three years (apart from one post last July) followed by a full-on PR blitz for a new book (comes out on Tuesday) makes me a little sad.

This piece almost makes it sound as if Rudder has been blogging based on OkCupid results this whole time... and if you go to the OkTrends site, you see huge inline placement for Rudder's new book.

keerthiko 1 hour ago 0 replies      
I have a lot of respect for the people behind OkCupid. They tread a lot of grey areas that many online entities are scared to touch.

On the whole, I also think usage of willfully submitted and gathered user data for things other than advertising is amazing. Too many companies have invested too many resources into data analytics purely for increasing revenue from advertising.

I think if companies can find ways to gain better social understanding and from that provide better value to end users (rather than to advertisers as in the advertising model) in a way that gets them revenue, it will be amazing. I am all for social media companies doing research from their user data, in the hope that we can move on from the ad-driven web.

kachnuv_ocasek 6 minutes ago 0 replies      
Off-topic, but why am I being told to log in on the NYT website?
norswap 10 minutes ago 0 replies      
The blog was dormant for 3 years, apparently they just posted a new post... Of course I unsubscribed last month :D
kevinthew 42 minutes ago 2 replies      
Might be cool stuff they're doing, but it doesn't make it any less unethical, no matter how much they try to whitewash it. This is slippery-slope stuff.
Naval Charts
20 points by vinnyglennon  1 hour ago   5 comments top 4
alephnil 3 minutes ago 0 replies      
It is hard to check if you don't have a chart to check against, but in the waters I used to sail in Norway, all the lighthouses seem to be there, yet no rocks or areas of shallow water are marked, which makes the map useless for navigation there. That may come, but until it does, it's not safe to navigate using these maps.

I also dislike that the lines extending from the lighthouse sectors are not drawn, as those are very useful when navigating at night. They do have the intervals of the lighthouses, which is good. They also have a download function, which is important at sea, since you cannot expect to be online at all times, but I did not get it to work. Since I've always used paper charts at sea, I can't tell whether they offer the right download formats.

All in all it is a nice testbed for trying out concepts, but it's still not ready for prime time. A sea chart has much more stringent accuracy requirements than street maps, and this is still below the bar.

litmus 37 minutes ago 0 replies      
I'm not a sea captain or anything, but I find it interesting that there is little reference or comparison to S-57/S-63 data (the vector-based nautical chart standard maintained by the International Hydrographic Organization), either in scope or in terms of future goals. The site is loading slowly for me, so I didn't get a chance to navigate extensively; from the couple of tiles I managed to see, I couldn't find any depth information (sounding data).

In theory, there is some potential here, because S-57 is an old format ill-suited for the web, or for desktop-based systems for that matter. They are in the process of revamping the format (S-101), but we all know how well top-down, committee-based formats are designed. Any bottom-up format that spreads and proves itself in the field would be exciting. It'll be interesting to see what the open-source data movement could do in this area to disrupt the vast network of national hydrographic offices that update the depth values of the world's oceans on a weekly and monthly basis and sell them to sea vessels the world over.
markbnj 54 minutes ago 1 reply      
It's a nice mapping site, but how do these qualify as charts? Am I missing something? I didn't see any depth or current information, aids to navigation, etc.
robinhoodexe 1 hour ago 0 replies      
Really nice, although the site seems to load somewhat slow for me...
The Mindful Brain: Cortical Organization, Theory of Higher Brain Function (1978) [pdf]
17 points by MichaelAO  1 hour ago   2 comments top 2
MichaelAO 1 hour ago 0 replies      
Jeff Hawkins, in his book On Intelligence, describes Mountcastle's 1978 article "An organizing principle..." as "the Rosetta Stone of neuroscience".
aswanson 14 minutes ago 0 replies      
Thanks, insightful.
Netflix and the Future of Television
4 points by jfaat  18 minutes ago   discuss
Musical Scale Generator
21 points by bozho  4 hours ago   4 comments top 3
baddox 1 hour ago 0 replies      
This looks neat. I wish I could try it on my iPad.

There is a lot of interesting mathematics behind music. I particularly enjoy reading about different temperaments, which are our attempts to compromise between just intonation (the harmonic-based scales the article mentions, which ostensibly sound the most "natural" and "correct" in a given key), and physical instruments, which we usually want to be able to play in multiple keys without significant adjustment. Twelve-tone equal temperament is the most common temperament in Western music, where the frequency of each semitone is the frequency of the previous semitone multiplied by the twelfth root of two. With equal temperament instruments, the error between just intonation intervals is the same regardless of the key.
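The arithmetic in the paragraph above is compact enough to sketch; assuming A4 = 440 Hz as the base (my choice for the example), each equal-tempered semitone multiplies the frequency by the twelfth root of two:

```python
# Twelve-tone equal temperament: every semitone is the previous
# frequency times 2**(1/12), so 12 steps exactly double the frequency.
def equal_tempered_scale(base_hz, steps=12):
    return [base_hz * 2 ** (n / 12) for n in range(steps + 1)]

scale = equal_tempered_scale(440.0)  # A4 up to A5
print(round(scale[12], 1))   # → 880.0, the octave lands exactly
print(round(scale[7], 2))    # → 659.26, the equal-tempered "fifth";
                             #   just intonation's 3/2 ratio would be 660.0
```

The seven-semitone step illustrates the compromise: it sits about 2 cents flat of the just 3/2 fifth, the error being identical in every key.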

There is a rich history of temperaments. An early attempt, attributed to Pythagoras, illustrates the impossibility of constructing a temperament perfectly from the simple ratios generated by harmonics:


The Wikipedia article on the tuning of Bach's aptly-titled Well-Tempered Clavier is also fascinating:



Edit: this video is a great demonstration of the errors between just intonation intervals and our equal temperament compromise: http://youtu.be/6NlI4No3s0M.

pierrec 1 hour ago 0 replies      
The scale generation algorithm described here is really interesting - it starts from the natural core of harmony (using rational relationships between frequencies) and gives you the option to keep the resulting naturally tempered notes, or to adapt them to the modern-day 12-tone equal temperament (12TET) and its irrational, twelfth-root frequencies.

The question of whether 12TET is "natural" or not is an interesting one; in my opinion, it was great for a few centuries, but at some point music will move beyond 12TET. Take a look at one of the richest websites exploring this question [1], whose author has a different opinion from mine.

As for skipping the 7th harmonic -- I'd say you shouldn't do it. If you scroll down [1] until you get to the "n-tone equal temperament" graph, you'll find one of the nicest justifications ever for 12TET. However, if you added more harmonics to that graph, you'd find that the 7th does not fall anywhere close to the 12TET frequencies: that's why we're not used to hearing it, and it sounds the most alien to us.
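The claim about the 7th harmonic is easy to check numerically. A rough sketch: fold each harmonic into one octave and measure its distance (in cents, 1200 per octave) from the nearest 12TET step:

```python
import math

# Distance in cents from a harmonic (folded into one octave) to the
# nearest equal-tempered semitone (multiples of 100 cents).
def cents_from_12tet(harmonic):
    cents = 1200 * math.log2(harmonic) % 1200   # position within the octave
    nearest = round(cents / 100) * 100          # closest 12TET step
    return abs(cents - nearest)

for h in (2, 3, 5, 7):
    print(h, round(cents_from_12tet(h), 1))
# 2 → 0.0 (octave), 3 → 2.0 (fifth), 5 → 13.7 (major third), 7 → 31.2
```

The 3rd harmonic lands within about 2 cents of a 12TET fifth, the 5th is off by roughly 14 cents, and the 7th misses by about 31 cents, which is consistent with the comment's point that it sounds the most alien.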

[1]: http://www.geocities.jp/imyfujita/wtcpage004.html

mrcactu5 1 hour ago 1 reply      
Have you considered examining Jazz scales? https://en.wikipedia.org/wiki/Jazz_scale

They start with the modes - Ionian, Phrygian, Lydian... - which are embellished with certain accent notes.

* https://en.wikipedia.org/wiki/Bebop_scale
* https://en.wikipedia.org/wiki/Lydian_augmented_scale
* https://en.wikipedia.org/wiki/Aeolian_dominant_scale
* https://en.wikipedia.org/wiki/Altered_scale

There are many examples in jazz recordings and in later classical composers like Debussy and Ravel.

At Alibaba, the Founder Is Squarely in Charge
23 points by dnetesn  21 hours ago   2 comments top 2
blutoot 1 hour ago 0 replies      
Is Alibaba equivalent to Procter & Gamble? If so, what is the vision of Jack Ma? I'm still not sure how one articulates a vision for such a conglomerate.
michaelochurch 45 minutes ago 0 replies      
> With the relationship souring, Mr. Ma transferred ownership of Alibaba's fast-growing online payment service, Alipay, to an entity that he controlled. He didn't get the permission of Alibaba's board. He just went ahead and did it.

This is one thing I love about Asia, when it comes to business. (There are many things not to love, but I'll avoid those for now.) People act instead of playing the emasculating permission-seeking games you see around here.

In North America-- and it's worst in the supposedly "entrepreneurial" VC-funded ecosystem-- business people have this effete need to manicure their reputations, which means they'll never go all-in on one strategy or business. The "This is right, I'm fucking doing it" impulse is not something you see in the U.S. where "failure" means "$350k/year EIR gig" [0]. In that situation, people are more inclined to play nice and let themselves lose.

[0] There are, of course, plenty of actually poor people in the U.S. for whom failure means something worse than $350k/year EIR gig-- in fact, that's probably 99.5% of Americans-- but the U.S. class system is pretty smooth-running at this point and keeps those actually poor people away from decisions that matter. For people who actually get to make business decisions in the U.S., failure has no real consequences. Conversely, if there's any chance that you might have a real stake in winning or something to lose in failure, the U.S. class system is pretty good at making sure you don't get any real power.

On the other hand, if you grew up in a place where failure has actual painful consequences (beyond mild embarrassment) you're more likely to play to win, rather than asking "How will this affect my reputation and my buddies?" and settle for a draw.

In the U.S., people play permission-seeking games because their goals aren't to win in business, but to get on the boards of prestigious non-profits and get their children into prep schools (which means that "just fucking do it" business moves are too risky; you might compete too hard against someone on the board of something important, like the museum where the deans of various prep schools are also on the board). In Asia, they play to win (like Ma) because they're only a generation or two removed from having no other choice.

I tend to have more respect for people who act swiftly out of a sense of self-preservation than I do for U.S.-style social climbers who decline to act boldly because they're afraid it'll keep their kids out of MBA school.

Opposed-Piston Engine
32 points by acd  4 hours ago   7 comments top 4
cstross 4 hours ago 1 reply      
See also the Junkers Jumo 204: http://en.wikipedia.org/wiki/Junkers_Jumo_204

And its descendant, the Napier Deltic engine: http://en.wikipedia.org/wiki/Napier_Deltic

And in particular, the animation of the Deltic layout (three cylinders, three crankshafts (one of them contra-rotating), six(!) pistons): http://en.wikipedia.org/wiki/Napier_Deltic#mediaviewer/File:...

(Opposed piston engines aren't exactly new.)

ferongr 31 minutes ago 1 reply      
The animation shows a turbocharger, but it also shows that the engine operates on a two-stroke cycle. Conventional wisdom would say the turbocharger is a waste, since both intake and exhaust ports are open at the same time, preventing compression of the mixture and potentially wasting some of it as it is forced out of the exhaust port. I'd speculate that the turbocharger is small and used mainly to introduce the mixture into the chamber with enough velocity to make it swirl, while at the same time improving the removal of exhaust gases.
JeanSebTr 1 hour ago 0 replies      
> To play the animation, please click the arrow located in the center of the image below.

That's the most hand-holding approach I've read telling me to play a video.

papaf 1 hour ago 0 replies      
Can anyone recommend any books on the design of combustion engines? I am looking for a textbook that would be approachable to someone without an engineering background but comfortable with undergraduate-level mathematics.
American Super Computing Leadership Act (2013)
7 points by mjstahl  1 hour ago   2 comments top
Breakthrough in light sources for new quantum technology
5 points by lelf  1 hour ago   discuss
BuzzFeed: An Open Letter to Ben Horowitz
44 points by nns  6 hours ago   17 comments top 6
boomzilla 1 hour ago 5 replies      
I looked at the a16z portfolio [1] and could not find any "real" liquidity event, meaning an IPO or big acquisition. There are a number of really high-valuation companies, though, whose valuations are mostly based on the last funding round, often led by a16z. Now, a natural question is who exactly is benefiting from the high valuation? OK, certainly not the employees, as most of them are not vested, and even if they are, they can't sell to anyone, and there is no cash dividend. The founders and top management can sometimes negotiate better deals financially in these funding rounds [2], which in general can be resume boosters for them, so they do benefit somewhat from the high valuation. The LPs are not exactly getting anything at this time, as the gain is not yet realized. That leaves the VCs, who have a huge incentive to lead big rounds, as their incomes are proportional to the size of the funds they raise from LPs, as detailed in this HBR article [3]. Given this dynamic, I am not surprised to see average startups with average products get funded at hugely inflated valuations. I just hope that the LPs are investing their own money and are not retirement-fund managers (specifically, not MY retirement fund).

[1] http://a16z.com/portfolio/

[2] http://allthingsd.com/20111001/vcs-unite-chamath-palihapitiy...

[3] http://blogs.hbr.org/2014/08/venture-capitalists-get-paid-we...

jokull 1 hour ago 2 replies      
His main argument is that by funding BuzzFeed he is "contributing to intellectual decrepitude" of young readers. This is an outrageous statement. BuzzFeed started out with socially optimised content and was one of the first to recognize the power of sharing over front-page habits. Today they employ many great journalists and are creating high-quality content that is much easier to consume and spreads more quickly (admittedly along with some pretty banal stuff). A16Z entered the partnership based on future prospects, nothing else. BuzzFeed may or may not become a good source of enlightened reporting in the future, but at least that is their goal, and they have a pretty good chance at it with a generation that mostly consumes through social media.
camillomiller 36 minutes ago 1 reply      
Is anybody here really convinced that BuzzFeed isn't a fad? Its numbers are so badly intertwined with social media platforms that they actually depend on the continued success of the platforms themselves. How is that a good business strategy?
jgalt212 1 hour ago 1 reply      
Wasn't this Chris Dixon's deal? I mean, I'm sure Ben and the other partners had to sign off on it, but I think Chris led this one.
sparkzilla 1 hour ago 0 replies      
blutoot 1 hour ago 0 replies      
"I decided to put 10,000 of my own money, just to see some of the ideas Im nurturing could fly." Maybe you wanna edit your comment now?
Safely Creating Temporary Files in Shell Scripts (2005)
17 points by mapleoin  3 hours ago   2 comments top
e28eta 1 hour ago 1 reply      
Section 3.5 doesn't seem very safe to me, because I think it allows the user running the script to dictate where the directory will be created via an environment variable.

I don't know what specifically would be gained with that control. Maybe an attacker could specify a TMPDIR that resolves to a path on a FUSE mount and start doing nefarious things with the data in the tmp file?
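The worry is easy to demonstrate concretely: most temp-file APIs, mktemp(1) included, consult $TMPDIR before any built-in default, so whoever controls the environment controls the location. A minimal sketch using Python's tempfile as a stand-in for the shell tools the article discusses:

```python
import os
import tempfile

# A script that trusts $TMPDIR lets whoever sets the environment pick
# where temp files land. Simulate the scenario: point TMPDIR at a
# directory the "attacker" controls (a FUSE mount, say).
attacker_dir = tempfile.mkdtemp()     # stand-in for an attacker-chosen path
os.environ["TMPDIR"] = attacker_dir
tempfile.tempdir = None               # discard the cached default location

fd, path = tempfile.mkstemp()         # honors $TMPDIR, much like mktemp(1)
print(path.startswith(attacker_dir))  # → True: the file went where TMPDIR pointed

os.close(fd)
os.remove(path)
```

A script that cannot tolerate this should either validate $TMPDIR (ownership, permissions, not a symlink) or ignore it and pin a known-safe directory.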

Structured Programming with go to Statements (1974) [pdf]
4 points by nkurz  3 hours ago   1 comment top
nkurz 2 hours ago 0 replies      
This is a long but prescient 1974 article by Donald Knuth on the tradeoffs between readability and performance. Following are some choice quotes to entice you to read the whole thing.


I should confess that the title of this article was chosen primarily to generate attention. There are doubtless some readers who are convinced that abolition of go to statements is merely a fad, and they may see this title and think, "Aha! Knuth is rehabilitating the go to statement, and we can go back to our old ways of programming again." Another class of readers will see the heretical title and think, "When are diehards like Knuth going to get with it?" I hope that both classes of people will read on and discover that what I am really doing is striving for a reasonably well balanced viewpoint about the proper role of go to statements. I argue for the elimination of go to's in certain cases, and for their introduction in others.


[I]t seems that fanatical advocates of the New Programming are going overboard in their strict enforcement of morality and purity in programs. Sooner or later people are going to find that their beautifully-structured programs are running at only half the speed--or worse--of the dirty old programs they used to write, and they will mistakenly blame the structure instead of recognizing what is probably the real culprit--the system overhead caused by typical compiler implementation of Boolean variables and procedure calls.


At the present time I think we are on the verge of discovering at last what programming languages should really be like. I look forward to seeing many responsible experiments with language design during the next few years; and my dream is that by 1984 we will see a consensus developing for a really good programming language (or, more likely, a coherent family of languages). Furthermore, I'm guessing that people will become so disenchanted with the languages they are now using--even COBOL and FORTRAN-- that this new language, UTOPIA84, will have a chance to take over. At present we are far from that goal, yet there are indications that such a language is very slowly taking shape.


My own programming style has of course changed during the last decade, according to the trends of the times (e.g., I'm not quite so tricky anymore, and I use fewer go to's), but the major change in my style has been due to this inner loop phenomenon. I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure (as in the change from Example 1 to Example 2) so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn't take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.


There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.


For some reason we all (especially me) had a mental block about optimization, namely that we always regarded it as a behind-the-scenes activity, to be done in the machine language, which the programmer isn't supposed to know. This veil was first lifted from my eyes in the Fall of 1973, when I ran across a remark by Hoare that, ideally, a language should be designed so that an optimizing compiler can describe its optimizations in the source language. Of course! Why hadn't I ever thought of it? [...]

The programmer using such a system will write his beautifully-structured, but possibly inefficient, program P; then he will interactively specify transformations that make it efficient. Such a system will be much more powerful and reliable than a completely automatic one. We can also imagine the system manipulating measurement statistics concerning how much of the total running time is spent in each statement, since the programmer will want to know which parts of his program deserve to be optimized, and how much effect an optimization will really have. The original program P should be retained along with the transformation specifications, so that it can be properly understood and maintained as time passes. As I say, this idea certainly isn't my own; it is so exciting I hope that everyone soon becomes aware of its possibilities.


The Future

It seems clear that languages somewhat different from those in existence today would enhance the preparation of structured programs. We will perhaps eventually be writing only small modules which are identified by name as they are used to build larger ones, so that devices like indentation, rather than delimiters, might become feasible for expressing local structure in the source language. (See the discussion following Landin's paper [59].) Although our examples don't indicate this, it turns out that a given level of abstraction often involves several related routines and data definitions; for example, when we decide to represent a table in a certain way, we simultaneously want to specify the routines for storing and fetching information from that table. The next generation of languages will probably take into account such related routines.

Program manipulation systems appear to be a promising future tool which will help programmers to improve their programs, and to enjoy doing it. Standard operating procedure nowadays is usually to hand code critical portions of a routine in assembly language. Let us hope such assemblers will die out, and we will see several levels of language instead: At the highest levels we will be able to write abstract programs, while at the lowest levels we will be able to control storage and register allocation, and to suppress subscript range checking, etc. With an integrated system it will be possible to do debugging and analysis of the transformed program using a higher level language for communication. All levels will, of course, exhibit program structure syntactically so that our eyes can grasp it.


A great deal of research must be done if we're going to have the desired language by 1984. Control structure is merely one simple issue, compared to questions of abstract data structure. It will be a major problem to keep the total number of language features within tight limits. And we must especially look at problems of input/output and data formatting, in order to provide a viable alternative to COBOL.

Stop and Seize
284 points by cgtyoder  6 hours ago   175 comments top 28
zorrb 1 hour ago 2 replies      
Reading through these comments, I see quite a bit of ignorance. A lot is along the lines of, "Why would you ever carry cash!?" - which fortunately a lot of cogent comments have successfully answered. Another is, "Why would you carry that much cash? It's so RISKY."

As someone who's had to carry around five-figure sums quite a bit in the past because of my profession (hint: it rhymes with "rambling"), my far-and-away fear, and it's not even close, is having the money taken by cops in an illegal search. This fear is justified: I've never been mugged or robbed, and no one I know in the same line of work has been either. But I know MANY people who have had funds seized by cops, ESPECIALLY in the South. Actually, the horror stories I personally know about have happened only in the South.

Usually it's the cop seeing an out-of-state license. He pulls you over, lies and says you were "swerving", or refuses to give you a reason. Then he forces you out of the car, takes the money, and sayonara.

wheaties 4 hours ago 1 reply      
The fourth amendment, protection against unlawful search and seizure, was put into the constitution as a direct consequence of the very laws that civil forfeiture was based upon. That makes me sad. How can we function as a society in which we've incentivized our own police force to take property from our citizens without proof of a crime? It just doesn't make sense.

See http://en.wikipedia.org/wiki/Fourth_Amendment_to_the_United_...

MattyRad 3 hours ago 2 replies      
I live in Reno, so with Burning Man recently I saw a disproportionate number of traffic stops off the freeway. The interesting thing about Burning Man, however, is that all vehicles are marked by the tell-tale dust from the Playa. Cops can easily identify which cars should be pulled over. I've seen cops rooting around in dust marked cars while concerned Burners watch.

I can't stress enough how important it is to know your rights [1]. Never, ever consent to a search. Never, ever "talk" to police (insofar as to assert your rights). They are not your friends. You are being pulled over because you are being accused of a crime. And all too often they will use word games and intimidation to get you to "cooperate" (forfeit your fifth amendment rights).

Even though Burners might be less likely to have large sums of money in their car (cash is forbidden as a medium of trade at Burning Man), suspicion of drugs is a likely story for a stop and seizure.

[1] https://www.youtube.com/watch?v=hkpOpLvBAr8

DominikR 2 hours ago 6 replies      
Why doesn't your society resist? Parts of your police force are now equipped like an army on a battlefield, and money can be seized without any justification.

And then there's NSA spying, torture and drone executions of suspected terrorists (even US citizens) without trial.

If someone had told me 20 years ago that the US would turn into this, I would have thought that person was insane.

Now it wouldn't even surprise me if the US turned into a giant gulag in the next 10 years.

Edit: I wonder how these policemen justify their actions; after all, they are going after their own people.

spodek 4 hours ago 1 reply      
Talk about Orwellian double-speak: they call it "asset forfeiture" and the "Equitable Sharing Program." It's theft and everybody knows it.

> "despite warnings from state and federal authorities that the information could violate privacy and constitutional protections."

It's illegal, pure and simple. No need to sugarcoat it.

> "Police can seize cash that they find if they have probable cause to suspect that it is related to criminal activity. The seizure happens through a civil action known as asset forfeiture. Police do not need to charge a person with a crime. The burden of proof is then on the driver to show that the cash is not related to a crime by a legal standard known as preponderance of the evidence."

Cops in the story act like the mere existence of a lot of cash is probable cause. This is the opposite of one of the major reasons for law and law enforcement: to protect people's property. We have police so we can feel secure with our property, not so that it gets taken and we have to fight to get it back.

CapitalistCartr 5 hours ago 3 replies      
The local governments invariably claim it's not about the money; it's about safety or some such. I want an amendment that puts all the money from fines, seizures, etc. toward the national debt. That's the closest thing to a black hole for money I can think of. Watch them scream then. Not about the money, my ass.
lostcolony 5 hours ago 4 replies      
'9/11 caused a lot of officers to realize they should be out there looking for those kind of people, said David Frye, a part-time Nebraska county deputy sheriff who serves as chief instructor at Desert Snow and was operations director of Black Asphalt. When money is taken from an organization, it hurts them more than when they lose the drugs.'

Wait, what? I was unaware that 9/11 was perpetrated by drug traffickers.

tomohawk 5 hours ago 2 replies      
Near the end, they talk about how police can use psychological pressure to keep someone at the side of the road, not telling them they can leave any time the initial traffic stop is over. They then make 'productive' use of that time.

Why you should never talk to police: http://www.youtube.com/watch?v=6wXkI4t7nuc

runeks 21 minutes ago 0 replies      
> Those laws were meant to take a guy out for selling $1 million in cocaine or who was trying to launder large amounts of money, said Mark Overton, the police chief in Bal Harbour, Fla., who once oversaw a federal drug task force in South Florida.

If that was your intention then why on earth did you allow seizures from civilians? Seems to me that the best way to seize a million dollars from a cocaine dealer is to arrest him if you have reasonable suspicion, and if he's convicted, seize the money.

But of course, it's a lot easier to just take his money without being able to prove your suspicion. :\

clarry 4 hours ago 1 reply      
It is sad. Yet people like Gavin Seim and Adam Kokesh get a lot of shit for standing up.

"have you seen the video where the guy just keeps repeating, 'Am I being detained? Am I free to go?' what an asshole"

And anyone defending those people is instantly labelled a gun freak.

covercash 5 hours ago 0 replies      
Apparently in Philly the DA will seize entire homes over a $40 drug bust: http://www.cnn.com/2014/09/03/us/philadelphia-drug-bust-hous...
fiatmoney 2 hours ago 0 replies      
There is a word for armed men who prowl highways looking for valuables to seize.
briandh 25 minutes ago 0 replies      
People upset by this (as I am) may be interested in EndForfeiture.com, a very slick brochure-style site you can send to your friends.

The organization behind it is the Institute for Justice, a public interest law firm that has performed ample pro bono litigation against civil asset forfeiture abuse.

If you choose to donate to them (which I encourage), note that they do legal work in other areas that you may or may not agree with, so you should take a look at that first.

(Note: I am not affiliated with IJ, I just admire their work.)

marquis 5 hours ago 0 replies      
The New Yorker's take on this last year: http://www.newyorker.com/magazine/2013/08/12/taken
solutionviatech 1 hour ago 2 replies      
Can we solve part of this problem through a smartphone app?

Here are my initial thoughts on how it would work, feel free to comment on better ways to implement it / other features

1) If you get pulled over by a cop, you start up the smart phone app and place the phone in the windshield mount (the mount should have power so the phone battery doesn't die)

2) When the app starts, it immediately starts recording (or streaming) audio and video

3) The app quickly connects/matches you to a real lawyer versed in your state's laws

4) Your newly matched lawyer can listen in on the subsequent conversation between you and the police officer. Or alternatively, the lawyer could act as a buffer and do all the talking with the police officer.

Benefits of the app:

5) Since the police officer knows he's being recorded, he may act more politely and more appropriately.

6) If the cop does try to "play games" such as holding the person without arresting, then the lawyer can quickly "step in" and ask the cop if his client is free to leave or whether he is under arrest.

Problems that I'm not sure how to solve:

7) Maybe the cop will tell the driver to step out of the car and around to the back of the vehicle - with the motivation of getting out of earshot of the lawyer/smartphone app.

8) Looking through a cynical lens, this app may face pushback from police unions opposed to it, looking for ways to shut it down.

loupereira 5 hours ago 0 replies      
In an era of budget shortfalls and market-based solutions, the American government is quickly becoming a for-profit enterprise at all costs.
shin_lao 4 hours ago 7 replies      
I don't see what the problem is.

In France, if you carry more than 10,000 in cash, you need to make a customs declaration.

It's also illegal to buy something for more than 3,000 in cash.

lukasm 5 hours ago 1 reply      
Well, I guess this is one extra reason to stay in Europe and start business here.
hitchhiker999 5 hours ago 0 replies      
I cannot believe this is happening - that is truly insane.
coldcode 5 hours ago 2 replies      
All that will happen from this is that criminals will find ways to move money without cash (like bitcoins, etc) and the only people caught will be non-criminals or really stupid criminals. Everything evolves in reaction to pressures.
einrealist 2 hours ago 0 replies      
So what counts as proof that the money is not linked to any crime? Does an envelope sealed by a notary, with a letter that explains the purpose, count?
DanielBMarkham 5 hours ago 0 replies      
Those laws were meant to take a guy out for selling $1 million in cocaine or who was trying to launder large amounts of money, said Mark Overton, the police chief in Bal Harbour, Fla., who once oversaw a federal drug task force in South Florida. It was never meant for a street cop to take a few thousand dollars from a driver by the side of the road.

I love watching how systems of people operate. Fascinating stuff.

Over and over, what I see in U.S. history is somebody who has a real problem that needs solved -- perhaps a gnat is flying around the house. The problem looks really sexy on TV. Politicians can thump the table and get lots of votes.

Then these politicians get elected and suddenly we're building industrial flame-throwers to get rid of gnats. "But it's only for the gnats!" they say. And it is -- for about 20 years. Then it slowly becomes a tool like any other.

Part 3 of this story is the most interesting. It's where the rest of us figure out how we've gotten screwed and start clamoring to have something done about it. But nothing happens.

Democracy's bug? That every generation feels like it is special, and that the problems it is having are special. So they "have" to start heavily mucking around with the system. Most every time, these fixes work -- until they don't. Then we're stuck with them forever.

I don't have an answer here, just venting.

a3n 5 hours ago 0 replies      
The United Shakedowns of America.
kohanz 3 hours ago 9 replies      
This is by no means a justification for these seizures, but in reading this I can't help but wonder in what legitimate situations one would actually need to carry such large amounts of cash. Would a money-order or bank-certified check not do the job, or would those be seizable by the police as well?

Again, I say this more in a "how can I avoid this type of situation" sense, rather than implying that there was any wrong-doing on the part of the cash carriers.

guard-of-terra 5 hours ago 2 replies      
Now that's a Kin-Dza-Dza grade police force.
rubyfan 5 hours ago 0 replies      
Interesting article, but I only made it halfway through because the WP mobile site is so horrendous.
edw519 5 hours ago 4 replies      
He was carrying $75,000 raised from relatives to buy a Chinese restaurant...

They took $18,000 that he said was meant to buy a used car.

...was stunned when police took $17,550 from him during a stop in 2012 for a minor traffic infraction...

The deputy found $75,195 in a suitcase in the back seat...

They were carrying $28,500 in church funds meant for the purchase of land...

No doubt there's injustice that needs to be addressed, but it's tough to have much sympathy for idiots. There's simply no logical reason to travel with that much cash. These people should feel lucky that the police, and not someone else, took their money.

icantthinkofone 5 hours ago 10 replies      
This story is so unbelievably biased and one-sided that it confirms my suspicions about the remaining value of the Washington Post.

Carrying $75K in cash around should raise anyone's suspicions as to the purpose, no matter what the carrier claims. Think about it. Why would YOU carry $75K in cash?

The obvious and well known reasons are, it's the most secure way to fund drug and terrorist cells without leaving a trail. The Post even acknowledges that!

"There is no question that state and federal forfeiture programs have crippled powerful drug-trafficking organizations, thwarted an assortment of criminals ... "

But not until they make their accusations and end the above sentence with:

"... and brought millions of dollars to financially stressed police departments."

Note that only a sixth of those seizures were ever challenged but the Post claims it's only because it's too expensive to do so. They do NOT question whether it's because it's drug or terrorist funding (and I will state that it is).

Just unbelievable what these formerly trustworthy newspapers are churning out nowadays.

Mojang and the Bukkit Project
42 points by bane  3 hours ago   29 comments top 7
nightpool 58 minutes ago 1 reply      
EDIT2: I might have been a little harsh here. I was missing some context that I've been avoiding over the past few weeks. On further reflection, I have nothing but anger for the way Mojang has handled this situation. Even though they claim that they "own Bukkit outright", completely devaluing the many, many, many community contributions made, they continue to refuse to support the project in even the most trivial ways, and I have nothing but respect for many of the community leaders they've driven away, including @mbaxter, @EvilSeph, @amaranth and many more. Mojang is absolutely to blame for this, no matter how childish Wolverness' behavior may be. I've left my original comment below, to give some context on the complexities of the situation, but I can maybe understand where Wolverness is coming from at this point.

This is my perspective as someone who was once inside the Bukkit community as an outside developer (not one of the core team): Wolverness has been a very toxic maintainer of the project, especially when it comes to responding to pull requests. Either your code complies 100% with the contributing guidelines (including confusingly written and sometimes unwritten rules about spacing, formatting, naming and pull request formatting) or it gets rejected out of hand. Any mention of forks of the Bukkit project (such as Spigot, Glowstone or SportBukkit) could get you banned from IRC and the forums, and your comments deleted on GitHub. This on top of many, many instances of just caustic behavior toward interested developers and contributors.

I'm still subscribed to the projects on Github, and I was never burned by these rules personally (so these feelings are not out of spite--they're out of sorrow for the developers who were turned away) but I don't think I'd ever have a chance of getting any non-trivial PR integrated into the project. The standards were just too insane. I'm no fan of how Mojang has been handling the minecraft community recently, but anything that would drive this guy out is just fine by me.

(This takedown is BS btw. CraftBukkit has ALWAYS linked against the official Minecraft server library--a release of CB made by a Mojang employee doesn't change ANYTHING about the GPL status of Minecraft. Note that the proprietary Minecraft server DOESN'T link against Bukkit in any way, shape or form)

dang 1 hour ago 1 reply      
fencepost 57 minutes ago 0 replies      
A couple additional notes:

1. Many (most?) of the servers running with Bukkit also had some form of payment for items, skills, ranks, etc. that can affect game play, also known as "pay-to-win." Mojang recently announced that they were going to be cracking down on this since it's a violation of the license, though I don't know what, if any, enforcement has actually happened. Enforcement was due to start August 1.

2. Mojang has a couple of multiplayer options without Bukkit - the Minecraft Server piece that is part of this discussion and Minecraft Realms. If Bukkit is dead (which may be the case no matter what happens because W. is not the only outside developer who could pull this kind of stunt), there are still multiplayer options and Mojang could likely implement some of the features currently in Bukkit but under their control - perhaps under a licensing model that lets them sell the server software as well.

Basically this guy has managed to shut down something being used by folks with business practices that Mojang was already having some issues with, while leaving available some less feature-rich options fully approved by (and controlled by) Mojang. The odds of that resulting in their giving in and open-sourcing something that's included in Minecraft as a whole are somewhere around nil - the worst case for Mojang is that the Minecraft server community that they're not a direct part of takes a feature and popularity hit for some time, while they get to point at this guy as the reason why.

kevinpet 2 hours ago 1 reply      
I think the game is the following:

1. Bukkit is GPL, developed without help from Mojang, but with a questionable use of the decompiled source.

2. Mojang likes Bukkit, hires some of the devs and buys the name. And I'll assume that these core devs would be willing to assign copyright to Mojang.

3. (Eventually) One of these core devs, now a Mojang employee, makes a new release of CraftBukkit, including Bukkit and linked against the Mojang official server blob.

Although the timing is different, this is exactly the situation with GPL libraries like Readline. You can't use them in a proprietary product.

On the other hand, I suspect if this went to court, Mojang could argue that since that's all Bukkit ever did, and this dev contributed to it while that was the case, the license should be interpreted as including something like the Classpath exception, because it's absurd to assume that people wrote code and distributed it but did not legally intend for it to be used in exactly that way.

binarymax 2 hours ago 3 replies      
Hard to tell what's going on here - but it looks like the developer of a mod (Bukkit) has a large enough userbase that he's using it as leverage, trying to blackmail Mojang into open-sourcing the official Minecraft server.

--EDIT-- fixed spelling.

Shank 2 hours ago 0 replies      
Here's the DMCA request that was sent to GitHub: https://github.com/github/dmca/blob/master/2014-09-05-CraftB...
Airhat 2 hours ago 1 reply      
First, has HN really fallen so far as to link slashdot of all things?

Second, a quick explanation of the issue, as I have it: Bukkit is a server made by the community to make up for the simplicity of the official server. It uses deobfuscated, reverse engineered Java code from the official server, and has never been given the actual source from Mojang. While Bukkit was sort-of folded into Mojang (they own the name, and a few of the developers went to work for Mojang), no one at Mojang is paid to work on Bukkit and they still don't get source access. Now a disgruntled Bukkit dev, Wolverness, has thrown out this DMCA request in some sort of retaliation / blackmail attempt.

uiGradients - Beautiful coloured gradients
66 points by jonphillips06  3 hours ago   9 comments top 7
shurcooL 53 minutes ago 0 replies      
These are pretty and simple. Bookmarked for future use.

It was only 15 days ago when I changed the background in my app to use a gradient [1], and I still think it makes it look much nicer.

[1] https://github.com/shurcooL/Conception-go/commit/92ed6d952f0...

Igglyboo 32 minutes ago 0 replies      
FYI: if you click the "Add Gradient" button while the "Get CSS Code" dialog is still visible, the "Add Gradient" dialog will pop in behind the CSS dialog, making it invisible.
cardamomo 2 hours ago 0 replies      
Yum! There are some really great-looking gradients here. I would love to have an easy way to see them at different scales, though. I'd want to see, for example, how a gradient that looks great over the full width of the browser window looks as the background of a smaller element.
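For anyone wanting to try the scale point above at home: the same two-stop gradient reads very differently stretched across a viewport than compressed into a small element. This is a minimal sketch; the class names and color stops are made up for illustration, not taken from the site.

```css
/* Full-viewport background: the color stops are far apart,
   so the transition is gentle. */
.page-background {
  min-height: 100vh;
  background: linear-gradient(to right, #348f50, #56b4d3);
}

/* The same gradient on a small element: the whole color range
   is squeezed into ~300px, so it reads far more abruptly. */
.card {
  width: 300px;
  height: 120px;
  background: linear-gradient(to right, #348f50, #56b4d3);
}
```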
quaffapint 2 hours ago 0 replies      
Really simple to use - nice job. Maybe a way to throw in a sample page with some different text/typography/colors/sizes so you can really see how it would work.
aw3c2 1 hour ago 1 reply      
I see lots of banding - should I check my display, or is that normal?
smu3l 2 hours ago 1 reply      
The first one I landed on was called "Influenza". Not sure about that name. But I like the colors.
rrodriguez89 2 hours ago 0 replies      
Startup-like templates become more generic every day.
A new foundation for mathematics
5 points by magoghm  3 hours ago   1 comment top
jesuslop 1 hour ago 0 replies      
The Mizar project has been verifying a good share of math results for decades; this is not to diminish HoTT, but to clarify the article.
JavaScript to PHP source-to-source transpiler
5 points by endel  1 hour ago   discuss
Lessons I learned from the failure of my first startup, Dinnr
324 points by jefflinwood  1 day ago   156 comments top 39
chvid 20 hours ago 11 replies      
I am reading through this and it is a good, recommended read. However, I am getting slightly annoyed with the academic attitude of the author. Like Batman in the cartoon, I want to slap him. Not for "sloppy market research", but for using the term "market research" at all.

(This is essentially the first and maybe the second point reiterated.)

Suppose you were a cook with your own restaurant. How do you figure out what works and what doesn't?

Do you go and do market research, hand out questionnaires, run focus groups, do statistics, user-driven cooking or whatever?

Surely not. You create dishes, put the new ones in the menu as today's special. Then you figure out whether people like them or not.

How do you figure that out? Do you ask them? Of course you don't, because you know that if you ask "did you enjoy your dinner?" people will just be polite and tell you "yes - it was excellent - thank you very much". No. You look at whether they finished their plates or sent them back half-full. Whether you get any sales. Whether they come back for more.

Even my mother knows this. When she bakes cookies, she doesn't ask me "do you like them?", because she knows I will just say "yes - they are the best cookies in the world, mum". She asks me if I want another. And if I stay the night, she looks at whether I get up at 3 am to roam the kitchen for more of them cookies.

This is basic human behaviour but somehow lost in the academics of business administration.

The second thing is that I get the feeling that the author simply hasn't explored the huge "product space" between takeaway and ready-to-cook prepackaged ingredients with a recipe at all. How about: this is essentially takeaway, but you have to heat the curry in your microwave and then add the fresh coriander yourself - the upshot is that it will last a week in your fridge should you not eat it all tonight. Or maybe: it is protein shakes and vitamin pills for a week - no problem - it stays "fresh" for 5 years. Or: it is the latest Jamie Oliver book with ingredients to get you started on the first three dishes.

briholt 22 hours ago 7 replies      
I can't help but have a very negative reaction to this, not necessarily at the author, but at the startup ecosystem as a whole. These are posted as "lessons" as if they were difficult to learn, when this really should be labeled "things no responsible person would ever not know before even considering raising money." Of course you should be self-critical of your product, of course you should test with real customers, of course you need a technical co-founder, of course you should pick a market with a bigger niche than tech-savvy-foodie-cooks-in-major-cities. Every mistake was a classic marketing grad trying to do startup things in a bubble environment. These are the types of "lessons from my failed startup" that have been written a thousand times before. I fear over-exuberant and irresponsible entrepreneurs will repeat these mistakes for as long as startups exist.
ufmace 15 hours ago 1 reply      
Right after I read this, my immediate reaction was to feel kind of annoyed at the author. To put it a little superficially: no tech or design talent in house, no food industry experience, and not much marketing and sales expertise, so what exactly does he bring to the table here, besides really wanting a startup?

Also, this idea hits on one of those things that lots of people really want to believe that they would do - cook their own healthy, organic, gourmet, etc food - but very few will actually take much initiative to do on their own. That doesn't necessarily mean that it isn't viable, but it does mean that your big challenge is going to be marketing and conversions - getting people to go beyond saying things that basically mean "I really want to believe that I am the kind of person who would use something like this, even though I'm not", and actually buy the product regularly.

After thinking about it some more and reading some of the posts on here, this might be a viable idea, but it's going to need a lot better marketing to go anywhere. A few ideas, some stolen from other posts, mostly in an attempt to exercise my own marketing-think muscles:

Make the product stickier - try to sell a recurring plan of a meal a week that has to be explicitly cancelled. Or at least email people who have ordered regularly to suggest new meals for them. You do have new, promoted meals, right?

Have a spread of products, from things that just require heating, barely a step above microwavable, through mild prep, up to things that may require an hour or 2 in the kitchen to make. See which ones sell the best, and emphasize those.

Try to partner with anybody involved in cooking or recipes with an audience in the area. If they're on the web, make it super easy for them to submit ingredient lists for their recipes to you, then give them an easy way to put a link on their site for "Get the ingredients for my [whatever] delivered to your door today with Dinnr!", with affiliate payments for orders. Now you're helping them monetize their sites too, so they have a good incentive to work with you.

In person, too. There's probably some cooking classes in the area. Get those classes to plug your site for their students to get ingredients for an affiliate fee.

Get some cooking experts on staff, and start producing your own youtube videos and blog content on how to cook things, with an emphasis on explaining the basics for newbie cooks. It would probably help you make sure you're actually getting good quality ingredients too.

Make social connections. Make a way for people to tell their friends on FaceTwitInstaPint that they just cooked X, and it was awesome! It might even be good to make it so that they can only order basic things at first, and they get points or something from cooking them, which will eventually make them eligible to order the more advanced things. They can brag about how many Dinnr points they have. They can see that they're either getting more than the other guy, so they can think they're better than them, or they aren't getting as much as somebody else, so they need to order more stuff. Then you can suggest new meals, too, based on their skill level and their preferences.

The real pain point isn't shopping, getting ingredients, or throwing away unused ingredients. It's wanting to be seen to their social circle as the kind of person who cooks awesome stuff. Figure out how to hit that, and you'll probably sell.

l33tbro 20 hours ago 3 replies      
Post-mortems like this make me sad.

Not for the founder's bank account per se, but for the fact that he has all the wrong takeaways from his startup experience.

You started off well. You did the market research. Great. You've clearly gotten people interested in the service. Even better. You demonstrated in your post that a clear-headed approach was taken and you objectively determined people would want your product.

But then ... what ... you just buy a bunch of groceries and think people will just engage with your service? Not once in your start-up debrief do you mention your marketing approach. That's a huge red-flag for me about how you thought about your business.

Lesson 8. Marketing.

I believe this idea would have worked if you had decent marketing. Do I live in England? No. But I still know that the Brits are getting a lot more concerned about health. Even Jamie Oliver is on it, with his school dinners and other social ventures. It's a sentiment that has cultural traction, and I think the idea here is great.

But marketing ... oh yeah that thing:

- What was your overall engagement strategy?

- What genuinely creative concepts did you come up with that would make your service compelling and sexy to your people?

- How did you demonstrate how to use the service?

- How did you make others see the value your market research subjects saw?

- What compelled people to emotionally invest in your company as a consumer?

OP, you're a smart guy - so I'm using your failure as a case-study for a greater trend I see amongst startups around "pain points". Sure, pain points are important. But people forget that marketing, done well, is the creation of pain points.

I believe most startups die because founders don't have a vision beyond the mechanics of the business. It's sad to see post-mortems like these that point the finger back at the idea - which was inspired and had legs. I'm not sure what the marketing strategy was here (OP didn't think it was important enough to tell us), but I'm assuming that, like most, he was happy enough with iStock artwork and a quirky explainer animation that the public is so thoroughly bored of.

tchock23 19 hours ago 3 replies      
It's a bad idea for founders to be doing their own market research. The prevailing advice from "startup gurus" like Steve Blank is that you must "get out of the building" and talk to customers face-to-face. Here's the problem with that advice (as Dinnr found out):

1. Very few people will tell a founder their idea is bad right in front of them; and

2. Founders suffer from the worst case of confirmation bias ever. They are only looking for positive signs that their idea is good.

There are ways around this error if you know market research methods well enough, but unfortunately the prevailing wisdom in the startup community is the only way to do thorough market research is for the founders to "get out of the building" and do in-person interviews.

It's refreshing to see articles like this where the founder realized their errors and didn't just outright blame the process of market research.

rcarrigan87 19 hours ago 1 reply      
This seems like a fairly easy idea to test without any capital investment but a bike and a landing page.

1. Throw up landing page explaining service

2. Have people put in orders via email or form submission

3. Ride bike to grocery and deliver items

4. Collect cash or use Square upon delivery

Market test complete. Failure with minimal $ committed.

It worries me that an idea that could be done for so little capital is able to raise money. These are heady times.

Question to OP: did you ever consider targeting caregivers who would like to cook for their loved ones but can't leave the house for significant periods?

girvo 11 hours ago 1 reply      
Amazingly negative comments on this. How interesting. I thought the post had value, even if these are mistakes that people make over and over... which is interesting, to my mind, and means I should try extra hard not to fall prey to them. It seems like that's easier said than done, however, no matter how "obvious" and "simple" it is...
burgers 22 hours ago 5 replies      
> even if your company isn't tech-heavy (such as Dinnr).

I feel like this would be a giant red flag to me. You are essentially a logistics startup, and you believe this is not "tech-heavy". Logistics is likely one of the most complicated tech-heavy problems being solved right now.

Things like pricing (30% margins on grocery food sounds very high), recommendations, etc. Was there ever a way for food bloggers to have their recipes automatically pulled into Dinnr for readers to purchase?

This is almost a 100% tech company. I wonder how much underestimating that, combined with the founders' possible (I'm guessing) lack of tech knowledge, contributed to the failure.

DontBeADick 20 hours ago 1 reply      
> b) People are too optimistic about their future behaviour.

I read about a focus group once where a company was asking consumers about different color tea kettles. They had tea kettles in a dozen colors and asked people which one they would purchase. The responses were split fairly evenly among all the different colors. At the end of the day, the people who participated in the focus group were allowed to take home one of the tea kettles for free. Almost all of them chose white or black.

idlewords 21 hours ago 0 replies      
I like post-mortems but I vehemently disagree with the author's approach of trying to draw more general conclusions, to appeal to a startup audience.

What specifically happened? What were the recipes like?

Did anyone like the food?

Your experience is the most valuable thing you gain from a failed experiment. I urge you not to try to distill it into business platitudes. Be specific and descriptive!

tptacek 21 hours ago 2 replies      
This post links to an e-book by @robfitz called _The Mom Test_, which I hadn't known about before. It's fantastic and totally worth the price; also, that guy can write. I can see so much of my own bad pitching in the "how not to do it" examples he gives.
bhaile 23 hours ago 1 reply      
Thanks for writing it up. Processing what went wrong will lead you to better opportunities up ahead.

Interesting thought on #5: "As one of my first investors later told me, you must have developers on your core team in the same room."

I think this is never the wrong call, and I am surprised when some startups take their devs offshore.

hartator 19 hours ago 1 reply      
"I will try not to go too much into the Dinnr specifics. Most of the readers of this post will not be interested in Dinnr itself."

It's actually the interesting part. Advice and lessons don't have a lot of value without context.

mikpanko 5 hours ago 0 replies      
Looks like Blue Apron is doing much better with the same idea in the USA - according to CrunchBase they raised $50M in round C several months ago (http://www.crunchbase.com/organization/blue-apron).
mbohanes 17 hours ago 1 reply      
Hi all, I am the author of this post. Thank you very much for your comments, some of which were really thought-provoking. I will address them in a separate blog post when I find the time, probably next weekend. @mbohanes
pbreit 16 hours ago 0 replies      
Count me among the many wondering if the correct lessons were learned. I still have the feeling that the basic concept is OK and that the execution was lousy. He needed 10 very good, easy-to-provide/make recipes at a decent price point. And then market/sell the bejeezus out of it.
gioele 1 day ago 1 reply      
> No way it would be an investment that would give investors a 10x return.

I think this kind of expectation ruins good business ideas.

A thing like Dinnr may have worked well if run for years, not just tested out for 18 months. It may not have produced a 10x return, but it may have been enough for four people to live on and, maybe, then expand to another city with four more people making a living at Dinnr 2.

james1071 4 hours ago 0 replies      
I am in London and just don't see who the target customer was meant to be.

If I wanted to buy some ingredients, I would go to the supermarket or (usually a better choice) one of the many different foreign grocers that are 5 minutes away.

markdown 14 hours ago 3 replies      
An entrepreneur in New Zealand created such a service in March 2013. It's now worth $27m and is expanding into Australia.



mgkimsal 23 hours ago 2 replies      
"Having such detailed feedback wouldnt require diving into the data, but having someone spend an hour stress-testing my thinking and assumptions would have been gold dust and could have prevented a lot of effort wasted."

Hrm... I get asked (on occasion) to do this sort of 'stress test' on someone's idea - and usually they don't want to hear the negatives anyway. Sometimes it can morph a not-great idea into a better/great one, but sometimes the reality is "this should be dropped". I wonder whether, if the author had actually received some of that hyper-critical thinking on Dinnr earlier and the feedback was "this is bad, don't do it", he'd still have pressed on.

vinceguidry 22 hours ago 1 reply      
> I really would have needed a critic who said:
> Look, you have the following problems:
> 1. Show me your market research.
> 2. You do realize there's no comprehensive online supermarket in Sweden, right?
> 3. What are your assumptions about the customer's problem?

Yeah, after trying, I've found it's very difficult to get people to listen to this kind of advice and give it the weight it deserves. I feel like most would-be entrepreneurs either can already think in this manner themselves and so don't need me or anyone else to do it for them or are destined to learn how, expensively.

sgdesign 17 hours ago 2 replies      
Something I see missing from a lot of these startup discussions is the ideas of identity and community.

Sure, "solving a problem" is important. But Nike isn't solving a problem that other shoe companies haven't solved a thousand times over. The reason they're successful is that buying into their brand makes you part of a bigger community, and that community has a positive identity attached to it.

Same thing with a startup like Exposure (https://exposure.co/). Sure, the product is great, but they're also fostering a community and projecting a very specific image. Without that special "something" they'd be no different from Flickr or the countless other photo sharing services out there.

This brings me to Dinnr. Did you try to grow a community of passionate members organically by writing a blog, hosting events, being active on forums like whatever the Hacker News for cooking is?

Or did you just try to "solve a problem"?

porker 11 hours ago 1 reply      
If anyone wants to hire a critic (per #4) I am your man! I find entrepreneurs don't like hanging out with me because I am "not positive" enough (you're meant to surround yourself with upbeat go-do people, right?). However, this pessimism also translates to realism. I'm not always right - many times too pessimistic, especially about new ideas - but many times right about why an idea isn't going to fly and where the weaknesses are.

If you have ideas how to make that skill marketable (or even better received)...

dusklight 14 hours ago 0 replies      
Uh, so the author ventures the hypothesis that no one used his product because there was no demand. Can I venture a second hypothesis? No one used his product because it was too expensive. How much did things cost, and how did Dinnr experiment with pricing?
freddyduarte 6 hours ago 0 replies      
My only question is: why would anyone invest in Dinnr when it had an insignificant client base and no profits? OP must be a great salesperson or have great connections.
dkarapetyan 22 hours ago 2 replies      
This is only one level removed from what Munchery does, and I actually would pay for this service if it were in SF and they also offered on-premise cooking lessons or something along those lines. This is still, in my opinion, a pretty good idea and has wings.
porker 11 hours ago 0 replies      
I disagree strongly with the generalisations in #2. I do not find that people in the UK fit them. The education model sounds very similar to Austria's.

London Business School is world-class and the self-starters who go there are self-selecting.

gingerlime 23 hours ago 1 reply      
Funny, I just bumped into a startup[0] that seems to do the same thing, and looks like it received some funding[1]...

I wonder how well these guys are going to do.

[0] http://signup.marleyspoon.com/ - it's operational in germany at https://www.marleyspoon.de/

[1] $1.5M Seed according to https://angel.co/marley-spoon/

usav 23 hours ago 0 replies      
Thanks for sharing this! I appreciated how candid you were. I'm sure a lot of startup founders are taking a look at what they're building right now to look for the same patterns.
endzone 6 hours ago 0 replies      
This guy has some funny ideas about "Anglo-Saxon" education. British education is by and large very much like the Austrian model, in delivery at least, though we are not as keen on specificity.
xux 18 hours ago 1 reply      
> However, 5 months later, after the new website was up and running

Why take 5 months to make a new site?

mrfusion 23 hours ago 2 replies      
So why is plated doing so well?
spydertennis 21 hours ago 0 replies      
You should have just delivered multiple meals at once. With an accompanying schedule and appropriate food that wouldn't go bad before its time.

Then you are removing the inefficiency of buying food items that you only use portions of and eliminate things going bad.

liuw 19 hours ago 0 replies      
The "Turd Polishers" picture really makes me laugh.
sblank 18 hours ago 1 reply      
Hmm. Was this written in 1999? Or perhaps the notion of the Lean Startup is not understood in London?

Seriously, the author seems completely unaware that his "lessons" could have been avoided if he would have simply looked at the growing body of literature, on-line classes, Startup Weekends, blogs by Eric Ries, Alexander Osterwalder, et al that now exist.

Even in writing these lessons he seems to be unaware of them.

What's missing?

learnstats2 21 hours ago 0 replies      
All the A/B testing in the world feels desperately pointless when a founder claims that the #1 lesson learned was a 7th grade statistics lesson. https://en.wikipedia.org/wiki/Suggestive_question
mrfusion 23 hours ago 1 reply      
What's an online supermarket?
jqm 15 hours ago 0 replies      
I'm sorry to hear all the "psh... of course it didn't work, he should have done this and that!"

I thought this was an interesting read. Armchair after-the-fact analysis when you weren't the one involved (which can cloud judgement) is pretty easy.

Thanks to the author for taking the time to write up his experiences.

Bypassing a Python sandbox by abusing code objects
95 points by JoachimS  17 hours ago   20 comments top 5
bryanh 13 hours ago 7 replies      
Building a truly safe Python sandbox (well, in CPython at least) is widely considered a fool's errand [0]. However, a Python sandbox can be relatively safely done in two ways:

1. By completely reimplementing Python at a lower level and intercepting system calls, like PyPy's sandbox feature [1].

2. By utilizing other, more mature OS-level constructs (i.e. think containerization like LXC (which is imperfect), stacking OS features like seccomp/chroot/etc., or better, true virtualization like Xen).

Ideally, you'd combine them both and then run them on a "dumb box" which is just a REST API with no other keys or system access. The key to designing moderately secure sandboxes is just accepting that there are angles of attack you haven't considered yet, so design in layers.

If you find yourself inspecting code to decide if it is safe, you are fighting a losing battle... I'd love to hear of any success stories here though!

Great detailed write up of the pitfalls @op, thanks.

[0] https://github.com/haypo/pysandbox/ or https://lwn.net/Articles/574215/

[1] http://pypy.readthedocs.org/en/latest/sandbox.html

bobbyi_settv 1 hour ago 0 replies      
> However, that requires accessing the "func_code" member, which is explicitly blocked.

But you've already shown that you can build the string "func_code" using your xor code. So you can access the member using getattr/setattr:

    getattr(func, str_containing_func_code)

    setattr(func, str_containing_func_code, new_code)
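To make that concrete, here's a minimal sketch of the bypass (Python 3, where the attribute is `__code__` rather than Python 2's `func_code`; all names like `build_name` and `target` are illustrative, not from the article):

```python
# Sketch of the bypass described above: build the blocked attribute
# name at runtime with XOR, then reach it via getattr/setattr.
# Python 2 called it "func_code"; Python 3's equivalent is "__code__".

def build_name(xored, key):
    """Reconstruct a blocked string from XOR-obfuscated bytes."""
    return "".join(chr(b ^ key) for b in xored)

def target():
    return "original"

def replacement():
    return "injected"

KEY = 0x55
obfuscated = [ord(c) ^ KEY for c in "__code__"]  # name never appears literally

attr = build_name(obfuscated, KEY)               # reconstructs "__code__"
setattr(target, attr, getattr(replacement, attr))
```

After the `setattr`, calling `target()` runs `replacement`'s code object, even though the filter never saw the string "__code__" in the source.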

zzleeper 13 hours ago 2 replies      
I'm wondering if it's possible to get a simpler/better attempt at sandboxing by just inspecting the generated bytecode, instead of focusing on the .py text file.
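A rough sketch of what that could look like: compile the untrusted source and walk the resulting code objects' name tables instead of grepping the text. The blacklist here is illustrative, and the caveat from the parent comment still applies - blacklisting is a losing battle:

```python
# Inspect compiled code objects rather than source text: collect every
# name the bytecode references, including names buried in nested code
# objects (lambdas, comprehensions, defs), and check them against a
# (hypothetical) blacklist.

BLOCKED = {"open", "eval", "exec", "__import__", "getattr"}

def names_referenced(source):
    """Return every name referenced by the compiled code, recursing
    into nested code objects found in co_consts."""
    found = set()
    stack = [compile(source, "<user>", "exec")]
    while stack:
        code = stack.pop()
        found.update(code.co_names)
        for const in code.co_consts:
            if hasattr(const, "co_names"):  # nested code object
                stack.append(const)
    return found

def looks_safe(source):
    return not (names_referenced(source) & BLOCKED)
```

This catches things the source-level filter misses, e.g. `f = lambda: eval('1')` is rejected because the lambda's nested code object names `eval` - but attribute chains and runtime-built strings can still smuggle capabilities past any such check.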
thu 8 hours ago 0 replies      
I was able to connect to any database from a multi-tenant ERP written in Python. The idea was to get hold of the class of the connection object and re-instantiate it, passing a new database name as a parameter. That ERP lets users (of different tenants) write Python code in various places. The kind of code you could write was already restricted, but not sufficiently. Now all the dunders are forbidden, and things like getattr() are also forbidden, which makes the trick used in that blog post impossible. One of the funny things I had to do was use lambdas to name intermediate values, because statements were prohibited.
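An illustrative reconstruction of that escape (the class and attribute names are made up, not from the actual ERP): even with dunders and getattr() blocked, any object in scope can leak its class, which can then be re-instantiated with attacker-chosen arguments.

```python
# Hypothetical stand-in for the ERP's connection object.
class Connection:
    def __init__(self, dbname):
        self.dbname = dbname

conn = Connection("tenant_a")   # object the restricted code is handed

# type() is often overlooked by blacklists that only block __class__.
cls = type(conn)
other = cls("tenant_b")         # fresh connection to another tenant's DB
```

The general lesson matches the article: blocking specific spellings (`__class__`, `getattr`) doesn't remove the underlying capability, it just forces a different route to it.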
orf 9 hours ago 0 replies      
Very interesting read, I did something similar[1] on a Microsoft site. I didn't think of creating a code object from scratch though, great idea.

I did a web app test once for an application created by IBM. They offered the ability for administrators to run Python code in a 'restricted' sandbox to manipulate data. The app was Java so it was run through Jython, and they locked down all the usual suspects like open(), file() etc. But because it was Jython you could just use the Java io.* classes and bypass all their restrictions.

1. http://tomforb.es/breaking-out-of-secured-python-environment...

Reverse engineering a counterfeit 7805 voltage regulator
218 points by carloscm  1 day ago   58 comments top 9
quarterwave 14 hours ago 0 replies      
This is a beautiful post, and I am reluctant to try and say anything profound. Still, all of HN's a stage so I'll attempt a brief explanation for the general reader about why we need voltage regulators.

A logic chip like a microprocessor is designed for a particular supply voltage, if this voltage drops too much the logic circuitry will switch falsely. Say we had only a capacitor and we tried to power the logic chip with it. As the chip draws current the capacitor discharges - this is because current is movement of charge, so the charge (and energy) can come only by draining the capacitor. For an ideal capacitor the voltage is directly proportional to the charge across it, so as the charge drains the voltage falls. To hold the voltage constant we need to keep 'topping up' the capacitor with charge. This is what a voltage regulator does - it uses a negative feedback loop to sense the capacitor voltage and when that voltage falls the circuit provides just the right amount of charge 'juice' for the top-up.

As we take the foot off the clutch pedal in a car, the load gets engaged to the engine and if we sense a stall we press the gas pedal a bit. That's the imagery of a voltage regulator in action.

The capacitor plays a key role because the regulator feedback loop isn't very fast - one trouble with fast feedback circuits is chatter, or responding to every blip. Negative feedback circuits are designed to be more like ship wheels - they like to steer sedately and not respond to every excited cry from the mast. But what happens if a current blip arises because a logic circuit block turns on all at once (in response to some block of code)? That local current blip is provided by the capacitor, it acts like an ATM to provide local draws - but it still depends on the regulator to top it up.

In fact you can think of a battery as a capacitor that tops itself up via electrochemistry, it works as long as there are ions in the electrolyte. If instead of 'bandgap energy' we used the chemist's terminology of 'electrochemical potential difference', then the system similarity becomes evident.
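The top-up loop described above can be sketched numerically. This is a toy Euler simulation with made-up component values (100 uF capacitor, 5 V setpoint, a simple proportional regulator) - a caricature for intuition, not a model of a real 7805:

```python
# Toy model of the regulator feedback described above: a load drains a
# capacitor (dV = I*dt/C) while a proportional regulator "tops up" the
# charge whenever the sensed voltage droops. All values are invented.

C = 100e-6        # decoupling capacitor, farads
V_TARGET = 5.0    # regulator setpoint, volts
I_LOAD = 0.05     # baseline load current, amps
GAIN = 10.0       # regulator response, amps per volt of droop
DT = 1e-6         # timestep, seconds

v = V_TARGET
worst_droop = 0.0
for step in range(200_000):                 # 200 ms of simulated time
    blip = 50_000 <= step < 51_000          # 1 ms burst of 3x current
    i_load = I_LOAD * (3.0 if blip else 1.0)
    i_reg = max(0.0, GAIN * (V_TARGET - v))  # feedback top-up current
    v += (i_reg - i_load) * DT / C           # capacitor charge balance
    worst_droop = max(worst_droop, V_TARGET - v)
```

With these numbers the rail stays within roughly 15 mV of the setpoint even through the current blip; lower GAIN (a slower regulator) and the droop during the blip grows - which is exactly why the local capacitor's "ATM" role matters.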

kjs3 23 hours ago 2 replies      
This is, with all seriousness, the most informative thing I read this week. Took me right back to my abortive flirtation with EE as an undergrad. If things had been explained this clearly, I might have stuck with it.
dewitt 1 day ago 7 replies      
Super informative post. Thanks for sharing.

One thing I didn't understand was the author's comment that "I bought the part off eBay, not from a reputable supplier, so it could have come from anywhere."

A 5-pack of quality 7805's can be found on Amazon for $5, including Prime shipping, (e.g. http://amzn.com/B00H7KTRO6), so what's the incentive to buy parts of unknown provenance on eBay or the like?

I ask not being a hardware guy myself, so genuinely curious, as I've heard stories like this before.

Again, fantastic overview of the chip, though. I learned a lot.

[Edit: spelling]

joshu 11 hours ago 0 replies      
Ken, if you're gonna write stuff like this, I'm gonna continue to offer to fund your eBay spelunking expeditions.
MarcScott 1 day ago 1 reply      
Prior to reading this the only thing I knew with certainty about a 7805 was never to use it to pick up your PCB. They get a little toasty when shorted, and I've burned myself on them more often than on my soldering iron.
data-cat 1 day ago 1 reply      
Very interesting. I really liked the interactive chip guide.
bluedino 1 day ago 3 replies      
I know almost nothing about electronics - why don't the thin wires going to the die melt like a fuse? Are they just made out of the right material?
raverbashing 1 day ago 1 reply      
Well, the 7805 is sold by multiple vendors, so, it may be a counterfeit, or just a subcontracted part?

Very, very nice explanation of the 7805 though. This is the bread and butter of linear regulators.

baq 22 hours ago 0 replies      
It continues to amaze me how such a little thing can generate so much heat while doing useful work and operating normally.
IPv6 privacy addresses crashed the MIT CSAIL network
173 points by anderskaseorg  1 day ago   119 comments top 14
ay 22 hours ago 5 replies      
The "issues with IPv6" are with education, operation, configuration.

I personally ran WiFi networks with 8000+ wireless clients on a single /64 subnet (my employer's CiscoLive conference), and assisted/consulted in running the networks with more than 25000 clients on a single /64 subnet (Mobile World Congress).

The defaults kinda suck, and bugs may happen, but the statement "IPv6 is not ready for production" is wrong.

I'd be happy to volunteer a reasonable amount of time to work with the OP or others running a network of >1000 hosts to debug issues like this, time permitting, vendor independent (glass houses and all that).

There are a bazillion knobs in IPv6, and a lot of things can be fixed by just tweaking the defaults (which kinda suck).

Networks of <500-700 nodes generally don't need to bother much. A lot of times it's not optimal with the defaults, but it will work.


the seeming "charity" of volunteering the time isn't. I want to understand what is broken, so we can bring it up back to IETF and get it fixed + make better educational publicity to prevent folks shooting themselves in the foot. It'll make it to the stacks in another decade, but it will.IPv6 is powering nontrivial millions of hosts now - so the correct words to use "needs tweaking for my network", not "not ready for production". Let's see what the tweaks are and if we can incorporate them into the protocol, if necessary.

smutticus 18 hours ago 1 reply      
The title should probably read, "Bugs in JunOS caused network downtime."

This isn't really news. There are bugs in all routing and switching OS's. That's why they hire support people. This isn't me trying to rag on Juniper. I know lots of people who work for JTAC and they're incredibly smart folks. I'm sure they'll get this sorted out and fixed in JunOS and, like any bug, merged into upstream releases.

This isn't me trying to point out that IPv6 is infallible. There might be some design choices that the IETF made with IPv6 that were stupid, but mostly they got it right, and it's too late now to change most of them anyways.

This is just reality with new software. New software has bugs.

You can blame it on MLD, but really MLD is no more complicated than IGMP. You can blame it on NDP, but really NDP isn't much more complicated than ARP.

At a minimum, IPv4 required ARP to function. However, in reality it also required at least IGMPv2 - without IGMP, or some way to manage multicast, how are you going to get something like VRRP to work? Link-layer multicast is not new to IPv6.

praseodym 21 hours ago 0 replies      
We've also hit this Intel Ethernet driver bug, even though we don't have IPv6 deployed. Linux will send MLD packets on bridged ports by default, triggering the Intel driver bug on Windows machines.

With only two Windows machines saturating their Gigabit Ethernet connection whenever they went into standby, we managed to crash the university's switches big time (we're a group with our own VLAN within the university's network, so we make use of their network equipment).

Naturally, because the issue only occurs during standby, and usually users don't log off thus preventing Windows from sleeping, we first hit the bug during the Christmas holidays (2013). The culprit hosts were all in use for just a couple of months. In the end, it took a couple of hours to reproduce this bug during working hours!

We fixed it by using different NICs (we didn't want to rely on the Intel driver to be updated after a clean install; Windows Update doesn't have the fixed version), and by disabling MLD snooping on the Linux hosts, since we aren't yet using IPv6 anyways. This prevents the Intel bug from being triggered in our environment.

tgflynn 1 day ago 5 replies      
As someone watching from the sidelines I had no idea there were such major issues with IPv6. It seems like IPv6 has been out there for a long time (about 10 years) in terms of being supported by OS's and networking hardware, if not ISP's. So I would have thought that cutting edge institutions (like MIT) would already have years of experience with it and have worked out most of the kinks by now.

If this is not the case what does it mean for more widespread IPv6 adoption ? If such adoption is significantly delayed or stalled what will the consequences be, both for current Internet growth in the face of IPv4 address depletion and for new technologies like IoT ?

ghshephard 14 hours ago 0 replies      
Nobody on this thread really seems to be talking about one of the issues brought up in the IPv6 analysis (though I'm not sure if it caused their outages - any time I read a post mortem with the phrase "bridge loops" I usually don't look any further - that alone is enough to bring a network down).

If I read the post correctly, one of the roots of their problems seems to be that either (A) multicast packets being flooded over their network caused excess traffic, or (B) if they used MLD snooping to reduce the flooding, the switches they have only support 3,000 entries for multicast groups - which are quickly exceeded by the privacy IPv6 addresses generated by the hosts (each of which creates its own multicast entry, and some of their hosts had 10+ addresses).

Other than turning off privacy based IPv6 addresses, and moving to something like RFC 7217, is there a solution? Increasing the number of multicast entries on the switch to something larger, say, around 30,000 entries combined with reducing the length of time in which a privacy address is valid (and therefore requiring a group) from one day, to say, one hour?

mrb 18 hours ago 0 replies      
An issue glossed over by people is:

"the entire TCAM space dedicated to IPv6 multicast is only 3,000 entries"

Some mid-level switches normally have TCAM space for hundreds of thousands, or millions, of entries, IPv4 or IPv6. Maybe their vendor artificially crippled their line of switches, or maybe the switches were deployed with a configuration error. It is probably the former, though. Network vendors like to make you believe some features cost a lot to implement and that you really need their highest-level gear, when in fact even the biggest TCAM in silicon costs a few tens of dollars, at most.

AaronFriel 1 day ago 4 replies      

    I used Ubuntu as an example, but it is hardly the worst offender. We have
    seen Windows machines with more than 300 IPv6 addresses
Wow! I don't operate a very large network, but I do operate an IPv6 network and I've never seen one of our machines use more than 2 addresses. I feel like they've got some other configuration option or oddity going on that's causing a lot of these problems, but I am guessing they're much smarter than I am, so I don't know what to say.

Could someone elaborate on this? I've never seen this behavior on an IPv6 network, and I'm just running a server or two with radvd and no custom switch configuration.

MichaelGG 23 hours ago 2 replies      
What's the reasoning for dropping ARP? It seemed like a simple architecture. The post seems to indicate IPv6 requires a ton more hardware resources. And if Juniper doesn't have a basic feature like MLD snooping after all this time, uh? Shouldn't practically designing a high-volume switch be part of creating such a fundamental protocol? (I know designing 2 elegant implementations of other protocols would have fixed a ton of things in nasty protocols like HTTP - dumbass things like line folding and comments-in-headers.)

Is this a case of idiocy seeping through the IETF because they can? It's pretty easy to write something down on paper if you don't have to implement engineering and product management on the result. Or because you're out of touch with reality, like the source routing feature which was kept in IPv6 despite it only ever being a problem in IPv4? Or is this a case of the protocol being superior and vendors just being very lazy?

p1mrx 23 hours ago 2 replies      
Here's a proposed algorithm for making privacy addresses more manageable:


Essentially, the suffix is hash(secret | prefix), so your address is stable on a given network, but changes as you roam between networks.
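A minimal sketch of that scheme (same spirit as RFC 7217's stable, semantically opaque interface identifiers; the hash construction and formatting here are simplified for illustration):

```python
import hashlib

def stable_suffix(secret, prefix):
    """Interface ID = hash(secret | prefix): stable on a given network,
    but different on every other network you roam to."""
    digest = hashlib.sha256(secret + b"|" + prefix.encode()).digest()
    iid = digest[:8]  # 64-bit interface identifier
    return ":".join("%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2))

secret = b"per-host secret, generated once"  # hypothetical stored secret

home = stable_suffix(secret, "2001:db8:aaaa::/64")
work = stable_suffix(secret, "2001:db8:bbbb::/64")
```

Because the suffix is deterministic per (secret, prefix) pair, a host keeps one address per network instead of accumulating daily temporary addresses - which would also keep the switch's multicast group count bounded.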

spindritf 1 day ago 3 replies      
> the random address is changed regularly, typically daily, but the old random addresses are kept around for a fairly long time

I don't understand this part. I have Ubuntu machines in a network which is technically /48 but only one /64 prefix is announced by radvd and they all have only two addresses, one derived from MAC and one private/random changing over time. They certainly never have eight.

Are those previous addresses not visible in ifconfig or ip -6 addr show?

s_q_b 18 hours ago 0 replies      
This is now the second major network that I've heard ran into this exact problem. The update to Windows caused the end-user nodes to send out lots of IPv6 packets. The access layer switches went to full CPU utilization, and you ended up with packet storms across the network. There really should be an advisory about this.
walshemj 23 hours ago 0 replies      
It would have been interesting to see a network diagram.
acd 4 hours ago 0 replies      
Advocate the engineering principle of fault domains: isolate the problem of L2 broadcasts by putting L3 routers between the L2 domains.
windexh8er 17 hours ago 1 reply      

I have no idea where to even start - this article was written by someone who has no large-scale IPv6 deployment experience. There are back-to-back errors in what's assumed, in the expected results, and in the assertions about the vendor (Juniper) and the protocol operation (IPv6).

I'm not surprised that it's towards the top of HN, but it shows the relative understanding of the HN crowd with regard to complex network-related topics.

Zeroing buffers is insufficient
290 points by MartinodF  1 day ago   128 comments top 25
pslam 20 hours ago 2 replies      
Part 2 is correct in that trying to zero memory to "cover your tracks" is an indication that You're Doing It Wrong, but I disagree that this is a language issue.

Even if you hand-wrote some assembly, carefully managing where data is stored and wiping registers after use, you still end up with information leakage. Typically the CPU cache hierarchy is going to end up with some copies of keys and plaintext. You know that? OK, then did you know that typically a "cache invalidate" operation doesn't actually zero its data SRAMs, and just resets the tag SRAMs? There are instructions on most platforms to read these back (if you're at the right privilege level). Timing attacks are also possible unless you hand-wrote that assembly knowing exactly which platform it's going to run on. Intel et al have a habit of making things like multiply-add have a "fast path" depending on the input values, so you end up leaking the magnitude of inputs.

Leaving aside timing attacks (which are just an algorithm and instruction selection problem), the right solution is isolation. Often people go for physical isolation: hardware security modules (HSMs). A much less expensive solution is sandboxing: stick these functions in their own process, with a thin channel of communication. If you want to blow away all its state, then wipe every page that was allocated to it.

Trying to tackle this without platform support is futile. Even if you have language support. I've always frowned at attempts to make userland crypto libraries "cover their tracks" because it's an attempt to protect a process from itself. That engineering effort would have been better spent making some actual, hardware supported separation, such as process isolation.

willvarfar 1 day ago 3 replies      
Excellent point! I really hope such a sensible suggestion is added to mainstream compilers asap and blessed in future standards.

Apologies to everyone suffering Mill fatigue, but we've tried to address this not at a language level but a machine level.

As mitigation, we have a stack whose rubble you cannot browse, and no ... No registers!

But the real strong security comes from the Mill's strong memory protection.

It is cheap and easy to create isolated protection silos - we call them "turfs" - so you can tightly control the access between components. E.g. you can cheaply handle encryption in a turf that has the secrets it needs, whilst handling each client in a dedicated sandbox turf of its own that can only ask the encryption turf to encrypt/decrypt buffers, not access any of that turf's secrets.

More in this talk http://millcomputing.com/docs/security/ and others on same site.

AlyssaRowan 1 day ago 4 replies      
It's becoming gradually more tempting to write a crypto library in assembly language, because at least then, it says exactly what it's doing.

Alas, microcode, and unreadability, and the difficulty of going from a provably correct kind of implementation all the way down to bare metal by hand.

The proposed compiler extension, however, makes sense to me. Let's get it added to LLVM & GCC?

cesarb 1 day ago 0 replies      
For AESNI, you probably are already using some sort of assembly to call the instructions. In the same assembly, you could wipe the key and plaintext as the last step.

For the stack, if you can guess how large the function's stack allocation can be (it shouldn't be too hard for most functions), you could, after returning from it, call a separate assembly function which allocates a larger stack frame and wipes it (don't forget about the redzone too!). IIRC, openssl tries to do that, using a horrible-looking piece of voodoo code.

For the registers, the same stack-wiping function could also zero all the ones the ABI says a called function can overwrite. The others, if used at all by the cryptographic function, have already been restored before returning to the caller.

Yes, it's not completely portable due to the tiny amount of assembly; but the usefulness of portable code comes not from it being 100% portable, but from reducing the amount of machine- and compiler-specific code to a minimum. Write one stack- and register-wipe function in assembly, one "memset and I mean it" function using either inline assembly or a separate assembly file, and the rest of your code doesn't have to change at all when porting to a new system.

kabdib 1 day ago 2 replies      
I don't think this can be a language feature. It's more a platform thing: Why is keeping key material around on a stack or in extra CPU registers a security risk? It's because someone has access to the hardware you're running on. (Note that the plain-text is just as leaky as the key material. Yike!)

So stop doing that. Have a low-level system service (e.g., a hypervisor with well-defined isolation) do your crypto operations. Physically isolate the machines that need to do this, and carefully control their communication to other machines (PCI requires this for credit card processing, btw). Do end-to-end encryption of things like card numbers, at the point of entry by the user, and use short lifetime keys in environments you don't control very well.

The problem is much, much wider than a compiler extension.

dmm 1 day ago 0 replies      
Remember this the next time someone says "C is basically portable assembler." It's not, precisely because you can do many things in assembly that you can't directly do in C, such as directly manipulating the stack and absolutely controlling storage locations.
pbsd 1 day ago 2 replies      
> For encryption operations these aren't catastrophic things to leak: the final block of output is ciphertext, and the final AES round key, while theoretically dangerous, is not enough on its own to permit an attack on AES

This is incorrect. The AES key schedule is bijective, which makes recovering the last round key as dangerous as recovering the first.

ggchappell 23 hours ago 3 replies      
This article makes a good point, but I think the problem is even worse than he describes.

Computer programs of all kinds are being executed on top of increasingly complicated abstractions. E.g., once upon a time, memory was memory; today it is an abstraction. The proposed attribute seems workable if you compile and execute a C program in the "normal" way. But what if, say, you compile C into asm.js?

Saying, "So don't do that" doesn't cut it. In not too many years I might compile my OS and run the result on some cloud instance sitting on top of who-knows-what abstraction written in who-knows-what language. Then someone downloads a carefully constructed security-related program and runs it on that OS. And this proposed ironclad security attribute becomes meaningless.

So I'm thinking we need to do better. But I don't know how that might happen.

anon4 22 hours ago 2 replies      
If I have enough control to the point where I can read your memory in some way, I can just use ptrace. Heck, I could attach a debugger. It seems ludicrous to want that level of protection out of a normal program running on Mac/Win/Linux.

Now, if your decryption hardware was an actual separate box, where the user inserts their keys via some mechanism and you can't run any software on it, but simply say "please decrypt this data with key X", then we'd be on to something. It could be just a small SoC which plugs into your USB port.

Or you could have a special crypto machine kept completely unconnected to anything, in a Faraday cage. You take the encrypted data, you enter your key in the machine, you enter the data and you copy the decrypted data back. No chance of keys leaking in any way.

nly 1 day ago 1 reply      
Anything sent over HTTP(S), such as your credit card numbers and passwords, likely already passes through generic HTTP processing code which doesn't securely erase anything (for sure if you're using separate SSL termination). Anything processed in an interpreted or memory safe language puts secure erasure outside of your reach entirely.

Afaict there's no generic solution to these problems. 99.9% of what these code paths handle is just non-sensitive, so applying some kind of "secure tag" to them is just unworkable, and they're easily used without knowing it... it only takes one ancillary library to touch your data.

Someone 1 day ago 3 replies      
"As with "anonymous" temporary space allocated on the stack, there is no way to sanitize the complete CPU register set from within portable C code"

I don't know enough of modern hardware, but on CPUs with register renaming, is that even possible from assembly?

I am thinking of the case where the CPU, instead of clearing register X in process P, renames another register to X and clears it.

After that, program Q might get back the old value of register X in program P by XOR-ing another register with some value (or just by reading it, but that might be a different case (I know little of hardware specifics)), if the CPU decides to reuse the bits used to store the value of register X in P.

Even if that isn't the case, clearing registers still is fairly difficult in multi-core systems. A thread might move between CPUs between the time it writes X and the time it clears it. That is less risky, as the context switch will overwrite most state, but, for example, floating point register state may not be restored if a process hasn't used floating point instructions yet.

Chiba-City 1 day ago 2 replies      
Please, assembly is OK. It's not even magic or special wizardry. My dad programmed and maintained insurance industry applications in assembly side by side with many other normal office workers for decades. Assembly is OK.
db999999 9 hours ago 0 replies      

  #include <string.h>

  void bar(void *s, size_t count)
  {
        memset(s, 0, count);
        __asm__ ("" : "=r" (s) : "0" (s));
  }

  int main(void)
  {
        char foo[128];
        bar(foo, sizeof(foo));
        return 0;
  }

  gcc -O2 -o foo foo.c -g
  gdb ./foo
  ...
  (gdb) disassemble main
  Dump of assembler code for function main:
     0x00000000004003d0 <+0>:   sub    $0x88,%rsp
     0x00000000004003d7 <+7>:   mov    $0x80,%esi
     0x00000000004003dc <+12>:  mov    %rsp,%rdi
     0x00000000004003df <+15>:  callq  0x400500 <bar>
     0x00000000004003e4 <+20>:  xor    %eax,%eax
     0x00000000004003e6 <+22>:  add    $0x88,%rsp
     0x00000000004003ed <+29>:  retq
  End of assembler dump.
  (gdb) disassemble bar
  Dump of assembler code for function bar:
     0x0000000000400500 <+0>:   sub    $0x8,%rsp
     0x0000000000400504 <+4>:   mov    %rsi,%rdx
     0x0000000000400507 <+7>:   xor    %esi,%esi
     0x0000000000400509 <+9>:   callq  0x4003b0 <memset@plt>
     0x000000000040050e <+14>:  add    $0x8,%rsp
     0x0000000000400512 <+18>:  retq
  End of assembler dump.

delinka 1 day ago 1 reply      
Why are there no suggestions to change processors accordingly? Intel should be considering changing the behavior of its encryption instructions to clear state when an operation is complete or at the request of software. Come to think of it, every CPU designer should be considering an instruction to clear the specified state (register set A, register set B) when requested by software. Then, the compiler can effectively support SECURE attributed variables, functions, or parameters without needing to stuff the pipeline with some kind of sanitizing code.
erik123 1 day ago 1 reply      
It very much looks like a situation in which the system has already been compromised and is running malicious programs that it shouldn't. These malicious programs could still face the hurdle of being held at bay by the permission system that prevents them from reading your key file.

However, they could indeed be able to circumvent the permission system by figuring out what sensitive data your program left behind in uninitialized memory and in CPU registers.

Not leaving traces behind then becomes a serious issue. Could the kernel be tasked with clearing registers and clearing re-assigned memory before giving these resources to another program? The kernel knows exactly when it is doing that, no?

It would be a better solution than trying to fix all possible compilers and scripting engines in use. Fixing these tools smells like picking the wrong level to solve this problem ...

gioele 1 day ago 1 reply      
WRT the AESNI leaking information in the XMM registers, wouldn't starting a fake AES decryption solve the problem?

Also, wouldn't a wrapper function that performs the AES decryption and then manually zeroes the registers be a good enough workaround?

lnanek2 1 day ago 2 replies      
Doesn't actually seem true. OK, running the decrypt leaves the key and data in SSE registers that are rarely used where it might be looked up later by attackers. There isn't any portable way to explicitly clear the registers. Then why not just run the decrypt again with nonsense inputs when you are done to leave junk in there instead? Yes, inefficient, but a clear counter example. You could then work on just doing enough of the nonsense step to overwrite the registers.
ge0rg 1 day ago 1 reply      
Even if the proposed feature is added to C and implemented, there is still the (practical) problem of OS-level task switching: when your process is interrupted by the scheduler, its registers are dumped into memory, from where they might even go into swap space.

It would be consequential (but utterly impractical) to add another C-level primitive to prevent OS-level task suspension during critical code paths. Good luck getting that into a kernel without opening a huge DoS surface :)

Demiurge 1 day ago 3 replies      
Every time I read one of these posts about a clever "attack vector", how something can be gleaned from some special register, or a timing attack, or some such, I remember a theory that the sound of a dinosaur's scream can be extracted from the waves its impact made on a rock's crystal structure.

I googled pretty hard for real-life examples of timing attacks, and now of the use of stale data in registers, but couldn't find anything. Does anyone know of examples of this actually being done?

zvrba 1 day ago 0 replies      
Posts like this just make me more convinced that C combines the worst of "portability" and "assembly" into "portable assembly".
cousin_it 1 day ago 4 replies      
I don't completely understand the C spec. Would the following approach work for zeroing a buffer?

1) Zero the buffer.

2) Check that the buffer is completely zeroed.

3) If you found any non-zeros in the buffer, return an error.

Is the compiler still allowed to optimize away the zeroing in this case?
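
As a hypothetical sketch of the approach being asked about (not a standards-backed guarantee): with plain reads, the compiler may fold the check to "always zero" under the as-if rule and then delete both loops. Making the verification reads volatile forces real loads, which in practice keeps the stores alive on current compilers, though the C standard still doesn't promise secure erasure.

```c
#include <stddef.h>

/* Sketch of "zero, then verify" with volatile verification reads.
   The volatile casts force actual loads from the buffer, so the
   compiler cannot prove the stores dead. Illustration only; this is
   a common idiom, not a guarantee against all optimizers. */
static int zero_and_check(unsigned char *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = 0;
    for (size_t i = 0; i < n; i++)
        if (*(volatile unsigned char *)&buf[i] != 0)
            return -1;  /* "impossible", but keeps the stores live */
    return 0;
}
```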

ausjke 1 day ago 0 replies      
There are some chips that provide zeroizing of a small region of device memory when needed; it's specially designed to hold encryption keys etc., and the zeroing is done by hardware.
rsync 1 day ago 1 reply      
Would running your file system read-only and optimizing the system for fast bootup be a workaround? If so you could zero successfully by rebooting...
cheez 1 day ago 1 reply      
The suggestion has the right idea, but the wrong implementation. The developer should be able to mark certain data as "secure" so that the security of the data travels along with it through the type system.

Botan, for example, has something called a "SecureVector" which I have never actually verified as being secure, but it's the same idea.

higherpurpose 1 day ago 1 reply      
> It is impossible to safely implement any cryptosystem providing forward secrecy in C

What about Rust?

Realtime streaming from torrents in the browser
129 points by 0x4139  20 hours ago   80 comments top 22
feross 13 hours ago 2 replies      
This is neat, but what we really need is BitTorrent over WebRTC, for actual decentralized BitTorrent in the browser. See http://webtorrent.io

The project's goal is to build a browser BitTorrent client that requires no install (no plugin/extension/etc.) and fully-interoperates with the regular BitTorrent network. We use WebRTC Data Channels for peer-to-peer transport.

WebTorrent is designed to match the BitTorrent protocol as closely as possible, so when the time comes, existing BitTorrent clients can easily add WebRTC support and swarm with web-based torrent clients, "bridging" the web and non-web worlds.

WebTorrent is already working as a node.js bittorrent client (just do `npm install webtorrent -g` and use the `webtorrent` command), and as a web-based client (though the docs for this latter part are currently very lacking -- this will improve in the coming days!).

nimbusvid 19 hours ago 2 replies      
I created (and later closed) a similar service using mega.co.nz instead of torrents. The main problem I see with your approach is that you serve video from your server (and presumably do the torrent fetch server side). This opens you up to liability, makes you responsible for DMCA take downs and puts the workload on the server.

In contrast NimbusVid was entirely client side. The drawback was that the source data needed to be a web friendly seekable format; you couldn't play an arbitrary video file.

alkimie2 9 hours ago 1 reply      
Just a couple of not-very-deep comments:

The service did nothing at all on Chrome with ad-block-plus installed.

On Firefox the service did show some very nice blue balls moving from left to right after I did a search on some common video content and selected it, but that was about it.

caractacus 11 hours ago 1 reply      
China is so far ahead of the west in real-time (and live) peer to peer streaming. Look at video systems like QVOD (now dying after state intervention), Xigua, JJVod: they all have a central tracker and use swarms of users to supply realtime video streaming. PopcornTime was a decent example of this but China's been doing it for years - it was live television streaming over p2p first with PPLive, PPS, Sopcast, etc and this then morphed into streaming of films and television. All use a slightly modified version of bittorrent. Most use bittorrent hashes to mark content.

I still can't understand why this has been Bram Cohen's main focus for the last few years and he still doesn't have a working prototype.

stefan_kendall3 16 hours ago 0 replies      
Guys I just broke into some guy's house that's known to be suuuuuper litigious. Check out all my selfies!
drdaeman 6 hours ago 1 reply      
Doesn't work for me, even if I allow Facebook CDN. Search works, but then...

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://ec2-54-68-78-110.us-west-2.compute.amazonaws.com/deta.... This can be fixed by moving the resource to the same domain or enabling CORS.

lukasm 7 hours ago 1 reply      
In FF: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://ec2-54-68-78-110.us-west-2.compute.amazonaws.com/deta.... This can be fixed by moving the resource to the same domain or enabling CORS.

Chrome: Failed to load resource: net::ERR_EMPTY_RESPONSE

in p0rn mode just shows the player and nothing is going on.

ris 6 hours ago 1 reply      
Nicely done. A(nother) webapp that refuses to do anything unless you allow it to load facebook's javascript.
cantbecool 14 hours ago 0 replies      
Awesome idea, but I don't see how this will last very long. I expect your host will pull the plug in a few days to a week, especially if you're opening yourself up to downloading and then hosting the content.

I run www.moviemagnet.net which surprisingly is hosted in the US, but hasn't been taken down yet.

benjymau5 5 hours ago 0 replies      
When I can get the page to load (which I assume is due to the HN DDoS of love) I still get "video format or mime type not supported" on Firefox 32.
SimeVidas 5 hours ago 1 reply      
I can confirm that it works in Chrome. (I tried with a Korra episode. It started playing almost instantly, with a few image glitches here and there.)
TD-Linux 19 hours ago 0 replies      
What could possibly go wrong?
jpdelatorre 7 hours ago 1 reply      
There's another service that I use that does exactly the same thing, and it works well. http://put.io
rglullis 15 hours ago 0 replies      
Don't mean to hijack the thread, but have any of these stream-from-torrents services developed any kind of "magnet link metadata database"? By metadata, I do not mean the common torrent metadata, but some kind of queryable index.

What I have in mind would be some kind of mapping between a magnet link and themoviedb (or tvdb, musicbrainz id, etc). Seems like an obvious feature to me...

shmerl 9 hours ago 2 replies      
BitTorrent doesn't order file fragments by design. So how can you stream it without downloading the whole thing first (to the server, at least)? That defeats the purpose of streaming, really.
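
Streaming clients generally answer this by changing piece selection rather than the wire protocol: a peer may request whichever pieces it wants, so a player can prioritize a window just ahead of the playback position instead of rarest-first. A hypothetical sketch of such a picker (not taken from any real client):

```c
/* Hypothetical sequential-window piece picker: return the first
   missing piece within `window` pieces of the playback position,
   or -1 once the window is fully downloaded (at which point a real
   client would fall back to rarest-first selection). */
static int next_piece(const unsigned char *have, int n_pieces,
                      int playhead, int window)
{
    int end = playhead + window;
    if (end > n_pieces)
        end = n_pieces;
    for (int i = playhead; i < end; i++)
        if (!have[i])
            return i;
    return -1;  /* window satisfied: revert to rarest-first */
}
```
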
m0dest 19 hours ago 1 reply      
Hosed. Getting timeouts for /details. 2 issues in general, though:

(1) The BitTorrent protocol is not hospitable to linear downloading.

(2) Pirated video content is extremely incompatible with browser-based playback... stuff like MKV and AVI containers, DTS/AC3 audio, multiple audio tracks, segmented RAR files.

If this works even for vanilla MP4 content, I will be super impressed - but the browser is the wrong place for this type of media app right now.

malvosenior 19 hours ago 5 replies      
Haven't got this to work yet. I'd love to see a BT streaming solution not as a centralized service, but as a locally running app. Does such a thing exist?
imaginenore 19 hours ago 0 replies      
I hope you have good lawyers.
rotub 16 hours ago 0 replies      
Searched and started playing an episode of the Simpsons on my iPhone within a minute. Impressive!
rileyjshaw 17 hours ago 1 reply      

  Uncaught ReferenceError: React is not defined

nimbusvid 19 hours ago 1 reply      
Can you use magnet links?
notastartup 16 hours ago 1 reply      
How long until the creator gets arrested with a full military gear tactical swat team and paraded before CNN as an evil technical mastermind hurting the country's entertainment industry and an interview with MPAA ?

At least use a server located in countries where the US cannot freely exercise its jurisdictional powers, to buy you time. Hosting a DigitalOcean node in Singapore won't help.

Modern anti-spam and E2E crypto
383 points by timmclean  1 day ago   132 comments top 28
petercooper 1 day ago 3 replies      
It's amazing how little sender reputation can count for with Gmail in the face of other features, however. I have a good reputation as a sender but also send almost a million mails a month and I spend a lot of time investigating oddities in Gmail deliverability.

All of my mails are newsletters containing 10-30 links, and more than once I've found the mere inclusion of a single link to a certain domain can get something into spam versus a version without that link, often with no clear reason why (domains that are particularly new are one marker, though). Or.. how about using a Unicode 'tick' symbol in a mail? That can get a reputable sender into Spam versus a version without the same single character (all double tested against a clean, new Gmail account) :-) Or how about if you have a link title that includes both ALL CAPS words and ! anywhere? Your risk goes up a good bit, but just go with one of them, you're fine..

I now have a playbook based around numerous findings like this, some based on gut feelings looking at the results and some truly proven, and even with my solid reputation as a sender, I'm having to negotiate a lot content-wise each week. But do I like it? Yeah, in a way, because it's also what stops everyone else being a success at it.. Gmail sets the bar high! :-)

(Oh, a bonus one.. include a graphic over a certain size? Your chance of ending up in the Promotions folder just leapt up. Remove it, you're good. It doesn't seem to be swayed much by actual content. So I've stopped using images where at all possible now and open rates stay up because of it.)

skrebbel 51 minutes ago 0 replies      
I don't understand the objection against email costing money. I send you a mail? I pay $0.0001 to you. You reply? You pay $0.0001 back.

There is an idea that this somehow blocks access to email for people who have a hard time paying for things on the internet (for whatever reason), but it is misguided: everybody who has access to the internet pays for it. ISPs could easily give every subscriber 10,000 free emails every month.

Texting costs money and yet people do it.

What am I missing?

runeks 1 day ago 3 replies      
> A possibly better approach is to use money to create deposits. There is a protocol that allows bitcoins to be sacrificed to miners fees, letting you prove that you threw money away by signing challenges with the keys that did so.

This wouldn't work, because a miner can easily pay himself any amount of bitcoins that he has saved up in fees, and include this transaction in his own block (not broadcasting it). Thus he can basically create these "deposits" for free, and sell them for a profit.

That's the thing: whatever you try as a counter-measure, you always come back to money: in the above scenario, money would replace "deposits" because "deposits" would just be sold on the open market for money. Proof-of-work becomes money: if something important requires proof-of-work, you can be sure that a web app would surface that performs proof-of-work in exchange for money.

It always comes back to money, because whatever restriction you put on something, whether it be "pay fee to Bitcoin miners", "Solve proof-of-work puzzle", or something else entirely, these things will always end up being sold for money in an efficient market, because of the increased efficiency of division of labor: why should I use my inefficient smartphone to calculate proof-of-work, when I can pay a service with custom ASICs to do the job for me at a fraction of the cost?

As far as I can see, the only alternative that can work besides money is something that cannot be sold for money. And I can't come up with anything that fits this requirement.

sounds 1 day ago 2 replies      
One important concept that seems to be missing from the discussion is Sender Stores.

Email currently uses a Receiver Stores model. SMTP servers can relay messages, but in almost all cases the message is transmitted directly from the originator's network to the recipient's network. The storage of the message only effectively changes _ownership_ once, even if the message headers say it was forwarded many times.

That makes email a Receiver Stores model: the recipient's network is expected to accept the message at any time and then hold it until the recipient comes to look at it.

Some of the bitcoin messaging protocols propose a Sender Stores model. That is, the message may be transmitted any number of times but the recipient's network is not responsible for long-term storage. The sender's network must be able to provide the message at any time up to the point when the recipient actually looks at the message.

There are some obvious restrictions, such as requiring that the message be encrypted with a Diffie-Hellman key (negotiated when the message is first transmitted to the receiver's network) to reduce the feasibility of de-duplicating millions of messages. And in order to avoid revealing exactly when the recipient reads the message, the recipient's network doesn't ack the message for a while.

Ultimately all of this is just designed to make bulk email (slightly) more expensive. Spammers run on very, very thin margins. But it doesn't do anything to solve the problem of account termination or blacklisting.

patio11 1 day ago 0 replies      
Worth reading for confirmation regarding the importance of reputation in deliverability, which is something that is not widely understood by non-experts but which has really toothy consequences for many HNers' businesses.
idlewords 1 day ago 2 replies      
This is an incredible write-up. Can someone who knows the author plead with him to write up the long history of the Spam Wars that he mentions in this document? I could read this stuff all day.
beloch 1 day ago 3 replies      
I'm not too knowledgeable about this stuff, but would it work if end-to-end encryption was only initiated after the first time somebody replies to an address? e.g. If somebody contacts you for the first time, they lack your public key (and/or a shared secret for authentication) and must send you plaintext. Then, if you reply, you automatically provide them with your public key and/or authentication info to send you encrypted messages in the future. Thus, most spam would be in plain-text, anyone who knows how the system works would avoid discussing sensitive info in the first email they send somebody, and everybody else wouldn't know the difference.
zokier 1 day ago 0 replies      
One thing nice about E2E crypto in messaging is that it implies strong identities, which most importantly allow building whitelists with high level of confidence. And of course if we can make those identities costly to acquire/burn, either by proof-of-work or even just with a CA model, that alone should cut spam significantly.
rwallace 7 hours ago 0 replies      
> When we started gmails were about $25 per 1000 so we were able to quadruple the price. Going higher than that is hard because all big websites use phone verification to handle false positives and at these price levels it becomes profitable to just buy lots of SIM cards and burn phone numbers.

How does that work? Don't SIM cards cost more than 10 cents?

thaumaturgy 1 day ago 3 replies      
Well this is pretty neat.

I've been working on custom software to improve the spam filtering on my mail server for the last year (side project). It currently works by letting hosted users forward spam messages to a flytrap account, and then the daemon runs, reads the forwarded message, tracks down the original in the user's mail directory, does a whois on the origin in the mail headers, consults its logs, and then adds a temporary network-wide blackhole to iptables.

Originally it was intended to work alongside SpamAssassin and SQLGrey and all that, but last night I started considering replacing SpamAssassin altogether. I love SA, but the spammers are beating it regularly now. My TODO notes in the code actually say, "reputation tracking for embedded URLs, domains, ccTLDs and gTLDs, sender addresses, and content keywords." I wrote the first bits of code for reputation tracking this morning.

It's not much of a step for the software really, because it already uses embedded URLs in a message as part of the profile "fingerprint" for finding the original message from a forwarded version.

But I'm a bit chuffed to hear that I'm on the right track, considering how effective Gmail's tactics have been. :-)

Small service providers have it really tough right now. Users don't tolerate any spam at all. A few years ago, the state of the art for small independent services was SpamAssassin + SQLGrey (or other greylisting) plus a few other tricks; that's not sufficient anymore, and most of us smallfry lack the resources to come up with something much better.

After just 6 weeks in production, the software already has 20+million IPs blocked at any given time.

sgentle 1 day ago 0 replies      
I wonder if this would be an interesting application for Homomorphic Encryption. True FHE is still wildly inefficient, but there are some interesting applications like CryptDB where sort-of-Homomorphic-Encryption is feasible for certain restricted operations (keyword search being one).

In a system like that, maybe you could send your encrypted message along with some encrypted keywords that you consider to be spammy to some centralised service. That would, at least, avoid some of the client-side-filtering-is-too-hard problem.

As far as reputation, this might be one of the rare times where a Web of Trust seems like a good idea. Generating lots of false positives and negatives would be a lot less powerful if the value of those reports was filtered by how much you trust the account that made them. With email you already have an implicit source of trust, in that anyone you mutually email with is unlikely to be a spammer.

Seems like a really interesting problem space to be involved in.

ch 1 day ago 6 replies      
Couldn't some form of proof-of-work system be used to increase the cost of sending a message without it having much of an economic impact on a casual sender? Was that what he was alluding to with the "burning bitcoin" reference?
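
The idea being asked about is essentially hashcash: the sender pays in CPU time by searching for a nonce whose hash of (message, nonce) has N leading zero bits, while the receiver verifies with a single hash. A toy sketch of the scheme, with FNV-1a standing in for a real cryptographic hash (real hashcash uses SHA-1; all names here are made up for illustration):

```c
#include <stdint.h>

/* Toy stand-in hash: FNV-1a over the message bytes, then the nonce
   bytes. NOT cryptographically secure; illustration only. */
static uint64_t fnv1a(const char *msg, uint64_t nonce)
{
    uint64_t h = 1469598103934665603ULL;   /* FNV offset basis */
    for (const char *p = msg; *p; p++) {
        h ^= (unsigned char)*p;
        h *= 1099511628211ULL;             /* FNV prime */
    }
    for (int i = 0; i < 8; i++) {          /* mix in the nonce bytes */
        h ^= (nonce >> (8 * i)) & 0xff;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Sender's costly step: search for a nonce whose hash has
   `zero_bits` leading zero bits (expected ~2^zero_bits tries).
   The receiver verifies the stamp with one hash call. */
static uint64_t find_nonce(const char *msg, int zero_bits)
{
    uint64_t mask = ~0ULL << (64 - zero_bits);
    for (uint64_t nonce = 0;; nonce++)
        if ((fnv1a(msg, nonce) & mask) == 0)
            return nonce;
}
```

The asymmetry (expensive to produce, cheap to check) is the whole point; the article's objection, per runeks above, is that such work ends up tradeable for money anyway.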
anon4 1 day ago 1 reply      
So why not use one key per source, kind of something like this:

Alice wants to receive mail from Bob. Alice generates a public/private key pair and gives the public half to Bob. When Bob wants to send mail to Alice, Bob uses the public key Alice gave him. If Alice receives spam, she marks the public key it was encrypted with as "fuck it, the spammers got it" and never receives mail with that key again. Then she notifies Bob that the key he had has been compromised and sends him a new one. Alice could then, after Bob has lost her key to spammers one too many times, simply decide not to talk to someone like him.

This would give mailing list operators a large incentive never to share your email with anyone, otherwise you could just block them forever.

On the flip side, if the mailing list is really important to you, the operator could reject your new key and tell you you'll either receive their spam or you won't be part of the mailing list. Though I don't see why someone would do that in favour of just including ads in the mails themselves.

PaulHoule 1 day ago 0 replies      
I think reputations are part of it, but there are other aspects too.

I switched to gmail because my mail with every other provider and client was choked with phishing messages from major banks. So much work has been done on preventing origin spoofing in 2014 that accepting phony mail from chase.com is a sign of gross incompetence.

dochtman 1 day ago 0 replies      
I submitted this without the ?hn at approximately the same time. Pretty weird that this one gained traction while my submission did not.


p4bl0 1 day ago 2 replies      
The discussion here is already quite long so maybe I missed it, but I don't see anyone asking (or answering) the first question that came to me while reading the linked email:

Why is the cost of end-to-end crypto never taken into account?

I just can't believe that we have reached a point where it is possible to cheaply mass-mail the way spammers do if you need to encrypt each email for each recipient. That alone should be dissuasive enough; at least that's what I always thought. If I'm right, all the discussion about the need for clients to extract features from emails and send them to a necessarily trusted centralized third party is useless. But I may be missing something; where am I wrong?

fdsary 1 day ago 0 replies      
Btw, this is written by Mike Hearn, who I'd like to nominate for hacker of the year. Super cool guy, mad respect to him :)
lazylizard 1 day ago 0 replies      
could there be an antispam gateway that replies to 'maybe' (as in spam, ham and maybe) mails with a temporary url that hosts a webform, before they reach the inbox? the webform could even limit message length, prevent attachments, be protected by akismet and so on. let the message from the form be actually relayed to the real mail server. and once the recipient replies, automatically whitelist that sender or possibly even the domain?
hendzen 1 day ago 0 replies      
Mike Hearn is also a core Bitcoin developer, as well as an HN commenter. Hi Mike!
Oculus 19 hours ago 0 replies      
Really interesting article until it gets into the Bitcoin talk. I feel like his passion towards Bitcoins seeped a little too much into the article towards the end.
zerr 1 day ago 2 replies      
> we had put sufficient pressure on spammers that they were unable to make money using their older techniques

Could anyone comment how spammers make money actually?

loup-vaillant 1 day ago 4 replies      
> Botnets appeared as a way to get around RBLs, and in response spam fighters mapped out the internet to create a "policy block list" - ranges of IPs that were assigned to residential connections and thus should not be sending any email at all.

So basically, I can't send email from home? This is unfortunate. If we want freedom, we need decentralization, and this kills it.

bilalhusain 1 day ago 6 replies      
I wish Google provided an API to lookup a sender's reputation so that even a locally deployed spam filter could use the information.
orf 1 day ago 2 replies      
The Gmail spam filter is indeed impressive, but on several occasions I have found 'real' emails triggering it. Those were just times when I happened to browse the spam folder, and I hate to think what else it has swallowed.
joelthelion 1 day ago 1 reply      
Can someone explain botguard? I'm not sure I get it.
awt 1 day ago 1 reply      
No mention of Bitmessage, which provides E2E crypto and anti-spam.
Zigurd 1 day ago 0 replies      
Some of my contacts have been using verification gateways/whitelists for email for decades. If spam were to become a problem, I would use one.
danso 1 day ago 2 replies      
Fascinating read, and as amazing as email is, the OP still manages to make me realize how much I take it for granted:

> So I think we need totally new approaches. The first idea people have is to make sending email cost money, but that sucks for several reasons; most obviously - free global communication is IMHO one of humanity's greatest achievements, right up there with putting a man on the moon. Someone from rural China can send me a message within seconds, for free, and I can reply, for free! Think about that for a second.

RunSwift: Try Swift in Your Browser
111 points by jparishy  22 hours ago   21 comments top 4
jparishy 21 hours ago 2 replies      
Hi! I made RunSwift this past week and thought it was pretty neat so I wanted to share it. Have fun!
bezalmighty 6 hours ago 1 reply      
Nice work! I've been looking at using the REPL to make some web stuff too, so I understand the challenges involved here!

BTW we are running a Swift hackathon @ GitHub HQ in a few weeks, it would be cool if you could join us: http://www.swifthack.splashthat.com

arturventura 19 hours ago 1 reply      
I dabbled with implementing Swift in JavaScript, but it has proven a bit difficult. The grammar is very big, and just implementing a parser is a huge task on its own.

I would like to see someone tackle this though.

general_failure 15 hours ago 1 reply      
If I hit compile I get "gtimeout: failed to run command /Applications/Xcode6-Beta4.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift: No such file or directory"
       cached 7 September 2014 19:02:01 GMT