hacker news with inline top comments - 22 Jun 2014
U.S. Postal Service Logging All Mail for Law Enforcement (2013) nytimes.com
136 points by gwern  5 hours ago   70 comments top 14
jmadsen 3 hours ago 2 replies      
This is the biggest danger we face - the slow deterioration of our privacy to the point where, when things like this emerge, a large part of the population's reaction is "why didn't you expect this?"

In fact, I would NOT expect the post office to spend millions in technology to track letters. They shouldn't have any INTEREST in doing that - it isn't important to their main function of delivering mail.

I expected them to use tech to better be able to read zip codes & route more efficiently, and to count mail to better place people and resources where most needed. It should stop at that.

Simply because something is public does not mean the govt should be spending resources tracking and storing it

GabrielF00 3 hours ago 8 replies      
The outside of a letter is public information - when you give the letter to a postal worker you know that a series of people who you will never meet are going to have to look at that address in order to route it to the right place. It's not clear to me how you can have a reasonable expectation of privacy.
aviv 2 hours ago 0 replies      
I'm surprised the article did not raise the point that the US government effectively maintains a huge database of everybody's handwriting.
bediger4000 4 hours ago 9 replies      
This deserves a lot more attention than it's gotten so far.

Why isn't this data used to do something for good, rather than for what we can safely presume to be evil? For instance, I'm sure we could use this data to track down every evil junkmailing sub-human "direct mail marketing" moron, publicize their contact info, and see how they like getting nothing but poop in their mailboxes.

wfunction 1 hour ago 1 reply      
Reminds me: anyone know why envelopes delivered by USPS sometimes have small (< 1cm long) and consistent tears along their edges? Why does the post office do this?
zaroth 1 hour ago 0 replies      
This is just a single good example of an extremely wide reaching pattern of large scale data acquisition, processing, storage, and data mining.

License plate capture, facial recognition, mail and email envelopes, contact lists, credit card receipts, CCTV, social network trawling... soon enough drones will capture and store 24/7 aerial video of all major cities.

I take it for granted that the government will know everyone I know, know everywhere I go, and know everything that I buy, sell, earn, and save. Pretty much the only thing left you have a chance at keeping private is the content of your conversations, and good luck with that.

VonGuard 44 minutes ago 0 replies      
That's why the library is the only civil service you can trust. They delete your records when you return books, unless you opt out.
delbel 1 hour ago 0 replies      
I live out in a rural area and our post office is horrible. When I moved into town, I didn't receive mail for 8 months, even after approaching them numerous times and complaining. At one point I opened a mail box in a neighboring town. UPS and FedEx had no problem delivering packages. I actually bragged to my neighbors that "I don't get mail" at one point. Anyway, I finally complained big time after not receiving some USPS package from aliexpress, and they started to deliver. I wish I could opt out; I have never officially given out my address, and only receive junk mail. Makes me think of the Seinfeld skit, https://www.youtube.com/watch?v=Hox-ni8geIw
pascalo 4 hours ago 1 reply      
The Stasi did mail tracking on a massive scale in the GDR before the wall came down. Funny/Sad how things seem to go in circles.
greg5green 2 hours ago 1 reply      
> Together, the two programs show that postal mail is subject to the same kind of scrutiny that the National Security Agency has given to telephone calls and e-mail.

Uhh, I'm pretty sure the NSA scrutiny included the bodies of said calls and emails. I'm slightly offended by an agency tracking who I'm in contact with, but I'm very offended by an agency knowing why I'm in contact with said person.

higherpurpose 13 minutes ago 0 replies      
It's like they don't care at all about the Constitution anymore.
dylanrw 3 hours ago 1 reply      
The EU's right to be forgotten laws are sounding very attractive...
sfrank2147 4 hours ago 2 replies      
I don't think this is comparable to recent NSA actions. The Post Office is a government agency. It's not reasonable to expect the government not to keep track of the mail it delivers.
NoMoreNicksLeft 4 hours ago 2 replies      
I suspected this months ago, and I pointed out on reddit how trivial it would be to log addresses. I was called a conspiracy theory nutcase.

I'll go one further: the high-speed sorting machines for envelopes could easily be modified to photograph the interior of envelopes, and I suspect this is already happening. You only need to shine a light through them to do this.

A Sound You Can't Unhear (And What It Says About Your Brain) theatlantic.com
123 points by givan  8 hours ago   41 comments top 17
shmageggy 3 hours ago 0 replies      
Since this is a technically inclined audience, here's a tiny bit of mathematical background.

When Ms. Das says that our brains constantly use prior information, she (probably) means prior in a certain specific, technical sense. Modern cognitive scientists often think about perception and cognition in probabilistic terms, so you might characterize the brain's task in interpreting that utterance as finding the most probable sentence (S) given the acoustic input (X), or P(S|X). Bayes' rule says you can write this expression as P(X|S)P(S)/Z, where Z is a normalizing constant (don't worry about it for now).

Because these expressions are so common, we've come up with names for referring to their parts. The first part, P(X|S), is called the likelihood, which tells us how probable the input we experienced would be given a particular interpretation. For instance, if the sentence actually read "competition center" rather than "Constitution center", the sound in the recording would be less likely (although maybe still possible, aka non-zero probability, due to noise, speaker variance, etc). The second part, P(S), represents the prior probability of the sentence. Given our knowledge of English, some sentences are simply more likely than others. For instance, the sentence "colorless green ideas sleep furiously" is grammatically well-formed but tremendously unlikely.

So, to conclude, when the presenter says we use prior information, she (probably, no pun intended) means that upon hearing what the correct interpretation should be, we increase the value of P(S), thereby allowing us to compute the proper perception.
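To make the Bayesian picture concrete, here is a tiny numerical sketch (toy numbers of my own, not from any real speech model): two candidate sentences compete to explain the same degraded audio, and raising the prior on one of them locks in the interpretation.

    # P(S|X) is proportional to P(X|S) * P(S); normalize over candidates.
    def posterior(likelihood, prior):
        unnorm = {s: likelihood[s] * prior[s] for s in likelihood}
        z = sum(unnorm.values())                 # the normalizing constant Z
        return {s: p / z for s, p in unnorm.items()}

    likelihood = {"constitution center": 0.30,   # P(X|S): how well each sentence
                  "competition center": 0.25}    # explains the degraded audio
    prior      = {"constitution center": 0.5,    # P(S): before you know the answer
                  "competition center": 0.5}
    print(posterior(likelihood, prior))          # ambiguous, roughly 55/45

    # Once you are told the answer, the prior on the right sentence jumps,
    # and the posterior locks in -- the "can't unhear it" effect.
    prior = {"constitution center": 0.99, "competition center": 0.01}
    print(posterior(likelihood, prior))          # ~0.99 for "constitution center"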

Here's a nice and fairly readable overview paper (with a whole section on prior knowledge) if you think this stuff is as cool as I do -- www.indiana.edu/~kruschke/articles/JacobsK2010.pdf

*edits for clarity and formatting

suprgeek 52 minutes ago 0 replies      
The first time I encountered this effect was when I was listening to this TED Talk "Michael Shermer: Why people believe weird things" https://www.youtube.com/watch?feature=player_detailpage&v=8T...

The whole thing is worth a listen, but in the relevant clip he talks about audio illusions and gives a great example.

This just reinforces the lesson: it is the Brain that is Hearing!

ChuckMcM 18 minutes ago 1 reply      
I think it is fascinating that I can hear it and my wife can't. The only difference I can imagine is that I play music and she doesn't so perhaps I've trained myself a bit to pick out structure. Conversely she can see those 3D images when you hold up the picture to your face and I can't.

It gave rise to another thought, is there an audio equivalent of color blindness? Not deaf so much as unable to process certain sounds?

reporter 1 hour ago 1 reply      
This is really interesting to me. I can't hear it and I am a native English speaker. I was just in a room of people and played it, and everyone clicked into the meaning right away. I played it about seven more times and I guess I kind of hear it now, but still, not really.

I did horribly in elementary school and high school until I realized I was what I labelled a "visual learner". I excelled in college and am just about to finish my PhD in evolutionary biology, largely because I stopped attending lectures and decided to learn everything on my own. After listening to this illusion I decided to search for auditory dyslexia, and sure enough there are disorders like this and I definitely fit the definition, especially central auditory processing disorder. Does anyone know if this test is correlated with audio disorders, or where I can get more information on this?

jtchang 1 hour ago 1 reply      
For some reason I can't seem to hear it. I just hear the general tempo and beat but not the words.
atesti 1 hour ago 0 replies      
I'm not a native speaker, and I feel a similar effect has happened to me very often with song lyrics: parts I could not understand clearly were like text that makes no sense; I heard words other than the real ones. Once I read through the lyrics, I always hear the right words, the real text.
brainless 1 hour ago 1 reply      
This article brings me back to a thought I have been having for some time now - could we use illusions (audio, visual) to distinguish between humans and bots?

If CAPTCHAs are becoming increasingly easier to break, could illusions give stronger guarantees because they use more inherent "human" features of our brain - things that bots will not easily decipher in the foreseeable future?

srean 3 hours ago 3 replies      
It is really interesting and amusing how persistent certain 'illusions' can be even when one is exposed to it just once and for a short time.

Fun story, this was a long time ago: I was interning at Google at that time. One day I tell my then cube neighbor about an interesting experiment on visual perception that I had read about. A professor at MIT had carried out experiments on his class. Students were asked to wear prismatic goggles that shifted their vision and then try to catch objects. Hilarity ensued, but soon enough the brain adapted to the shift. Same with inverting glasses; soon the students would not even realize that their vision was inverted. The fun part was when they took their glasses off: their motor reflexes would still compensate, assuming that they were wearing those glasses. Much hilarity again. I was telling all this to my cube neighbor Michael Riley, not knowing who he was, when he says with a twinkle in his eye, "Yeah, that was us".

The most remarkable thing about these experiments that I learned from him was that the professor would provoke an illusion in the students on the first day of class. I don't remember exactly what the illusion was, but it was some visual artifact, seeing patterns that weren't visible a moment ago, much like the OP. At the end of the semester the professor would demonstrate that the entire class could still see that illusion, although they had not been exposed to it in the intervening 4 months!

I tried hard to find articles on these experiments and phenomena, but my google fu is not working today. I distinctly remember Wikipedia articles on it, but am not able to retrieve them. Either my keyword memory has gone down, or Google's search quality/relevance has.

Navigating Google was such a nerd minefield, but in the best possible way. The excited student that I was, I ended up lecturing about longest common subsequence to Thomas Szymanski, not knowing his association with the history of diff on unix. Same thing happened with SVMs: I was explaining their merits and demerits to Corinna Cortes, my other cube neighbor, not knowing she was the first author of the paper on SVMs. Not only would they not take offence, they would all keep indulging me. Then one day I step out for a break, and a senior person whom I knew had a cube on the row behind me approaches me, apologizing profusely and ad infinitum that he had got locked out - could I please let him in. No big deal, but he just would not stop apologizing and thanking me. A few days later a co-intern asks me if I know that guy. I said sure, I let him in once. He says no, do you know who he is, and asks me to check out the name tag on his cube. I saunter off: "Brian Kernighan"!

An important takeaway of this internship was to experience the humility of all these people, and the sense that you are surrounded by such iconic stalwarts in CS and you wouldn't even know it because they are so... normal.

Coming back to illusions, another visual/auditory one that does not stop working even when you know exactly what is going on is the McGurk effect https://www.youtube.com/watch?v=G-lN8vWm3m0

EDIT: Ummm, so many downvotes? I did not see that coming; I would greatly appreciate knowing what you found downvote-worthy. It is always insightful to know how one's comment may rub someone the wrong way. Feel free to reply, I promise no offence will be taken and I will learn something along the way.

@tbirdz thanks for the perspective, I did not realize that it could come off as bragging. IMO you can brag only about things that you have achieved using your own efforts. For me it was a mix of foot in the mouth and an important learning experience, especially in humility.

ajuc 1 hour ago 0 replies      
I've read the description before listening to the sample and I've heard it the first time.

Not a native speaker, if that's important.

Riseed 4 hours ago 3 replies      
The first time through, I heard "[jibberish jibberish jibberish] is at the next stop." (Perhaps I've spent too much time on public transport.) What does that say about my brain?

But yes, once I heard the whole sentence, I couldn't not hear it.

baby 3 hours ago 2 replies      
Doesn't really work on me, maybe because I'm not a native speaker?
the_cat_kittles 3 hours ago 0 replies      
this might be a good lesson in why it's so hard to have insights sometimes. in this case, you need to exert energy to not hear the words after you know what the phrase is. in the context of problem solving, maybe this phenomenon can cause you to stick at a local maximum - your brain is forcing the information to conform to your best mental model, and that makes your search for a more optimal solution even harder. those visually ambiguous pictures that have two or more "sticking points" are another example. in any case, that audio clip is an absolutely amazing example!
emgeee 3 hours ago 2 replies      
This reminds me of when people talk about hearing demonic messages in popular songs when played backwards. I think Stairway to Heaven is one of the more famous examples
dav- 3 hours ago 1 reply      
Wow, that's so cool - are there any other examples of these audio illusions?
opendais 3 hours ago 0 replies      
Did anyone else find the only part of the jibberish they heard was "...next stop" and they heard it on the first pass?
nomnombunty 4 hours ago 0 replies      
nooooo i cannot unhear it! #foreverstuck
dickdales 3 hours ago 1 reply      
I really don't like the philosopher's anecdote about cognition and perception. Why is it so shocking that perception is simply a chemical reaction to sensory input and cognition is an identified repetition of such?
Eigenmorality scottaaronson.com
294 points by bdr  14 hours ago   66 comments top 20
knowtheory 13 hours ago 2 replies      
This is very long but worth reading.

The modeling exercise herein basically uses a game-theoretic model to test out some really dumb/simplified models of cooperation, asking whether the observed behaviors approximate anything our intuitions might call moral behavior, up to and including an 'eigenjesus' and an 'eigenmoses' pitted against tit-for-tat bots and the like.

derefr 11 hours ago 1 reply      
As Aaronson points out, PageRank has a few edge-cases when used to do this analysis, basically because it treats its graph as a closed, internally-solipsistic system--it has no definition of morality other than what each of its nodes prefer of one-another. This works if you have a diverse spectrum of preference functions distributed among the nodes (the result tends toward a "live and let live" meta-ethics), but if your analysis is aimed at a preferentially homogeneous group (e.g. Nazi Germany), PageRank won't give you the solution of "move the 'evil' majority toward the tenets of the good minority." It'll instead suggest that the optimal system would have the 'good' minority give up and become 'evil'.
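A toy power-iteration sketch makes this failure mode concrete (my own illustration, not Aaronson's code): score each agent by how much it cooperates with high-scoring agents, and a cohesive majority absorbs all the score regardless of what it actually believes.

    import numpy as np

    # C[i][j] = 1 if agent i cooperates with agent j. Agents 0-3 are a
    # cohesive majority cooperating only among themselves; agent 4 is a
    # principled holdout who refuses to cooperate with them.
    C = np.array([[1, 1, 1, 1, 0],
                  [1, 1, 1, 1, 0],
                  [1, 1, 1, 1, 0],
                  [1, 1, 1, 1, 0],
                  [0, 0, 0, 0, 1]])

    score = np.ones(5)
    for _ in range(50):            # power iteration toward the top eigenvector
        score = C @ score          # "moral" = cooperates with "moral" agents
        score /= score.sum()

    print(score.round(3))          # [0.25 0.25 0.25 0.25 0.]: the bloc is
                                   # declared good, the holdout evil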

Scott Alexander suggests (http://slatestarcodex.com/2014/06/20/ground-morality-in-part...) you could instead use DW-nominate, the tool that does meta-cluster-analysis to mathematically detect "party lines" in congress (which are basically just clusters in human-utility-function-space anyway), to find what preference-subfunctions (e.g. helping old ladies cross the street, returning a wallet you find laying on the ground) correlate together into a cluster (that might be called 'goodness') -- and then grounding/normalizing the PageRank analysis with that, such that you can tell whether the system as a whole is in a 'good' or 'evil' state.

jonnathanson 9 hours ago 2 replies      
Please note that what follows can be interpreted as criticism, but it's not intended as such. I found this article quite interesting, and for me, it was the starting point for a lot of different thoughts about game-theoretical approximations of morality. So what follows is a somewhat tangential addition to the article, and not a critique of it.

My problem is not with the "eigenmorality" concept, nor with the various takes on playing it out across consecutive Prisoner's Dilemma sessions. That aspect is extremely interesting. Rather, my problem is with the Prisoner's Dilemma as a valid ground on which to test something like morality.

The Prisoner's Dilemma is a foundational, theoretical framework for evaluating human behavior. And it's a wonderful, elegant framework. But it treats humans as emotionless agents, and the "punishment" as an abstract, theoretical, rationally navigable scenario. Place real human beings into the Prisoner's Dilemma, with real-world consequences, and you get all sorts of unexpected results. The Prisoner's Dilemma is notorious for holding up perfectly fine in vitro, but less so in situ. Cultural conditioning plays a huge role in how real people act in the game. So do emotions, and irrational heuristics like overemphasizing loss aversion. (Tversky and Kahneman's work has a lot to say about the latter.)

Using the Prisoner's Dilemma as a proving ground, I think you'd arrive at an abstract model of morality -- but you wouldn't capture how morality actually plays out with quasi-rational, emotional, circumstantially driven, human agents. And, philosophically speaking, that's where morality actually counts the most.

jzwinck 5 hours ago 0 replies      
"The deniers and their think-tanks would be exposed to the sun; theyd lose their thin cover of legitimacy."

Don't we have the ability to do this now by visualizing or analyzing citations? A set of "fake" think-tanks which promote bogus ideas should be identifiable as a mostly-disconnected component of a graph today. We don't need to get each think tank's explicit opinions about the others. Aaronson points out this single-purpose inquiry would encourage gaming, but analyzing a graph built for other incentives may give more "honest" results (at least for a while).

And we have, at least five years ago: http://arstechnica.com/science/2009/01/using-pagerank-to-ass... . You can follow links from there to a project called EigenFactor, academic research about shortcomings of PageRank in this application, and more.

Results of such analyses should be used as input to human thought processes and not some sort of legislative robot.

mrb 9 hours ago 2 replies      
Wow: "The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil." The author says this diverge violently from most peoples moral intuitions, but actually this result is PRECISELY what moral relativism predicts. See, there are 2 school of thoughts attempting to explain where morality comes from:

- either morality is an absolute concept (things are inherently good or evil, theists might say this good/evil is defined by a god or gods). This is http://en.wikipedia.org/wiki/Moral_absolutism

- or morality is relative, defined by people, defined by cultures (what one culture might consider immoral, another culture will consider it moral, and nobody is inherently right or wrong). This is http://en.wikipedia.org/wiki/Moral_relativism

If moral relativism is right, it would be absolutely expected that the 98% are "almost perfectly good", since they do things that the majority consider good. What a fantastic essay...

andrewflnr 10 hours ago 1 reply      
I think the definition of morality in the article is far too simplistic. In my (Christian) view, it's an important aspect of moral maturity to be able to be nice to immoral people without cooperating with their goals. Besides that dichotomy, the article already mentions that the model lacks critical information, specifically, the actors don't know whether the other actors they're [not ]cooperating with are "good" or "bad".

That said, I find this approach to defining morality fascinating. Maybe if the definitions are refined it will manage to tell us something we already know (not entirely sarcastic; that would be legitimately impressive for a mathematical construct regarding morality).

MichaelDickens 11 hours ago 0 replies      
This is an interesting idea. Aaronson may be joking when he says he's "solv[ed] a 2400-year-old open problem in philosophy," but in case he's not, this doesn't come anywhere close to solving ethics. Philosophically speaking, it's still necessary to show why his definition of "moral" holds up. All he's done is assess a certain quality and then call it "morality." I think it could better be called "meta-cooperativeness" or something like that.

I think Aaronson realizes this, because he does talk about how Eigenjesus and Eigenmoses don't accord with our moral intuitions in some cases. He also addresses this somewhat in the section "Scooped by Plato." His major point--that something like Eigenjesus can be useful, even if it cannot deduce terminal values--still holds.

ipsin 11 hours ago 0 replies      
I found the addendum about the time-sequence of bad acts to be the most interesting, in that how you approach the problem leads to another wide spray of outcomes.

Scott mentions the "forget the past" and "address root causes" sides, but how do you deal with things in the middle?

Even being able to provide a model that allows for injustices from centuries ago would be impressive, but how should such things decay? Again, the same pressures come into play, based on the interests of the judged parties.

tveita 7 hours ago 0 replies      
His definition is much closer to "popularity" than to anything I would recognize as "morality".

It's strange to exclude intent from your model when it's an important factor in almost all systems of morality.

MichaelDickens 11 hours ago 1 reply      
This seems related to the idea of coherent extrapolated volition (https://intelligence.org/files/CEV.pdf). Both have some of the same problems--in particular, setting up the system requires making moral judgments about how to do so, so it's not actually value-neutral.

(Aside: If I have two completely different thoughts about an article, should I post them in two separate comments or in the same comment?)

MarkPNeyer 12 hours ago 3 replies      
my friends and i had started on this already. i had a hard time explaining to people why it was valuable; looks like scott has done it for us.

please help us!


right now all we have is a way to state which facebook users a person trusts. there's a chrome extension to help with this. it's extremely basic.

i have a server running at https://dewdrop.neyer.me - we need a lot more help!

i'm just putting it on github now - so i'll update the readme in a few minutes.

minority 10 hours ago 0 replies      
Considering a majority of people who agree with each other to be "moral" is highly problematic. Even if everyone in the system is morally equal, this system would automatically create and enhance differences between groups.

The author uses the example of climate-change deniers to express the opinion that minority groups have "withdrawn itself from the main conversation and retreated into a different discourse."

Is this true of other minority groups - feminists? Homosexuals? Minority ethnic groups? It seems highly awkward to claim the same thing.

A better system would be one which considers how to cater for individuals rather than declaring a populist majority to be a special, protected ingroup. There's enough of the latter already.

hhm 12 hours ago 1 reply      
This Tolkien quote builds a similar circular definition of "worth", which might be amenable to the same kind of analysis. https://twitter.com/JRRTolkien/status/480127254857400320
bryan_rasmussen 11 hours ago 1 reply      
It seems to me that this would only be of interest if it can be shown that an immoral person is not someone that cooperates with other immoral people but not with moral people.
cma 10 hours ago 1 reply      
Needy babies are moral monsters according to many of these models.
neotoy 8 hours ago 0 replies      
Good read, but I can't help thinking that by the time all of this had been figured out, our civilization would be long gone.
javert 10 hours ago 0 replies      
Happiness is the only intrinsic value for a human being, and thus a moral person is a person who pursues happiness effectively. (How to do that is another story.) However, Aaronson's proposed definition of a moral person is not the effective way to pursue happiness. Thus, it is immoral.

It's also immoral to call for all of us to sacrifice industrial output for future generations to solve the supposed climate change problem. There is no reason to presume that future generations are more important than the present generation (in fact, it is demonstrably the case that they are not). Thus, this position is profoundly immoral.

However, the implicit assumption that sacrifice is moral is common to most world religions and also altruism, which is probably where he imported it from. All of them are morally bankrupt. A scientist should be able to be skeptical and see such logical flaws, even if he is not able to propose the correct solution.

lohankin 9 hours ago 1 reply      
I was following Scott's posts for a while. The most notable feature of those posts: everything he says is predictable. The blog is designed to appeal to the liberal academic establishment, which knows the answers to all important questions, and is never in doubt. I don't remember a single example of Scott's opinion which could be deemed controversial in any sense. "Eigenconformism" would be a better name for his blog.
yason 8 hours ago 1 reply      
There is no right or wrong, just acts with inescapable consequences and your freedom to learn something from your choices.
hyperion2010 7 hours ago 0 replies      
You can't "solve" this problem in the same sense that you cannot develop a universally consistent foundation for mathematics. Goedel is there preventing you from EVER proving that one set of axioms is better than another.

I again wrote a longer response but have shortened it, because the author seems to have committed a rather grave error, which is to assume that human moral 'intuition' is in any way consistent. There are heaps of evidence (cue the trolley car) that human moral judgements really should not be considered a guide for anything. The fact that we can capture the disasters of collective morality observed under various regimes during the 20th century ought to tell us that following those models as a universal foundation for human relations is a terrible idea.

Might also be worth paying a visit to eigennicolo and not adhere to such rigid systems.

The Lambda Calculus for Absolute Dummies (2012) palmstroem.blogspot.com
84 points by agumonkey  7 hours ago   20 comments top 8
Jupe 3 hours ago 4 replies      
I see no inherent value in writing 2x3=6 as:

2 x 3 = MULTIPLY 2 3 : (λabc.a(bc)) (λsz.s(s(z))) (λxy.x(x(x(y)))) = λc.(λsz.s(s(z)))((λxy.x(x(x(y))))c) = λcz.((λxy.x(x(x(y))))c)(((λxy.x(x(x(y))))c)(z)) = λcz.(λy.c(c(c(y)))) (c(c(c(z)))) = λcz.c(c(c(c(c(c(z)))))) = 6

This makes Brainf*ck look elegant!
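For comparison, the same Church-numeral computation is compact when made executable (a standard encoding, written here in Python; my own sketch, not from the article):

    # Church numerals: the number n is "apply a function n times".
    TWO      = lambda s: lambda z: s(s(z))            # λsz.s(s(z))
    THREE    = lambda x: lambda y: x(x(x(y)))         # λxy.x(x(x(y)))
    MULTIPLY = lambda a: lambda b: lambda c: a(b(c))  # λabc.a(bc)

    SIX = MULTIPLY(TWO)(THREE)

    # Read the result back as a Python int by counting applications.
    to_int = lambda n: n(lambda k: k + 1)(0)
    print(to_int(SIX))  # 6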

dkural 3 hours ago 1 reply      
What's so funny about watching/reading all these explanations (videos, images, tutorials, etc.) is the utter lack of mastery displayed by all the computer scientists / programmers in presenting a simple, rigorous, complete mathematical definition. By leaving out these essential details, and being unable to actually define what 'the lambda calculus' is, they end up confusing people. Jim Weirich never actually articulates a clear definition and ends up uttering a jumble of words... although I'm sure he understands it intuitively, he's not able to convey it.
optimiz3 2 hours ago 1 reply      
People hate math formalism because it only makes sense if you already understand it.
samirmenon 6 hours ago 1 reply      
I don't actually find this explanation easier to understand...

I find "To Dissect A Mockingbird" a more intuitive and simpler explanation of Lambda Calculus. I think the visuals help a lot.


eggy 4 hours ago 1 reply      
I thought it was clear. I would start with 1, not 0, for the first example of the derivation of the natural numbers via succession, or the related Peano axiom. I know some texts start with 1 and others with 0, but given that counting most likely started with objects, it is more fundamental to begin with 1. The concept of nothing, or zero, just seems an abstract step away. Nice treatment of the lambda calculus.
vitorarins 2 hours ago 1 reply      
I wish I was an absolute dummy to get all of that...
cannon10100 5 hours ago 0 replies      
This article is actually really confusing...
Lessons from NYC's improperly anonymized taxi logs medium.com
65 points by vijayp  7 hours ago   12 comments top 6
abalone 1 hour ago 1 reply      
Whoa, whoa.... You're telling me there's a public data set of all taxi trip geolocation data with GPS precision? That's f'ing insane!

I think there's a MUCH bigger privacy issue here than what the author focuses on.

Couldn't you deduce many passenger identities based on addresses? There's a lot of scenarios where passenger identities could be effectively de-anonymized, just based on GPS data. You could then use this data set to analyze their comings and goings.

1. For people who live alone in a single family home, you can pretty much completely track when and where they went by taxi. From this you can deduce a lot about their interests, lifestyle, workplace and schedule, private life, etc. It's profoundly invasive.

2. Even if there's a few people sharing an address, the other dropoff/pickup point can be used to narrow down the likelihood of who it is, especially when combined with other easily obtainable data.

For example if you knew an employee (e.g. that cute barista) lived in a certain neighborhood you could track their trips to/from work and deduce their home address.

Or if you knew there was only one senior citizen (or Muslim, etc.) living in a building, a regular trip to a senior center (or mosque) would reveal when their apartment is vacant.

Or if there's only one young man in a building, a single trip home from a gay bar could out them.

Holy shit.. can you imagine someone just plotting all the trips from a single gay bar? Listing off all the connected residential addresses? And not only that, any subsequent trips home from those addresses the next morning? Taking the walk of shame to a whole new level!

Likewise trips could be used to deduce affairs and other deceptions by fellow residents. "You said you were working late, but the only taxi trip to our building that night was from a bar."

This is just off the top of my head.. I feel I could go on for hours listing all the possible ways this data set could be exploited.

How is this not front page New York Times???

salmonellaeater 1 hour ago 0 replies      
These are old lessons: in 2006 AOL [1][2] and Netflix [1][3] both released datasets that were supposed to be anonymized but were easily de-anonymized. There are older examples based on Census data[4]. It's difficult if not impossible to release a dataset that is both useful and truly anonymized; in Schneier's words:

"The obvious countermeasures for this are, sadly, inadequate. Netflix could have randomized its dataset by removing a subset of the data, changing the timestamps or adding deliberate errors into the unique ID numbers it used to replace the names. It turns out, though, that this only makes the problem slightly harder. Narayanan's and Shmatikov's de-anonymization algorithm is surprisingly robust, and works with partial data, data that has been perturbed, even data with errors in it."

[1] https://www.schneier.com/blog/archives/2007/12/anonymity_and...

[2] http://www.securityfocus.com/brief/286

[3] http://www.securityfocus.com/news/11497

[4] http://crypto.stanford.edu/~pgolle/papers/census.pdf

Scaevolus 1 hour ago 0 replies      
Anonymization projects should really invest in an hour of consulting time with a cryptographer-- they would be able to see these flaws instantly.

Nit: this is a lookup table, not a rainbow table. Rainbow tables involve a clever optimization that compresses multiple passwords (in a chain) into a single entry in the table, saving a great amount of disk space.
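To see how small the keyspace is, here is a sketch of the lookup-table attack (my own illustration; one of the real medallion patterns is digit-letter-digit-digit, but treat the details as an assumption):

    import hashlib
    import string
    from itertools import product

    # The digit-letter-digit-digit pattern (e.g. "5X55") has only 26,000
    # candidates: hash them all once, then invert any leaked hash by lookup.
    digits, letters = string.digits, string.ascii_uppercase
    table = {hashlib.md5(f"{a}{b}{c}{d}".encode()).hexdigest(): f"{a}{b}{c}{d}"
             for a, b, c, d in product(digits, letters, digits, digits)}

    leaked = hashlib.md5(b"5X55").hexdigest()   # an "anonymized" ID from the data
    print(table[leaked])                        # -> '5X55': de-anonymized

A keyed construction (e.g. HMAC with a secret key) or a table of random per-medallion IDs would defeat the precomputed table, though each driver's trips would still be linkable to each other.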

ddlatham 1 hour ago 1 reply      
Suppose they had generated a random unique ID for each driver and used that instead of a hash throughout. If you had a record of a single ride you made with a taxi driver, you could still find that ride in the database (start location, time, stop location, time). Then you can take your driver's ID and track all other trips that driver has made. Is that truly anonymous?
lumpypua 1 hour ago 0 replies      
Stop rainbow attacks peeps, salt your hashes.


walterbell 3 hours ago 2 replies      
Would you consider anonymizing the data properly and re-publishing as canonical torrent for future analysis?
Torch7 Scientific computing for LuaJIT torch.ch
4 points by ot  33 minutes ago   2 comments top 2
ot 29 minutes ago 0 replies      
I came across Torch in this FB comment by Yann LeCun:

    I have moved to Torch7. My NYU lab uses Torch7, Facebook AI    Research uses Torch7, DeepMind and other folks at Google use    Torch7.
Apparently it is used in many deep learning labs. I think that in Toronto they mostly used Matlab and Python; does anybody know if this is still true?


srean 18 minutes ago 0 replies      
Upvoted the story because I am keen to see some discussion around it. I have kept an eye on Torch7, but there have been a few things that make me a little wary. A prominent one is the sheer number of unreplied-to posts on their mailing list. The Numpy/Scipy and Julia lists seem much more welcoming and prompt. Even if people there cannot help, they do respond.

Now if we leave the social side of it aside, I would really like to know why Torch7/Lua and not, say, Numpy/Scipy/Theano or Julia for that matter. To me scratching an itch is a fine enough excuse; I just wanted to know if there are any compelling technical advantages to doing it this way.

Freshmeat.net, 1997-2014 jeffcovey.net
212 points by ingve  16 hours ago   63 comments top 23
liedra 14 hours ago 2 replies      
I was an editor for freshmeat back in the early 2000s and it was a lot of fun. I was there when fm acquired themes.org and was one of the main people tasked with ensuring the HUGE db of themes was sanely migrated into the fm backend. We had stupidly high standards, and I don't think a lot of people really knew how much we threw out over the years, or how much we sanitised the entries (so much broken English!). We also had to (to a certain degree) sanity check the projects - make sure they looked like they did what they did. One of the best projects I ever had to say no to was a "next-gen compression tool" which came during a bit of a fad for these in the early 2000s and basically converted everything to binary and got rid of the 0s. (Not surprisingly, there wasn't an "unzip" tool!) Nice try, guy!

Another story I remember is all the flak we got when we opened the osx.freshmeat.net section - we got so much criticism about how we'd sold out etc. etc. but it actually turned out to be quite a good repository for OS X apps for a while until iTunes kinda took over.

Good times :D

stonogo 12 hours ago 0 replies      
Freshmeat stopped being useful when they converted the well-organized and useful software trove into a worthless pile of tag garbage, and deleted a third of the information in the process.

Before, it was possible to find, for example, a TUI email client written in perl with a BSD license, thanks to the ability to drill down into the trove. After the redesign, it was goddamn near impossible to find anything -- especially things with specific licenses.

I, and just about everyone I know who used it, stopped using it not long after they started focusing on toy web programming more than information curation. I'm sad they mismanaged it to death, but I'm not going to miss it in its terminal state.

js2 15 hours ago 4 replies      
For the LNUX IPO, VA made F&F shares available to anyone who had contributed to Linux. They were fairly liberal in how they interpreted this, and I think the contribution I used to justify my purchase was the "-e" switch to chpasswd.

Anyway, as I recall I was able to purchase 140 shares at $30. The day of the IPO it hit $300+ and I was too stupid to sell (gotta get those long term capital gains rates..doh). I finally sold those shares years later at something like $1.

Oh well. You win some you lose some.

Gracana 15 hours ago 3 replies      
Here's an old archive.org snapshot for anyone who (like me) had trouble recalling what freshmeat was all about: https://web.archive.org/web/20050301154742/http://freshmeat....
Zelphyr 10 hours ago 2 replies      
"always heard that ThinkGeek, the online retailer, was by far the most (only?) profitable part of the business for years on end"

As a former employee of [VA [Research[ Systems]|Linux|Software] SourceForge] I also heard this. Though I believe Slashdot was also a profit center for awhile.

matt__rose 15 hours ago 1 reply      
Interesting perspective from esr on the rationale behind the takeover of Andover in the comments
zeruch 4 hours ago 1 reply      
I used to check FM every morning for years as part of my ritual to look for "neat stuff" to install and play around with on my various boxen. It was great while it lasted. I worked at VA and a common form of watercooler talk in the late 90s was akin to "hey, I found $APP and I think it might really let me do $USEFULTHING or at least be an interesting waste of time"
blablabla123 15 hours ago 2 replies      
Actually, the site used to be really useful; somehow I just stopped using it, though. One day Ubuntu came out, the package manager (including dep mgmt) worked really well, and there was a ton of great software in the repository.

But in the tar xzf ... ; ./configure && make && sudo make install time it was really nice.

I wish something like that existed for JS libraries.

taspeotis 6 hours ago 0 replies      
I'm not sure what Dice Holdings thought they'd achieve by buying Geeknet. Last time I checked, they were busy curating slashvertisements and Business Intelligence "insight" articles for Slashdot.

I wonder how that's working out for them...


macintux 11 hours ago 1 reply      
I recall my then-wife looking over my shoulder as my web browser auto-completed "freshmeat.net". Unsurprisingly, she immediately grew suspicious and expected to see a porn site pop up.

An institution for a very long time, definitely something from a different era. Farewell old friend.

dspeyer 14 hours ago 2 replies      
ESR is talking about building a replacement


ben1040 13 hours ago 2 replies      
It's amusing how VA has gone from a company that made and sold Linux systems, to a company that makes and sells funny T-shirts to people who use Linux systems.
adulau 14 hours ago 1 reply      
The dataset/database of freshmeat/code is quite interesting on a historical perspective of free software. Do you think that the owner would be able to share freely the database? Someone in contact with them?

This would be a nice addition to http://ckan.org/.

booleanbetrayal 3 hours ago 0 replies      
RIP ... i actually did one of the earlier freshmeat logos back before it had its own proper domain.


jimwalsh 15 hours ago 0 replies      
That was an interesting article to read. I have many fond memories of the very early days of Freshmeat and my introduction to OSS and Linux. Freshmeat helped people share their products and was a great place to find new and upcoming projects to check out or even help on.

I feel like the late 90s was such a Wild West time for Linux. Linux is in a great spot now, best it has ever been, but for whatever reason the community just feels incredibly different for me now. It's probably just me aging.

BillyParadise 6 hours ago 0 replies      
You know, it's bad when you remember going to the site but forget what you went there for. I thought I remembered it as a daily wacky-news site like fark. I guess I was wrong :)

I'll echo several other replies - it was great - nay, essential - before package managers became good.

Maybe they should have bought and changed their name to yum.com or apt-get.com (instead of freecode) and then more of us would still remember why they went to the site.

rg3 11 hours ago 0 replies      
Wow. I was still using it to monitor new releases of iotop. I think this is going to be shocking news for anyone who was running Linux or BSDs in the early 2000s.
jamespo 12 hours ago 0 replies      
Still remember the days when I used to religiously read their NNTP feed, until it got shut down. Still subscribe to the RSS, not for much longer I guess.
giis 13 hours ago 0 replies      
At least a farewell mail to users would have been nice. I had around 5-6 projects with 70-80 subscribers at freshmeat.net. I would be extremely glad if freshmeat.net allowed its users to send their project subscribers a "big thank you" note for their support.

Thank you Freshmeat.net (aka freecode.com)

im3w1l 13 hours ago 0 replies      
RIP. I have fond memories of freshmeat.
edwintorok 15 hours ago 1 reply      
Is something wrong with the CSS on freecode.com? It looks very different from how it used to be.
natch 13 hours ago 1 reply      
Writers, please don't assume your readers know what you are talking about. It is your job to explain it. The article should have started with a short explanation of what freshmeat was.
inanutshellus 14 hours ago 0 replies      
Oh thank god. I misread it as Red Meat had closed operations. (http://www.redmeat.com/max-cannon/FreshMeat)


Bare Metal Assembly Raspberry Pi Starfox Tribute youtube.com
80 points by thisisnkp  10 hours ago   10 comments top 9
userbinator 5 hours ago 0 replies      
Great work, it's always nice to see more Asm projects!

"Bare Metal" - does this mean the RPi can run blob-free?

A possible improvement I suggest is to gfx_draw_line in gfx.s - using a fixed-point algorithm could be simpler and faster: http://hbfs.wordpress.com/2009/07/28/faster-than-bresenhams-...
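For readers who haven't seen the idea, here is a minimal fixed-point sketch (written in Python for clarity; the linked post develops it properly, and this handles only the shallow-slope case):

    # Fixed-point DDA line drawing in 16.16 format; assumes 0 <= dy <= dx.
    # One add per pixel and no per-pixel branch, unlike Bresenham.
    def draw_line(plot, x0, y0, x1, y1):
        dx, dy = x1 - x0, y1 - y0
        step = (dy << 16) // dx        # slope as a 16.16 fixed-point integer
        y = (y0 << 16) + (1 << 15)     # bias by half a pixel so >>16 rounds
        for x in range(x0, x1 + 1):
            plot(x, y >> 16)           # integer part of y
            y += step

    draw_line(lambda x, y: print(x, y), 0, 0, 7, 3)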

samwilliams 8 hours ago 0 replies      
This is extremely impressive - well done all!

There is a bare metal chess game [0] for the Pi that was presumably another teams entry for the same assignment (they are both from Imperial).

[0] https://github.com/xu-ji/assembly_chess/

JamesAn 33 minutes ago 0 replies      
A welcome return to the Acorn/RISC OS days where "100% ARM assembler" was a back-of-the-box boast for many games and applications (Sibelius).
kator 8 hours ago 0 replies      
Wow that brings back memories of building games on a TRS-80 Model I in z80 assembly!

Good show. I've often felt low level was a dying art; perhaps I'm wrong, and stuff like this will push people to learn what a register is and what "flags" are.. :-)

thisisnkp 10 hours ago 0 replies      
marcosscriven 4 hours ago 0 replies      
Very impressive. How things have changed - we did nothing quite so fun and practical in first year computing at Imperial back in 1995! Plus, now I feel old :)
retroencabulato 5 hours ago 1 reply      
I'm impressed first year students can write such clean assembly. Also that they can write both driver code and a higher level rasterizer.
parley 9 hours ago 0 replies      
Nicely done! I remember getting the original game for Christmas one year as a kid, and it was lots of fun. This brings back memories. Kudos!
SSilver2k2 8 hours ago 0 replies      
This is amazing! -Shea
Fragments of an Anarchist Anthropology (2004) [pdf] abahlali.org
29 points by gwern  5 hours ago   9 comments top 3
gwern 5 hours ago 1 reply      
It's a long piece, so one might wonder why I found it interesting; I've excerpted some of the interesting parts at https://plus.google.com/103530621949492999968/posts/Ez6Xh8Ha... - I particularly found interesting the discussion of how egalitarian tribes regard politicians & power.
eggy 2 hours ago 2 replies      
He makes the assertion (pg 71, section 3) that slavery, where somebody rents you out to another, and wage labor, where you rent yourself out, are equivalent arrangements. This is an emotional appeal rather than a logical one, and this type of 'reasoning' seems to be at the root of more assertions in the paper. You take the job that is available under the circumstances, primarily to put food in your mouth and a roof over your head; secondarily, you work and hope for better, to reach your goals and dreams.
dethstar 2 hours ago 0 replies      
Thank you, this is great.

About the first section, where it talks about the lack of anarchism in academia: my friend is actually working on her master's thesis (in history) about anarchism (here in Mexico). She's always telling me about how she struggles with her teachers, and how some of them just want to tell her off with no arguments, like just telling her that anarchists are violent.

Rise of the American Professional Sports Cartel systemsandus.com
32 points by jonnyy  6 hours ago   2 comments top 2
mathattack 2 hours ago 0 replies      
These truly are cartels. The big change that happened years ago was the owners realized that they weren't in competition with each other on the field - the important competition was with other leagues and recipients of entertainment dollars. Then they started to cooperate, collude, exclude entry....
tarre 3 hours ago 0 replies      
This article could be applied to the organizers of huge international sporting events, like FIFA and the Olympic committee. I would love to see the books of those organisations checked.
Secret Trade in Services Agreement (TISA) Financial Services Annex wikileaks.org
98 points by jamesbritt  11 hours ago   32 comments top 3
lifeisstillgood 9 hours ago 2 replies      
Secret agreements are automatically suspect but this seems like more of the same rather than a change in direction.

This is partly a result of the move to multi-lateral trade talks (a bad idea, but when WTO rounds take a decade it's not unexpected), and partly inertia - there is little common agreement on how to prevent the next crash, so without a better idea we seem to follow the inverse of Einstein's quip: "the definition of insanity is repeating the last mistakes but hoping for a different outcome".

Minor thought: this leak is still a pretty big deal. But it feels like WikiLeaks is the wrong place for this - like the main media organisations should have already got their investigatory acts together and made WikiLeaks irrelevant.

Just wondering ...

cb3 8 hours ago 1 reply      
Who are the people negotiating and agreeing to these 'agreements'(this, ACTA, etc) and what could their justification for keeping them secret possibly be?
javert 9 hours ago 5 replies      
> proponents of TISA aim to further deregulate global financial services markets.


> The draft Financial Services Annex sets rules which would assist the expansion of financial multi-nationals mainly headquartered in New York, London, Paris and Frankfurt

These multi-nationals are purely a product of regulation (in this case, more specifically, regulatory capture). Without regulation, there would be thousands of healthy medium-size banks in the US, as there apparently used to be.

Overall, to characterize this as "deregulation" is completely sloppy thinking. We are never going to get a better situation when people think sloppily like this. To do so is, in practice, a moral crime. It supports maintaining the status quo through confusion.

Automating Formal Proofs for Reactive Systems ucsd.edu
6 points by jervisfm  1 hour ago   discuss
Drone.vc Request for Companies #1: Better Audio dronevc.tumblr.com
4 points by dweekly  1 hour ago   1 comment top
Butchering HQX pixel art scaling filters pkh.me
71 points by ux  12 hours ago   10 comments top 2
byuu 11 hours ago 3 replies      
> Unfortunately, the reference code is more than 12.000 lines of unrolled C code. This code is basically duplicated without a thought in every single project.

Not by me. Overly bulky code is one of my pet peeves[1]. Here is my implementation of HQ2x in 6KB, 188 lines of code (not including the header file, which is 1KB and used for external linking.)


I devised a few observations to reduce the code. First is that the 256 patterns have four cases for each corner. If you rotate the eight surrounding pixels twice, the case blending rules match another corner (sans two cases which appear to be typos in the original algorithm; I asked the author, and he apparently made the entire unrolled table by hand). Now that the algorithm is cut down by 75%, put the switch statement into a lookup table.

To speed up the algorithm (which further reduces its size), I use an old 16<>32-bit expand/compact trick, a YUV lookup table, and a really clever mask compare that blargg came up with. At this point, my version is already significantly faster than the original algorithm. But I also added in OpenMP parallelization support, which really makes things run fast on multicore systems.
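For readers unfamiliar with that expand/compact trick, here is a minimal sketch of the general technique (my own illustration, not byuu's actual code): spread an RGB565 pixel into a 32-bit word so the three channels have guard bits between them, blend all three channels with a single multiply-add, then fold the word back down.

    MASK = 0x07E0F81F  # R and B stay in the low half; G moves to the high half

    def expand(p):                    # 16-bit RGB565 -> gapped 32-bit word
        return (p | (p << 16)) & MASK

    def compact(e):                   # gapped 32-bit word -> 16-bit RGB565
        return (e | (e >> 16)) & 0xFFFF

    def blend(p1, p2, w1, w2):        # weights must sum to 8 (3 guard bits)
        e = (expand(p1) * w1 + expand(p2) * w2) >> 3
        return compact(e & MASK)

    print(hex(blend(0xF800, 0x001F, 4, 4)))  # 50/50 mix of pure red and blue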

But anyway ... this guy clearly bested me. 560 lines with HQ3x and HQ4x included is even more impressive. Hats off to him!

([1] I also have inflate in 8kb of code, unzip in (inflate+2KB) of code, PNG decompression in (inflate+8kb) of code, SHA256 hashing in 4KB of code, an SMTP client with authentication+MIME attachments+HTML in 8KB of code, an XML parser (sans DTDs) in 6KB of code, etc.)

madlag 8 hours ago 0 replies      
Nice stuff! I am wondering how we could use this to improve some of our own projects...
At 20 Amazon is bulking up. It is not yet slowing down economist.com
56 points by rahimnathwani  11 hours ago   16 comments top 3
mathattack 4 hours ago 1 reply      
I'm amazed by Amazon. Some of their moves follow a playbook. Books -> Clothes -> Diapers. A non-technical CEO turning the company into an API machine that outsources its own technology... Every textbook would say crazy, but he's on to something. Makes one wonder about the drones.
jessaustin 5 hours ago 1 reply      
Interesting: how little was said about AWS.
Zombieball 8 hours ago 2 replies      
I think I know what you are trying to say, but my understanding is Amazon fails to report profits because they are heavily reinvesting into the company. I don't think it is because they are taking a loss on sales.
Show HN: An open distributed search engine for science juretriglav.si
44 points by juretriglav  10 hours ago   7 comments top 2
Blahah 7 hours ago 2 replies      
Jure, your projects never cease to impress me. Really looking forward to talking in depth at OKfest. This idea is so close to what we've been doing that it's a real shame we didn't talk earlier, but the parts of what you're doing that are unique are also truly awesome.

At ContentMine we're doing something totally complementary to this. Some of the tools will overlap and we should be sharing what we're doing. For example, I've been working on a standardised declarative JSON-XPath scraper definition format and a subset of it for academic journal scraping. I've been building a library of ScraperJSON definitions for academic publisher sites, and I've converged on some formats that work for a majority of publishers with no modification (because they silently follow undocumented standards like the HighWire metadata). We've got a growing community of volunteers who will keep the definitions up to date for hundreds or thousands of journals. If you also use our scraper definitions for your metadata you'll get all the publishers for free.
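For a flavor of what such a declarative scraper definition might look like, here is a hypothetical sketch in Python (the field names are my invention, not necessarily ScraperJSON's actual schema; requires lxml):

    import json
    from lxml import html

    # Hypothetical definition: extract HighWire-style article metadata via XPath.
    definition = json.loads("""
    {
      "elements": {
        "title":   {"selector": "//meta[@name='citation_title']/@content"},
        "authors": {"selector": "//meta[@name='citation_author']/@content"}
      }
    }
    """)

    def scrape(page_source, definition):
        tree = html.fromstring(page_source)
        return {name: tree.xpath(rule["selector"])
                for name, rule in definition["elements"].items()}

    page = "<html><head><meta name='citation_title' content='A Paper'></head></html>"
    print(scrape(page, definition))   # {'title': ['A Paper'], 'authors': []}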

Our goal initially is to scrape the entire literature (we have TOCs for 23,000 journals) as it is published every day. We then use natural language and image processing tools to extract uncopyrightable facts from the full texts, and republish those facts in open streams. For example we can capture all phylogenetic trees, reverse engineer the newick format from images, and submit them to the Tree Of Life. Or we can find all new mentions of endangered species and submit updates to the IUCN Red List. There's a ton of other interesting stuff downstream (e.g. automatic fraud detection, data streams for any conceivable subject of interest in the scientific literature).

I have a question. Why are you saying you'll never do full texts? You could index all CC-BY and better full texts completely legally, and this would greatly expand the literature search power.

yid 8 hours ago 2 replies      
It seems like nothing like this currently exists in a centralized, non-distributed way. Why add the complexity of a p2p network into an unproven concept? Is it purely to save on the cost of indexing and serving queries?

> Scraping Google is a bad idea, which is quite funny as Google itself is the mother of all scrapers, but I digress.

It's not really "funny"/ironic/etc -- Google put capital into scraping websites to build an index, and you're free to do the same, but you shouldn't expect Google to allow you to scrape their index for free.

EDIT: just saw this:

> Right now, PLOS, eLife, PeerJ and ScienceDirect are supported, so any paper you read from these publishers, while using the extension, will get indexed and added to the network automatically.

Yeah, they're not going to like that. You might want to consult a lawyer.

Convolutional Neural Networks in your browser github.com
63 points by abhikandoi2000  12 hours ago   8 comments top 2
karpathy 7 hours ago 2 replies      
Author here. ConvNetJS is a project I maintain on a side for fun. A part of my motivation was that I wanted to make these algorithms and techniques more approachable, and easier to understand, play with and apply. One issue right now is that I think I plunged into development and wrote a whole bunch of code and visualizations without fully supporting it with the necessary tutorials for a complete beginner. I'm hoping to change that over the next few weeks. Sometimes it's a little hard to juggle this with research, and all the things I'm actually supposed to be doing.

I'd be happy to answer questions or get feedback on the project!

nomnombunty 12 hours ago 1 reply      
Even after taking several machine learning classes and learning about neural networks several times, I still don't have a good intuition on how these networks work in practice. Being able to visualize how the learning algorithm evolves is super helpful. Awesome work karpathy!
Show HN: Spendy a simple money tracker (iOS app) dropboxusercontent.com
37 points by wingerlang  9 hours ago   37 comments top 18
brianwillis 3 hours ago 0 replies      
I've been using the YNAB[1] app for expense tracking, and I'm pretty happy with it.

It has a nice feature where it records the retailer's location using the iPhone's location services, so when you enter new transactions for a retailer you've been to before it can automatically guess the payee and category. After a week or two, the app had learnt the payees I used the most, so entering transactions became super fast.

The downside with YNAB is that you must sync it with a desktop version of their app. The iOS/Android apps are free, but only support a small subset of the desktop app's functionality.

With YNAB there's also a whole financial management system that they want you to buy into. It works well for me, but might not meet everyone's mental model of how personal finance should work.

[1]: http://www.ynab.com/

wingerlang 9 hours ago 3 replies      
So I've been working with iOS for over a year: some employed time, some freelance, and I've hung around a lot in the jb (jailbreak) community.

Despite that, I did not have one finished application of my own: lots of prototypes, but nothing in the App Store. So this is my first one. It is highly targeted at me, which you will notice if you read some of the text in the link, but I hope it is also something others will like.

Link to app in the App Store: https://itunes.apple.com/us/app/spendy/id872831308?mt=8

Anyway, I thought I'd just post it here because why not.

kylec 4 hours ago 0 replies      
I almost closed the tab when I saw the screenshots of the "other apps". They all look pretty terrible, and because I hadn't actually read anything on the page yet when I saw them, I assumed they were screenshots of your app. I would suggest that you lead with screenshots of your app, especially since it looks a lot nicer.
markdown 7 hours ago 2 replies      
I miss these Show HN project writeups. They're what drew me to HN in the first place, but these days I'm lucky to see one good one per week.

Excellent work, OP. It looks like you're a unicorn.. or could be one with a bit more design polish :)

hrrsn 2 hours ago 0 replies      
Great looking app. I would use this so much if there were two-way Dropbox sync.
philiphodgen 8 hours ago 1 reply      
Downloaded. Entered my first transaction. I like the fact that it attempts to do only one thing. Simple is good.
8ig8 7 hours ago 1 reply      
Excellent overview. I love these.

One feature I may have missed is reconciliation. That is, some way to review my manually entered transactions against statements provided by my bank.

jedmeier 1 hour ago 0 replies      
This looks great. Works well and is very simple to use. What about a way to take a photo of the receipt?
cstrat 5 hours ago 1 reply      
Nice work.

I have tried heaps of apps for this; 'cost' and 'spendee' are the ones I found to be the best so far. Will give yours a try soon.

jak1192 6 hours ago 1 reply      
Great app, I've been looking for a simple, bare bones budget tracking app.

I think you need to fix the flow for first-time users. If every first-time user must add a currency, why not make the first-time user screen the add-currency screen? Also, the add-currency screen should have suggestions. It wasn't initially clear that I needed to type 'USD' instead of 'dollar'. It also wasn't clear that the currency needed to be in capital letters. I had usd in lowercase and clicking the check mark did nothing. There should at least be feedback.
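
The fix could be as small as normalizing and validating the input; a hypothetical Python sketch of what the add-currency screen might do (the whitelist is truncated for illustration):

    VALID_CODES = {"USD", "EUR", "GBP", "JPY", "SEK", "AUD"}  # subset of ISO 4217

    def normalize_currency(user_input):
        code = user_input.strip().upper()  # accept "usd" as well as "USD"
        if code in VALID_CODES:
            return code
        # surface feedback instead of a silently dead check mark
        raise ValueError("unknown currency code: %r" % user_input)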

The app is great once you get past the first screen. Nice job.

johnpowell 5 hours ago 0 replies      
This is great and might just replace my trusty Moleskine.
ragsagar 2 hours ago 2 replies      
I was looking for a similar app for android. Is there one?
elitrium 8 hours ago 1 reply      
One of the first things I did after installing was turn off the notifications and badge icon in iOS settings.

It doesn't look like this can be changed within the app; it only provides the option of changing the content that's displayed. This was an initial turn-off for me, but other than that I'm really happy with this!

larrywallace 1 hour ago 0 replies      
This app looks great, but the only negative for me is that it does not offer the ability to take a camera snapshot of a real receipt. Some other apps have this receipt snapshot feature, but not the simplicity of your app. For me this snapshot feature alone would complete the perfect app in this category, and for this reason alone I am holding off on the purchase.
aaronm14 8 hours ago 1 reply      
Thanks for the write up, I enjoyed the explanation of even the little details throughout the app.
j-rom 7 hours ago 1 reply      
This looks pretty cool. One question though: Is the "Tap again to confirm" button active before the animation ends? For example, what if the user accidentally double taps the delete button?
vmiroshnikov 7 hours ago 1 reply      
Take a look at http://coinkeeper.me/
abhididdigi 6 hours ago 1 reply      
Is there an API exposed? BTW, I'm loving this app! Thanks!
Missing E-Mail Is the Least of the IRS's Problems bloombergview.com
54 points by luu  11 hours ago   41 comments top 10
patio11 8 hours ago 3 replies      
I buy widespread incompetence in government IT management in general and the IRS in particular. (Not commenting on the political valence of this, in observance of the HN politics rule.)

Last year, after requesting an ITIN for my wife (who is not a US citizen and thus requires an SSN-esque replacement to be able to file joint taxes), we received two letters from the IRS. One had the ITIN. The other denied our request for the ITIN on the basis that "We have already issued her an ITIN." Apparently, the IRS explains, that second letter should have immediately set my accountant and me to doing forensic debugging of their protocols, because it means that something "seriously wrong" happened to our returns.

What? Glad you asked. See, the IRS had lost my return. "What?" The paper was "in the building somewhere" -- we had gotten a receipt -- but they were unaware of what desk it was at. (Their first hypothesis was "You failed to file", and they threatened penalties for that, until being confronted with a Post Office return receipt. Which is, by the by, why you should always get a receipt.)

My accountant took over yelling at them for a while to find the return, and they eventually did, and -- miracle of miracles -- they typed it into the computer. Twice. Thus generating two separate and equal returns going through non-idempotent processes, such as ITIN generation.

When those two returns met up in the reconciliation stage, they blocked each other from processing. No one at the IRS noticed this for approximately 7 months, until my third call to their CS line got someone to actually look at the file. She hit "delete" on the duplicate. (I really hope she was simplifying that for me, because it scares me if they can actually delete anything.) Return processed almost immediately, refund check cut 48 hours later.
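
The standard guard against exactly this failure is an idempotency key, so that typing the same return in twice cannot trigger the downstream steps twice. A hypothetical Python sketch (not, of course, how the IRS's systems are actually built):

    processed = {}  # (taxpayer_id, tax_year) -> return record

    def issue_itin(record):
        # placeholder for the non-idempotent downstream step
        print("issuing ITIN for", record["spouse"])

    def ingest_return(taxpayer_id, tax_year, record):
        key = (taxpayer_id, tax_year)
        if key in processed:
            return processed[key]  # duplicate data entry: reuse, don't re-trigger
        processed[key] = record
        issue_itin(record)         # now runs at most once per (taxpayer, year)
        return record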

I almost feel sorry for them on being unprepared to unearth potential political malfeasance, because that is after all a distraction from the day to day administration of the Revenue Code, but processing returns is, as the saying goes, "their only job."

bane 4 hours ago 0 replies      
A few years ago I interviewed for a senior architect position on an IRS contract. To get the job I had to interview with the subcontractor who was filling the position, then with the prime contractor and finally with the IRS. You had to be accepted by all 3 layers, plus get through salary negotiations to get the position.

The principal job was to build a fraud investigation system by integrating a COTS analysis tool into the IRS systems to automatically generate cases for fraud investigation officers to review. One hitch, as I found out after getting through the gauntlet of interviews: the COTS product that they wanted to use wasn't built to support this kind of integration, and the vendor wasn't interested in forking off a special build just for the IRS.

So I asked them a simple question "knowing what I know about this product, and the fact that it can't be integrated as desired, it seems that this is an impossible task, as the senior architect, I'd want to be clear that I have powers to restart the selection process of the tools and systems so I can build a solution that would actually perform as required, would I have this authority?"

At this point, the senior manager from the prime and the PM from the government side got very agitated. You see, there were only two months to get a basic system functional, and as a result the selection and purchase process had already been completed.

"Without anybody leading the process?"

This was apparently the wrong question, as I had hit some sort of embarrassing point I shouldn't have dug into. They pretty much just wanted somebody in the role to rubber-stamp the crap decisions they had already made. They became very defensive, and there were some raised voices. I told them I wasn't interested in that kind of position and walked out.

2 years later I found out that they had scrapped the program completely after spending goodness knows how much money and were restarting the entire thing from scratch.

My gut feeling is that this new program too will fail since any working system would likely detect the fraud in their IT acquisition and management.

autokad 5 hours ago 1 reply      
I worked for a company that sold EMC products. One of those was an email product that made sure companies captured ALL emails for ETERNITY. These were mainly banks, where Sarbanes-Oxley required banks to keep them (always and forever). Emails from, say, Exchange would be copied before they went to the inbox, stored on disk and also immediately stored to optical media, which would also automatically be copied. I'm not getting into the other parts that prevented emails from being lost.

I haven't worked at that job since 2007. I find it unbelievable that any major government organization such as the IRS does not make sure ALL emails are never lost.

edit: the fact that they lost emails is a huge scandal in its own right IMO

x0054 1 hour ago 0 replies      
Doesn't the federal government already have an entire organization dedicated to preserving and archiving every email sent by government employees, as well as members of the public? The NSA?
nospecinterests 8 hours ago 1 reply      
None (Edit: I'm talking about digital data retention) of this even matters when you take into account the fact that it is IRS policy to print out each and every e-mail sent and received by their employees. One would assume that this is required to maintain a permanent record of all communications for the Federal Archives and for legal matters that arise for confidential taxpayer cases.

"The Treasury Departments current email policy requires emails and attachments that meet the definition of a federal record be added to the organizations files by printing them (including the essential transmission data) and filing them with related paper records."[0]

[0] http://www.irs.gov/irm/part1/irm_01-010-003.html

ChuckMcM 7 hours ago 1 reply      
I find it interesting that she was archiving on her local hard drive in 2011, especially given the stories of things people found on hard drives bought at scrap auctions. A better question might be "is it better now?" I'm guessing not.
seacious 7 hours ago 0 replies      
"Such policies indicate either an agency that is not concerned with preserving good audit chains or one that has an extremely penny-wise, pound-foolish approach to IT policy."

I think AND is more appropriate than OR here.

hamiltonkibbe 6 hours ago 2 replies      
What happened to the hard drives of everyone she sent emails to? Certainly parts of the email chains in question are somewhere on the hard drives of the people she corresponded with.
sliverstorm 8 hours ago 1 reply      
> The IRS's policies on e-mail storage were primitive even by the standards of 15 years ago

What? I'm almost certain the hard drive I had in 2000 was less than 10GB. At 500MB per email user, that's 20 users per entire disk...

mathattack 9 hours ago 1 reply      
It's a moronic policy, but it is the IRS.
The Early History of Smalltalk (1993) worrydream.com
34 points by throwaway344  9 hours ago   7 comments top 4
rdtsc 2 hours ago 1 reply      
> The mental image was one of separate computers sending requests to other computers that had to be accepted and understood by the receivers before anything could happen. In today's terms every object would be a server offering services whose deployment and discretion depended entirely on the server's notion of relationship with the servee.

1993 called and wants its micro-services. That is great. At least it seems like he is describing Go's channels, Rust's tasks, and most of all Erlang's processes.

It is interesting; perhaps it is a reflection of how abstract some of these concepts are that anything can be read into them. But it seems that the initial design and motivation behind OO have been perverted by C++ and Java.

I started with C++ and Java in college. To me, OO was inheritance, composition, polymorphism, and so on. Years later, when distributed and scalable computing is talked about, other languages and platforms seem more OO than the classic OO ones.

And finally one more excerpt:



  1. Everything is an object
  2. Objects communicate by sending and receiving messages (in terms of objects)
     Objects have their own memory (in terms of objects)
  3. Every object is an instance of a class (which must be an object)
     The class holds the shared behavior for its instances (in the form of objects in a program list)

Pretty funny. Replace 'object' with 'process' and you have Erlang, the last language you'd call OO: 1) everything is a process; 2) processes communicate by sending messages to each other, and each process has an isolated heap (its own memory); 3) functions in modules hold the shared behavior of the many possible process instances spawned from them.
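
The parallel is easy to sketch: a minimal actor in Python, with a private state dict standing in for the isolated heap and a queue standing in for the mailbox (illustrative only, not Erlang semantics in full):

    import queue, threading, time

    class Actor:
        def __init__(self, behavior):
            self.mailbox = queue.Queue()
            self.state = {}  # private, per-instance "heap"
            threading.Thread(target=self._loop, args=(behavior,), daemon=True).start()

        def _loop(self, behavior):
            while True:
                behavior(self.state, self.mailbox.get())  # one message at a time

        def send(self, msg):
            self.mailbox.put(msg)

    def counter(state, msg):
        # shared behavior for every instance spawned from it
        state["n"] = state.get("n", 0) + msg
        print("count:", state["n"])

    a = Actor(counter)
    a.send(1); a.send(2)
    time.sleep(0.1)  # let the daemon thread drain the mailbox before exit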

pholbrook 5 hours ago 0 replies      
Glad to see someone posted this again. This is without a doubt one of the best papers I've ever read. Drives home that Alan Kay is truly brilliant.
Glench 6 hours ago 1 reply      
Here's a better one, with a github for corrections: http://worrydream.com/EarlyHistoryOfSmalltalk/
quink 8 hours ago 1 reply      
I want to enjoy reading this as much as it deserves, but there are so many typos ;_;

Nevertheless, I shall persist and so should you.

New New Fatherhood in the Inner City thedailybeast.com
27 points by gwern  5 hours ago   1 comment top
rdtsc 3 hours ago 0 replies      
> motherhood provided the crucial, emotionally satisfying transition into adult life.

I know someone who worked at a treatment center for disadvantaged inner-city girls (drug treatment, mostly court-mandated). From what I understood, motherhood is a status symbol and a rite of passage; a way to earn respect. All of a sudden people pay more attention to you. It also affords a pass into a club of other unwed mothers, many older ones, who might have served as role models growing up. Like the observation said, it is seen as the next step in life.

On a deeper level, I think it also provides companionship and family where there is none. Deadbeat dads or moms, girlfriends and hookups who are abusive, come in and out, but this one little person will be there looking up to them, never going anywhere, providing the love and attachment that they never got much of. That is at least my interpretation of it. It is unfortunate because in most cases these children and parents will have a hard time. It is very selfish to bring children into the world just to be used as a status symbol or as someone to provide companionship when there are just no resources to raise them safely.

On an even deeper, perhaps unconscious, level, maybe having children can be seen as giving up on accomplishing more in life and instead choosing to procreate, hoping the offspring might have a better shot at it.

> The good father is somebody like your friend.

I can see how that would be an attempt to reverse or mend their own experience with their fathers growing up. Their father wasn't there. Their father wasn't their "friend". Their father used to beat them and be harsh. So they vow to be the opposite.

The one hope in this is that it would also reverse some of the stereotypes about men. Men are the default guilty party in family disputes. They are the stereotypical predator and abuser, while women are given great leeway, and only with concrete and absolute evidence will they be considered unfit to take care of the child. This mentality has permeated the court system, the school system, and the culture in general. Hopefully this leads at least to a re-evaluation of those stereotypes.

Tweet at this bot to make games using emojis for scripting sparklinlabs.com
18 points by elisee  7 hours ago   6 comments top 2
elisee 7 hours ago 1 reply      
I tried submitting this bot I made yesterday (https://news.ycombinator.com/item?id=7924323) but didn't realize it was the middle of the night in the US. Trying again in the hopes that it will catch some more attention.
teamonkey 3 hours ago 1 reply      
Fantastic stuff. Can you give some more info on how it was created, what libraries it uses etc.?
Accuracy of three major weather forecasting services randalolson.com
45 points by rhiever  11 hours ago   19 comments top 9
joelthelion 9 hours ago 4 replies      
This curve is not enough to evaluate the value of a weather forecasting service: if it rains, say, 30% of the days in a specific area, you could forecast a 30% chance of rain every day and have good "accuracy". And yet that would be of no practical value.

I think a better metric would probably be something from information theory like mutual information, but I'm not sure which one exactly.
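
For a sense of what that metric would report, here is a small Python sketch: the mutual information between a discrete forecast and the rain outcome, which is exactly zero for the constant "30% every day" forecast (illustrative toy data, not the article's):

    import math
    from collections import Counter

    def mutual_information(forecasts, outcomes):
        # forecasts: discrete labels (e.g. "30%"); outcomes: 0/1 "did it rain"
        n = len(forecasts)
        joint = Counter(zip(forecasts, outcomes))
        pf, po = Counter(forecasts), Counter(outcomes)
        return sum((c / n) * math.log2((c / n) / ((pf[f] / n) * (po[o] / n)))
                   for (f, o), c in joint.items())

    # the constant forecast carries no information about the outcome:
    print(mutual_information(["30%"] * 10, [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]))  # 0.0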

emkemp 7 hours ago 0 replies      
The plot in the article is an example of a "reliability diagram" frequently used in weather forecast verification. See, e.g., http://www.bom.gov.au/wmo/lrfvs/reliability.shtml. Reliability is considered separate from accuracy in meteorology -- the former evaluates success conditioned on what was forecasted, while the latter is an unconditional evaluation of success or failure.

Other facets of forecast "goodness" exist and are often considered in meteorology. A seminal paper on the subject was penned by Allan Murphy, who identified three types of "goodness" (consistency, quality, and value) and ten subsets of quality (including reliability and accuracy). See http://www.glerl.noaa.gov/seagrant/ClimateChangeWhiteboard/R.... [PDF warning]

A popular companion to the reliability diagram is the Relative Operating Characteristics (ROC) curve. Here different forecast probability thresholds are tested to calculate likelihood of success if the event occurred, and likelihood of error if the event did not occur. This evaluates what Murphy calls discrimination (forecast quality conditioned what was observed) which complements reliability. See, e.g., http://www.bom.gov.au/wmo/lrfvs/roc.shtml.
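
The ROC calculation itself is simple enough to sketch: for each probability threshold you count hits and false alarms (a toy Python version, assuming the sample contains both events and non-events):

    import numpy as np

    def roc_points(probs, observed, thresholds=np.linspace(0.0, 1.0, 11)):
        # probs: forecast probabilities; observed: 0/1 event occurrence
        probs, observed = np.asarray(probs), np.asarray(observed)
        pts = []
        for t in thresholds:
            warn = probs >= t
            hits = np.sum(warn & (observed == 1))
            misses = np.sum(~warn & (observed == 1))
            false_alarms = np.sum(warn & (observed == 0))
            correct_negs = np.sum(~warn & (observed == 0))
            pod = hits / (hits + misses)                         # probability of detection
            pofd = false_alarms / (false_alarms + correct_negs)  # prob. of false detection
            pts.append((pofd, pod))
        return pts  # plotting POFD (x) against POD (y) traces the ROC curve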

Curiously, accuracy tends to take a back seat in forecast verification to other aspects of quality, particularly in rare-event situations. This trend began in the mid-1880s with the "Finley Affair", a series of published articles debating how to evaluate tornado forecasts issued by the US Army Signal Corps. Murphy published a fascinating literature review on the subject and showed that many of the skill scores and debates born during the Finley Affair are still active today. See http://www.nssl.noaa.gov/users/brooks/public_html/feda/paper.... [PDF warning]

kator 7 hours ago 1 reply      
I have enjoyed http://darkskyapp.com/

In my totally unscientific opinion, weather.com has slowly gotten worse over the past several years. I think it's become a lot harder to monetize weather, and the weather service has really stepped up its game on providing public outlets that are digestible by the general public.

My guess is that in the early days what weather.com and The Weather Channel were really doing was translating the difficult-to-understand NWS messages and helping the general public quickly answer: "Do I need a raincoat today?". That said, over time, as the NWS has stepped up its public interfaces, that value-add is sliding backwards and getting harder to maintain.

That's my $0.02CPM worth.. :-)

simmons 9 hours ago 1 reply      
For a while, I've been thinking about doing exactly this sort of analysis. Thanks for putting in the effort!

Does anyone have experience getting a feed of the raw NWS forecast data for many points in a large region (e.g. a state or the whole country)? I was thinking the other day that it would be great to have a web site that showed the forecasted chance of precipitation across a region, to answer questions like "Where in the Colorado high country should I go camping this weekend?"

bcl 6 hours ago 0 replies      
If you're interested in the science (and some of the problems we have in the US) read UW Professor Cliff Mass' blog - http://cliffmass.blogspot.com/
jloughry 10 hours ago 2 replies      
It would be interesting to repeat the comparison for many specific locations, then plot the measured variance geographically as a "heat map" or landscape of error [1]. Are there patterns visible that could be attributed to local geography, population density, or other factors?

[1] It could be done, for weather.gov [2], using only data available from the web site [3].

[2] I don't really care about the other forecast sources.

[3] To trust rainfall observations obtained from weather.gov in order to compare them to predictions made by weather.gov seems vaguely wrong but there is no other comparable source of observations. They are physical measurements, after all.

[4] Some geographic areas probably have a coarser net of observation points. In some places, e.g., San Diego, the weather is inherently easier to predict. Some local forecast offices may be more skilled than others.

Theodores 9 hours ago 2 replies      
This is a quite cynical take on how weather forecasting works, written by someone who quite clearly does not know one single weather forecaster.

First of all, there are only two agencies on the planet that do the number crunching to work out reasonable forecast data that encompasses the whole globe: the NWS and the UK Met Office. As well as having a lot of big computers, these agencies also need source data. This data - observations - comes from airports and plenty of other places where things like wind speed, precipitation, temperature and so on are actually measured. At times the observations are wrong - imagine the baking tarmac of that big airport and how that differs from the tranquil yet noisy houses close to a nearby river.

The NWS differs from the Met Office in that they don't charge for the GRIB data. The taxpayer has paid for it already in the USA, so they don't have to pay for it again. Hence the proliferation of things like The Weather Channel that use NWS rather than Met Office data.

One thing that outsiders to weather forecasting do not realise is what it is that weather forecasters actually do. They imagine them to be very scientific - which they are - but they don't realise that they are essentially in the 'betting shop' business. To take an automotive example, if you had perfect knowledge of every car that is entering tomorrow's F1 race and you had perfect knowledge of the well-being of every single driver, mechanic and tea lady involved in the event, can you actually predict which of the 22 drivers is going to win? Will it be the guy on pole? The guy who has won most of the races so far? The guy who consistently comes second? Or some random outsider?

The GRIB data is far from perfect knowledge, it is a forecast of what is going to happen and the accuracy depends on the time window going into the future. The data is fully 3 dimensional, think of it as lots of onion layers going around the whole planet. Data points are on a grid - what happens if your town is next to some huge mountain with 'your' data point on that grid being several thousand feet higher than where your town is? The GRIB data for your town is not actually for your town, it is for the mountain. A meteorologist will have rules of thumb plus the science to arrive at a more accurate guess than the GRIB gives - this is interpretation of the data, not some sixth sense, however, it is still nonetheless a gamble/guess.
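
For illustration, sampling such a grid at a town's coordinates is usually a bilinear interpolation of the four surrounding grid points; a toy Python sketch (and, per the mountain example above, the interpolated value can still be for the wrong elevation):

    import numpy as np

    def sample_grid(field, lat0, lon0, dlat, dlon, lat, lon):
        # bilinear interpolation of one regular lat/lon layer at (lat, lon)
        i, j = (lat - lat0) / dlat, (lon - lon0) / dlon
        i0, j0 = int(np.floor(i)), int(np.floor(j))
        fi, fj = i - i0, j - j0
        return ((1 - fi) * (1 - fj) * field[i0, j0]
                + (1 - fi) * fj * field[i0, j0 + 1]
                + fi * (1 - fj) * field[i0 + 1, j0]
                + fi * fj * field[i0 + 1, j0 + 1])

    layer = np.random.rand(10, 10)  # stand-in for one "onion layer" of GRIB data
    print(sample_grid(layer, 40.0, -125.0, 0.5, 0.5, 42.3, -123.1))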

As well as the GRIB data there are things like satellite images - from lots of different flavours of satellite - plus there is radar data. This can all be layered up on top of GRIB data and pretty maps to create an interpreted forecast. The 'wet bias' is more likely to be a rookie meteorology mistake than a devious ploy to get viewers watching. Look at any satellite image and see the low-level haze from things like jet plane 'contrails' plus coastal fog etc. There is an awful lot of it on satellite images and it is very easy to end up permanently predicting rain from seeing such cloudy greyness. Hence this is more likely to happen at the local weather station. On the Weather Channel, where they have excellent interpretation tools for their forecasters, this is less likely to happen - not so much because of the tools but because of the forecasters: they are more experienced gamblers.

The other thing to remember with weather forecasting is that today's predictions can be checked against tomorrow's observations. Things can be consistently wrong for a given town/area due to the way the GRIB data works (i.e. it does not factor in local topography), and it can take a while before this error in the model is discovered and fixed. There may not be observation data available for smaller towns, so some errors might never be fixed.

The weather prediction industry is fairly ripe for disruption. The tools that meteorologists use used to require big workstations to run; nowadays a Google Earth type of app would suffice, if someone could be bothered to write it.

Amongst themselves, meteorologists know a lot more about the current factors influencing the big picture of the weather. For instance, the storms that start off on the west coast of Africa, cross the Atlantic and 'bounce back' to the UK, losing energy on the way to end up as mere rain. Clearly such weather patterns take weeks to do their thing; however, for a gardener in the UK it would be good to know if rain was on its way over the next few weeks. Yet the demands of the forecasting format mean that the forecaster has to tie that down to 'rain expected teatime next Tuesday' (or whenever). Returning to the 'app' idea, it would be great for everyone if they could explore the raw data and have these bigger events pointed out by an expert, so that the raw data can be interpreted in a meaningful way. Instead we have banal 'insights' such as this article (which probably did not intend to be banal or naive, but that is the way things sometimes happen despite trying hard).

samirmenon 5 hours ago 0 replies      
I'd love to see this done with temperatures, other kinds of precipitation, etc. I think I'll have to make it a weekend project...
jloughry 11 hours ago 0 replies      
Anecdotally, I can corroborate the observation: weather.gov slightly underpredicts rain; a "30% chance" is associated with rain more than half the time.
A Whirlwind Tour of ARM Assembly coranac.com
21 points by ANTSANTS  8 hours ago   2 comments top
danellis 4 hours ago 1 reply      
This seems outdated. No mention of Thumb 2 and unified assembly, for example.
ProductHunt is the Hot New Destination for Sourcing Startup Investments mattermark.com
31 points by dmor  10 hours ago   17 comments top 9
jlees 1 hour ago 0 replies      
Hmm, I was expecting the end of the article to be an analysis of which investors are most active among ProductHunted-then-funded companies, not a list of the companies themselves. Unless I'm missing something, that's the most interesting piece of data in the whole analysis: effectively, "who reads and pays attention to PH?".
willu 6 hours ago 1 reply      
I am a fan of ProductHunt and look forward to their emails. BUT I do have to gripe a bit about the current very limited, "exclusive" commenting system. I want to get real insights from people who have used the product/service being featured, and instead it's usually just the guy who runs ProductHunt plus maybe one other friend or colleague who takes a couple of minutes to poke around, providing pretty shallow feedback... and it's very rarely critical. Then an investor in the featured site/product chimes in about how awesome the team is. Not a lot of value there. Both consumers and founders would benefit from more openness.
staunch 6 hours ago 1 reply      
I've checked the site a few times. The sidebar popup comments thing drives me nuts and I leave. People make fun of HN for being so simple, but too clever is far worse.
return0 6 hours ago 0 replies      
Sounds like an echo chamber
pbreit 6 hours ago 1 reply      
What does "sourced" mean exactly? For example, MoveLoot was in YC W14 so hard to think it was sourced through PH.
kirillzubovsky 6 hours ago 1 reply      
Let's be honest though, saying "potentially sourced" is really saying nothing at all. </grumpy grandpa>
alixaxel 5 hours ago 0 replies      
I actually didn't know about PH till they picked up on my HN submission (namegrep.com). I must say that I really like their daily links; some great stuff there - for instance, how awesome is this http://theorangechef.com/? :O

I do have to agree with @willu tho, their commenting system is way too elitist. =(

7Figures2Commas 7 hours ago 1 reply      
> ProductHunt Is Quickly Becoming the Hot New Destination for Sourcing Startup Investment Opportunities

> ...Product Hunt is an excellent sourcing tools for VCs looking to discover little known early stage startups.

If ProductHunt is "quickly becoming the hot new destination for sourcing startup investment opportunities" it cannot also be "an excellent sourcing tools [sic] for VCs looking to discover little known early stage startups." Lack of awareness of startups listed on ProductHunt is inversely correlated with ProductHunt's popularity.

sparkzilla 7 hours ago 0 replies      
Did they open it up from Beta yet?
Will Google Enter The Insurance Industry? techcrunch.com
4 points by bushido  2 hours ago   3 comments top 2
zaroth 6 minutes ago 0 replies      
If Google is selling insurance, it's only because they have created an AI compelling enough to fully automate the servicing of those policies.

If there's one thing Google will not stand for, it's highly trained customer support reps.

In that context, insurance is an interesting automation problem which is totally solvable.

theworst 37 minutes ago 1 reply      
Everybody needs insurance, in one form or another (at least in the US).

I wonder if a company like Google, which can possibly determine non-obvious risk factors, has a major advantage here.

E.g., imagine if people who searched for the term "dui attorney" were 40x more likely to be involved in a vehicular homicide as a defendant. Google could refuse to insure those searchers and, as such, significantly cut everyone else's premiums (giving them a major competitive advantage).

I have no special knowledge about the distribution of insurance payouts, but I would guess it follows a power-law distribution. If so, removing the top 10% of payout insurees could HUGELY decrease insurance payouts.
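
A quick Python simulation makes the point, under the (assumed, not sourced) premise that claim sizes are Pareto-distributed:

    import numpy as np

    np.random.seed(0)
    payouts = (np.random.pareto(1.5, 100000) + 1) * 1000.0  # assumed heavy tail
    cutoff = np.percentile(payouts, 90)
    top_share = payouts[payouts > cutoff].sum() / payouts.sum()
    print("top 10%% of claims = %.0f%% of total payouts" % (100 * top_share))

With a tail exponent of 1.5, the top decile accounts for well over half of total payouts, though real claim distributions vary by line of insurance.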

Does anyone know if the major costs to auto/home/health insurers are payouts?

Excluding healthcare, do people care if folks who exhibit risky behavior pay higher premiums? In the US, young men (actuarially proven to be higher risk) pay higher auto insurance, and everyone seems fine with it.

Finding Compiler Bugs by Removing Dead Code regehr.org
114 points by mehrdada  21 hours ago   18 comments top 3
petercooper 14 hours ago 1 reply      
I knew this reminded me of something, and it turns out it was one of his older posts which is also worth a read: http://blog.regehr.org/archives/970 Finding Undefined Behavior Bugs by Finding Dead Code
pedrocr 17 hours ago 2 replies      
It's an interesting method. Does anyone know if these kinds of torture tests get collected into a common library to help any future compilers or if they just result in a paper and bug reports to current ones?
michaelfeathers 15 hours ago 1 reply      
Has metamorphic testing been exploited for property-based testing?
Should Oregon fund college through equity? marginalrevolution.com
21 points by gwern  9 hours ago   24 comments top 11
shelf 23 minutes ago 0 replies      
This is pretty much how it works in my country (Australia). Those earning under 55k are not required to make repayments, and those who never earn that salary do not repay a cent.

The government of course loses money on this scheme, but the people would tear down any government that tried to revoke it. It's an important and fair middle ground between a national free ride and a USA-style 'sins of the father' system. Also note that the repayment amount is nowhere near the true cost of the degree, due to Commonwealth-supported places.

If you have time to work your way through school at current US tuition, you are probably taking courses below your level.

trothamel 7 hours ago 2 replies      
If after 20 years, a student (or the total body of students) has not paid off the cost of college, who pays the difference - the school or the taxpayers?

A program like this makes sense if the school has skin in the game. Ideally, the potential to lose its investment would make the school only extend this offer to students with a realistic chance of breaking even.

On the other hand, if the state extends this to all students without concern for ability to repay, it seems likely to be just another taxpayer subsidy for education - which may be good or bad, but it's not clear why a roundabout method like this is better than a direct tuition discount.

mathattack 4 hours ago 0 replies      
Twenty years is a long time and I fear the implied selection mechanism embedded in that time horizon. At the margin I would expect this to attract people who don't have a vivid mental image of the distant future. Furthermore the terms of the program discriminate against those who expect high earnings or for that matter those who expect to finish. In other words, the drop out rate of the marginal students here may be relatively high. And what are the payback terms for dropouts? Do they get off scot free? Pay proportionately for what they finished? Pay much much less to reflect their lower expected wages?

This captures my intuition. English and Sociology majors will choose this, not Math and Computer Science majors.

I had heard that Yale had experimented with a similar program. The problem is the distribution. It sounds great for the median student. The low end don't make enough to pay their way. The high end make a ton of money, and push back on paying. In Yale's case they may not have had all the forceful levers of the state. (And Yale wouldn't want to push too hard on the top 1%, alienating their best alums)

darklajid 7 hours ago 3 replies      
No. Instead, you use the budget that the huge amount of taxes leaves for education and let students study for free (or close to free).

Really, the headline makes me cringe, the idea that some people consider this a good idea makes me cry, and the domain name ("marginalrevolution") is a joke in this setup.

What a sad idea..

Edit: The state/the country already HAS equity (ignoring the bullshit, and the fact that I think this link only made HN because of this trigger word). It's called 'taxes'. If you flip burgers, the state gets little money. If you work for Google, FB, Apple, or MS (yeah, all not in Oregon, but stay with me), the state makes quite a nice sum, every year, for the ~40 years that you're supposed to work (OK, feel free to reduce that number).

I haven't seen so many bullshit alarms related to a HN story going off for quite some time. I would love to sit down and talk to someone who seriously considers this 'cool' and 'a nice idea' and try to understand how that is even possible.

joshu 2 hours ago 0 replies      
Does this map to offering different loan rates for different degrees?
zw123456 8 hours ago 0 replies      
This is a very interesting idea; I hope more states and universities try out different approaches. I got my first degree in 1978 and I did it by "working my way through school", which I think is no longer possible for most students, unfortunately. I wish there were a way of promoting that approach; I honestly think that I grew and learned in many ways through the combination of working and going to school. It is very sad that young people today cannot work and go to school and end up with a good education and a job at the end the way I did. I applaud the innovativeness of Oregon, but I do not think it is the best approach.
inanutshellus 4 hours ago 0 replies      
> Yet the European systems of higher education are generally worse than those in America

Wait, what? Can I get a source on that, bub?

hawkice 8 hours ago 0 replies      
The rising cost of education coupled with systems that work best with future-low-income-earners will likely push the smartest people out of schools.

I am concerned that making it all the way through a degree program has already started to indicate you aren't world-class, and things like this will only accelerate this trend.

angersock 7 hours ago 2 replies      
So, we're going back to indentured servitude by bits and pieces? Progress indeed.
judk 3 hours ago 0 replies      
This is not a revolutionary idea. Public college has traditionally been heavily subsidized by the state, and paid back by income and property taxes, aka an equity stake.
was_hellbanned 1 hour ago 1 reply      
Western Oregon University is around $3,000 per term for 15 credit hours. Let's call it $12,000 per year with books. OSU estimates resident tuition at $9,123 and $1,965 for "books & supplies".

I worked part-time during school and full-time during summer and put myself through college at Western, graduating with no debt. I lived with my parents and commuted to school. I don't see why this program is necessary.

Melting Yukon ices reveal 5,000-year-old archaeological treasures macleans.ca
42 points by curtis  13 hours ago   33 comments top 2
sounds 11 hours ago 2 replies      
More than anything I am ecstatic to hear that First Nations are playing a key role in the archaeological work. It's a great way to integrate cutting-edge technology while simultaneously encouraging teenagers to become interested in their heritage.
coldcode 10 hours ago 9 replies      
It's amazing how people who deny climate change have no good answer for why ancient glaciers keep melting. But this melting allows for great archeology.