Hacker News with inline top comments - Best stories, 27 Nov 2013
1
Basic Data Structures and Algorithms in the Linux Kernel stackexchange.com
656 points by jackhammer2022  3 days ago   59 comments top 16
1
incision 2 days ago 2 replies      
Very nice summary.

I encountered many of these while reading through Understanding The Linux Kernel [0] and The Linux Programming Interface [1].

Both are great books which are primarily about the "how" of the kernel, but cover a lot of the "why" of the design and algorithms as well.

0: http://www.amazon.com/dp/0596005652

1: http://www.amazon.com/dp/1593272200

2
bcjordan 2 days ago 3 replies      
A Coding for Interviews [1] group member mentioned that reading through the Java collections library [2] was the most valuable step he took while preparing for his Google interviews.

In addition to getting a better understanding of the standard data structures, hearing a candidate say "well the Java collections library uses this strategy..." is a strong positive signal.

[1]: http://codingforinterviews.com

[2]: He suggested reading the libraries here: http://www.docjar.com/html/api/java/util/HashMap.java.html
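
(For a sense of what reading that source turns up: Java's HashMap resolves collisions by chaining, with a default capacity of 16 and a 0.75 load factor that triggers resizing; since Java 8, long chains also become trees. Below is a minimal Python sketch of just the chaining strategy; the class name and simplifications are invented for illustration.)

    class ChainedHashMap:
        """Toy sketch of chaining as used by Java's HashMap: an array of
        buckets, each a list of (key, value) pairs, doubled in size when
        the load factor is exceeded. Treeification of long chains
        (Java 8+) is omitted for brevity."""

        def __init__(self, capacity=16, load_factor=0.75):
            self.buckets = [[] for _ in range(capacity)]
            self.count = 0
            self.load_factor = load_factor

        def put(self, key, value):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(bucket):
                if k == key:  # key already present: overwrite in place
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))
            self.count += 1
            if self.count > self.load_factor * len(self.buckets):
                self._resize()

        def get(self, key, default=None):
            for k, v in self.buckets[hash(key) % len(self.buckets)]:
                if k == key:
                    return v
            return default

        def _resize(self):
            """Double the bucket array and rehash every entry."""
            entries = [kv for bucket in self.buckets for kv in bucket]
            self.buckets = [[] for _ in range(2 * len(self.buckets))]
            self.count = 0
            for k, v in entries:
                self.put(k, v)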

3
netvarun 2 days ago 2 replies      
On a (slightly) related note, you should also check out the author Vijay's (http://www.eecs.berkeley.edu/~vijayd/#about) answer on the benefits of learning Finite Automata: http://cstheory.stackexchange.com/questions/14811/what-is-th...
4
eshvk 3 days ago 1 reply      
The stack exchange answer was amazing. You can't get a better raison d'être for studying algorithms.
5
joshguthrie 2 days ago 0 replies      
These are great resources. The best advice I was ever given when starting CS and learning C came from the headmaster (hi RR!) asking me: "What about Linus's linked lists? Have you looked at them?"

Up to that point, this (new) headmaster was seen by the students as "that non-tech guy here to administer the school", and yet he was opening my eyes to the biggest codebase residing on my own computer that I had never bothered looking through: the Linux kernel code.

As someone says further down in the comments, this is not a Linux-specific thing: looking at how Java HashMaps work or how Ruby implements "map" are great resources, and you'll always get bonus points in an interview for referencing algorithms from "proven" source code.

6
mrcactu5 2 days ago 0 replies      
I like reading these off Stack Exchange since I am often too lazy to read the textbook.

My other problem with algorithms textbooks is that I get into arguments with other developers about how much we need them. At least here, I can say "Look bucko, the Linux kernel itself uses them."

I decided we can do programming at the API level and never have to think about how that API gives us the right answer. Lower-level programming is responsible for optimization when our number of data points gets larger.

And we could go even lower level and ask why the algorithms work in the first place - which is the computer science aspect. I routinely deal with developers who feel they do not have time for this.

Also, if the data is small enough scale, we can brute-force it and nobody will notice.

7
almosnow 2 days ago 1 reply      
Amazing answer! Unfortunately, 'this is not a good fit for our Q&A format'.
8
avisk 1 day ago 0 replies      
Awesome answer. This is a treasure for anybody wanting to learn data structures & algorithms. I always felt bored reading data structures for their own sake, or for interviews with made-up examples. I am sure we can quote many other open source projects with interesting uses of these data structures. This is way more interesting than reading the source code of data structure libraries in programming languages.
9
alok-g 2 days ago 1 reply      
Wow! Does anyone know more about the author Vijay D.? Is this the person: http://www.eecs.berkeley.edu/~vijayd/
10
aceperry 2 days ago 0 replies      
Excellent, I love reading this stuff. Very helpful and informative for those of us who are interested in computer science but studied in a different field.
11
chintanp 3 days ago 3 replies      
My favorite algorithm has been the linked list implementation, pretty useful for implementing lists on embedded platforms.
12
topynate 2 days ago 2 replies      
Could someone explain what the utility of bubble sort is? I've read that even in cases where an O(n log n) sort is impractical, insertion or selection sort is preferred.
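
(One commonly cited answer: with an early-exit flag, bubble sort is adaptive - it finishes in O(n) on already-sorted input and only ever swaps adjacent elements, which suits tiny or nearly-sorted lists. A minimal Python sketch of that variant:)

    def bubble_sort(items):
        """Adaptive bubble sort: the `swapped` flag exits early, so an
        already-sorted list costs a single O(n) pass. Worst case stays
        O(n^2), which is why insertion sort is usually preferred anyway."""
        n = len(items)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
                    swapped = True
            if not swapped:  # no swaps on this pass: already sorted
                break
        return items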
13
timsally 3 days ago 2 replies      
Great material, but it's been taken directly from the source material (http://cstheory.stackexchange.com/questions/19759/core-algor...) with no added content. I imagine Vijay (the author of the source material) put a lot of work into assembling this information. Vijay's CS Theory answer should be used as the URL for this HN submission instead.

EDIT: Removed part of my comment, per the blog author's response below.

14
blahbl4hblahtoo 3 days ago 0 replies      
Wow. That's so cool.
15
jackhammer2022 3 days ago 2 replies      
More implementations listed at: http://cstheory.stackexchange.com/a/19773
16
ExpiredLink 2 days ago 0 replies      
I'd be interested in Basic Data Structures and Algorithms in C that are published under a non-viral license.
2
High Frequency Dating robrhinehart.com
652 points by Seldaek  5 days ago   182 comments top 37
1
benjaminwootton 5 days ago 5 replies      
I'm torn on this.

On the one hand high frequency dating is a good thing because it adds liquidity to the market.

On the other, it raises the risk of increased volatility and flash crashes (when your partner finds out).

2
timje1 5 days ago 4 replies      
This is simply brilliant. The escalation from a typical nerd's "I've optimised my social life" post to absurdity had me in stitches.
3
jcarpio 5 days ago 1 reply      
This is fantastic satire, especially since I've just read The Circle. Judging by the other comments here, a lot of us went along believing it was true. Until the robot.

And, why not? The pieces were believable: OpenCV, NLTK, some scripting and API twiddling. The virtual assistant wasn't much of a stretch either.

Especially if you're familiar with modern online dating sites. Thinking that online dating is still like browsing an organized list of potential dates, where an online host helps you with searching, is naive. Craigslist personals are still like that: stripped down, no profile, anonymous, and no algorithms.

OKCupid, like other dating sites, makes money via ad revenue, not by connecting you with a partner, so what's their priority? Who knows if your experience is affected by:

- how often you visit the site
- if you use an adblocker (they know, and they let you know they know)
- if you're on a free account
- message response rate
- if you use their features (quickmatch, etc.)
- how many questions you've answered (at a recent tech talk, co-founder Sam Yagan said answering more than 10 questions was pointless)
- your quantcast/cookie/tracker profile
- sentiment analysis of your profile/messages

Here's a fun anecdote: As a new user of their iPhone app, I was interested in using the Locals feature (to see who was available on short notice for a date). The first day it worked and let me see those in my vicinity. The next day it was completely removed from the app. No warning. Something must've decided that that feature wasn't for me (I was a new user).

This goes beyond dark design patterns, which attempt to influence your behavior (e.g. on another dating site, you have to pay to send messages, and attractive people send you collect messages that you have to pay to read). With dark design, if you're aware, you know what the site wants you to do. If your online dating success is controlled by black-box methods without feedback, they silently judge.

So, how soon before hackers decide they're tired of being gamed and start using tools they're familiar with defensively? Could this be the start of a new arms race?

4
chollida1 5 days ago 3 replies      
EDIT: I'll leave my post up, as two people were kind enough to point out that I was just flat-out wrong.

I had originally thought that the below post was a parody. I'm told it wasn't, though in my defense it definitely reads like a parody... I mean the perfect cutlery... the most meaningless item in anyone's house??

This reminds me of another parody post here a while ago about someone who said they'd bought the perfect cutlery.

They went a bit further and beat the joke to death talking about the difference between several cutlery sets. It was a bit better because it started out with some good points about optimizing your life and buying the best, and then it jumped into how to buy what is probably the least important thing in anyone's life... cutlery.

I think this "hacking your life" trend is starting to jump the shark :)

Here is the other parody post:

http://dcurt.is/the-best

5
wyclif 5 days ago 4 replies      
This is hilarious and as a bonus it induces the warm, smug feeling I get when reminded I'm thankfully out of the dating game and happily married to a beautiful, smart woman. Good luck, kids.
6
Nursie 5 days ago 2 replies      
I like it!

But I think we can take this further: surely she's into automation too? So he-bot and she-bot are the ones that actually get together.

But then why bother with the physical world if it's all software? The entire exchange can be virtualised and simulated at high speed, then you only need to actually bother the meatspace human if the whole thing has been electronically predetermined to be acceptable to all parties.

That way you can find the perfect match in seconds. Unless, of course, they were a little creative or devious in their parameter settings, but nobody would ever do that, right?

7
awjr 5 days ago 0 replies      
Beautiful article. Well played sir. I went from "this could work" to "huh" to "W T F" loving every single sweet paragraph. So much win. Thank you :)
8
loser777 5 days ago 2 replies      
This is a great read. The OpenCV part hit a bit too close to home though: I was stuck for a minute trying to think how he managed to segment faces well enough to compute ratios (not just as a rectangle or a blob) given all of the possible conditions/perspectives of the photos.
9
grogenaut 5 days ago 1 reply      
This article is a joke, but I have a mildly sociopathic friend who does the first section of this. He has an app that just replies to everyone on craigslist / dating sites with a standard greeting that he has statistically determined, over 5 years, to be the most successful. It also filters for terms he finds undesirable, and does one round of banter using a trained data set of responses. He says he gets around 30 actual profiles to look at and personally contacts the ones he's interested in.

Must work, guy goes on 2-3 dates a week.
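
(A minimal sketch of the kind of responder described above, assuming a simple keyword filter plus canned messages; the greeting, blocklist, and banter table are all invented placeholders, not the friend's actual data:)

    # All strings here are invented placeholders for illustration.
    CANNED_GREETING = "Hey, great profile! How's your week going?"
    BLOCKLIST = {"smoker", "drama"}  # terms the sender filters out

    BANTER = {  # one round of canned banter keyed on words in the reply
        "work": "Ha, same here. What do you do when you're not working?",
        "music": "Nice taste. Seen anyone good play live lately?",
    }
    DEFAULT_BANTER = "Ha! So what's your story?"

    def wants_contact(profile_text):
        """Skip any profile containing a filtered term."""
        return not (set(profile_text.lower().split()) & BLOCKLIST)

    def first_message(profile_text):
        return CANNED_GREETING if wants_contact(profile_text) else None

    def banter_reply(their_reply):
        for keyword, line in BANTER.items():
            if keyword in their_reply.lower():
                return line
        return DEFAULT_BANTER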

10
lotsofcows 5 days ago 1 reply      
Up to the fourth paragraph I was going to post something patronising about it being a great way to get a fuck buddy but a bad way to form a relationship.

However, having finished the post, I now think that a long term relationship leading to marriage and children would be possible. Some tweaking might be required. Ideally, a long and meaningful relationship could develop with 0 physical contact.

11
gadders 5 days ago 3 replies      
The first comment is sort of a buzzkill:

"I guess I see whats supposed to be funny here, Rob, but I dont think everyone will. As the man behind an awfully high-profile startup, I dont think this is likely to attract any beneficial attention to you, and may very well attract some negative attention. Even if this is meant in good fun, Im not sure its in your best interests."

12
tambourine_man 5 days ago 0 replies      
Not too far from the truth:

Amy Webb: How I hacked online dating

http://www.ted.com/talks/amy_webb_how_i_hacked_online_dating...

13
alcari 5 days ago 0 replies      
It seems to me that FABIO could be massively improved upon by offloading the computation to a remote (cloud?) server, allowing the date to continue until screams of pleasure are recorded.

Additionally, the robot self-destruct seems like overkill. It would be better to simply wipe them and start over. After all, it wasn't a hardware failure that resulted in a bad date, but a software problem!

14
Houshalter 5 days ago 1 reply      
This isn't too implausible. I remember a story about a guy who was fooled into dating a chatbot for 2 months.

http://www.radiolab.org/story/137466-clever-bots/

15
pdog 5 days ago 0 replies      
It took me until the Double Robot to realize this was a joke.
16
drpancake 5 days ago 0 replies      
I'm reminded of a story told by Tim Ferriss about outsourcing his dating for a bet:

http://www.youtube.com/watch?v=7eim8J0NIpQ

17
AsymetricCom 5 days ago 0 replies      
I love the thinly veiled threats from other startup hustlers in the comments. "Yeah Rob, you might suffer some difficulty for deflating our bubble a bit..." These people are pathetic excuses for humans. Maybe we can replace startup founders with a Perl script that uses a simple genetic algorithm to find the combination of cloud technologies that gets investors to part with their money at the highest ratio.
18
batiste 5 days ago 0 replies      
This doesn't use MongoDB, therefore this is not web scale.
19
anon4 5 days ago 2 replies      
I think this could work well as a startup. You sign up and create a profile. Then the system matches you up with as many other people as it can and runs several simulated dates based on your profiles. After 3 successful simulated dates, you are both booked a room and given a transcript of your conversation thus far, plus a list of fetishes.
21
napolux 5 days ago 1 reply      
LOL! I only realized at the robot part that this was fiction :P
22
vdaniuk 5 days ago 0 replies      
First they laugh at the high frequency dating, then they fight it, then it wins.
23
coldtea 5 days ago 0 replies      
I've tried high frequency dating once, but had to stop when all of my glassware broke.
24
digitalzombie 5 days ago 3 replies      
... Oh it's a joke.

I actually know a few programmers, who are also pick-up artists, that do something similar but less complex... they write scripts that spam messages to girls on dating sites and just take a shotgun approach.

I think I've found my next project...

25
alexfarran 5 days ago 0 replies      
Tim "4 hour workweek" Ferris actually did something similar using virtual assistants. http://blog.timferriss.com/1/post/2009/07/how-to-tim-ferriss...
26
cLeEOGPw 5 days ago 3 replies      
An automated bot actually makes sense for a first, or maybe even second, message. Things would become even more interesting if girls wrote their own bots too. Someone should build an API for that.
27
Houshalter 5 days ago 0 replies      
28
patrickmclaren 5 days ago 0 replies      
I would be left feeling quite sorry and embarrassed for the partner in the case that they were actually a warm body. They would essentially be interviewing to play the submissive within a hegemony.
29
ninasaysso 5 days ago 1 reply      
This made me sad, mostly because the base variable is facial attractiveness. Gotta love living in a culture so saturated in image worship that dating sites have nearly boiled off text bios entirely. Have fun chatting with people you have next to nothing in common with!
30
jff 5 days ago 0 replies      
It may not have been intended, but it came off as a damn fine parody of the idiot "pick up artists".
31
yohann305 5 days ago 0 replies      
After reading the first 2 paragraphs, I started looking for a "download" button to get the source code! You got me!
32
danmaz74 5 days ago 0 replies      
Then, at some point, the female starts sending her robot too...
33
ph0rque 5 days ago 0 replies      
Now, the only improvement left is to set up the same system from the perspective of the "female", and have two robots go on a date, etc.
34
topbanana 5 days ago 0 replies      
A friend of mine set up a micro to repeatedly click on the thumbs up - and I thought that was bad!
35
cookingrobot 5 days ago 0 replies      
Ironically, the Tinder app is already completely overrun with chatbots.
36
kimonos 5 days ago 0 replies      
This is an awesome idea!... But I guess this type of dating has its advantages and disadvantages, just saying... (",)
37
queryly 5 days ago 0 replies      
Who will be regulating it? Government?
3
How Hacker News ranking really works: scoring, controversy, and penalties righto.com
613 points by jseip  18 hours ago   143 comments top 31
1
swombat 15 hours ago 7 replies      
Hilarious that the original article was flagged off the front page, but this one isn't...

I find it very disheartening that the negative voices are being given so much weight. Everything that's worth doing will have detractors, and when it's something really worth doing it will have vocal detractors. Back when I had comments on my blog, every article I wrote that was any good had at least one person commenting that I was a moron or some equivalent statement.

Great things arouse passion - on both sides.

Giving 10x the power to the people on the negative side just creates an environment where new ideas are discouraged, where important but difficult discourse is pushed aside, where things of true import are penalised out of the group's attention by a few detractors.

There does need to be a system for flagging and removing spam articles, but if this system can be co-opted (as it plainly, regularly is) to remove articles from sight just based on not liking them much, then it is broken. The people who have flagging powers are not responsible enough to use them wisely, perhaps.

I see at least one simple solution: lift the flagging privileges so they only become available to a much smaller segment of the population. Perhaps making the limit 10,000 karma instead of 500 would do that. That would still include hundreds of people, based on a quick extrapolation from https://news.ycombinator.com/leaders. An even better model would be to make it dynamic - perhaps the top 200 commenters...

2
sethbannon 13 hours ago 4 replies      
I find it really disheartening to learn that any article with "NSA" in the title is pretty severely penalized by HN's algos. This seems like one of the seminal issues of the decade, for this community in particular.
3
flexie 17 hours ago 5 replies      
The avoidance of controversial topics in conversation is one of those things we Europeans are typically not so good at. I know from many Europeans who, like me, lived in the US for a while that they had to learn the art of talking without touching controversial subjects. At first it seemed superficial, but then I realised that it makes possible discussions that are not controversial but nevertheless important, and I came to appreciate it every now and then.

Anyways, it would be nice if we could, in the settings, apply our own penalties to subjects that we don't care about or that we find controversial, instead of having others decide for us. But that would mean that submissions ranked differently for different users, of course...

4
david927 15 hours ago 2 replies      
The post this morning, "Ask HN: What kind of side projects are you working on?"* got a lot of great responses and was killed because of it. There was nothing "controversial" about the post; it was merely popular with comments of what people are working on. This algorithm needs to be tweaked or HN risks losing its base.

*https://news.ycombinator.com/item?id=6799694

5
mgunes 16 hours ago 2 replies      
How HN page rankings really work: you vote stuff up, and then the flag mob and hidden moderators axe right off the front page whatever irritates the pro-capitalist internet-libertarian techno-optimist ideological sensibilities of the white male Californian HN hive mind, even in the subtlest of ways.
6
grey-area 7 hours ago 3 replies      
This article was more interesting than I anticipated. While I admire the tinkering which goes on with moderation here in an attempt to keep discussion civil and interesting, sometimes it has counter-productive effects. In particular this rule doesn't seem to work very well:

In order to prevent flamewars on Hacker News, articles with too many comments will get heavily penalized as controversial. In the published code, the contro-factor function kicks in for any post with more than 20 comments and more comments than upvotes.

Is a vigorous discussion bad? Should everyone commenting also upvote?
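
(Taking the quoted rule at face value, the penalty can be sketched like this; the trigger condition comes from the quote, while the quadratic falloff on the upvote-to-comment ratio is an illustrative assumption, not confirmed production code:)

    def contro_factor(upvotes, comments):
        """Controversy penalty as quoted above: only posts with more than
        20 comments AND more comments than upvotes are touched. The
        (upvotes/comments)^2 falloff shape is an assumption."""
        if comments > 20 and comments > upvotes:
            return (upvotes / comments) ** 2
        return 1.0  # no penalty

    # A post with 30 upvotes and 60 comments keeps (30/60)^2 = 25% of
    # its score; one with 200 upvotes and 60 comments is untouched.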

7
ChuckMcM 9 hours ago 0 replies      
It's a wonderful analysis. kens, if you would ever like to come work for me identifying robot search clients, just give a shout :-). I chafed a bit, though, at calling it a 'penalty'. Isn't it really 'moderation'? The scoring is adjusted by the moderators to make the site more like the one they want to have, and so they moderate articles that they feel aren't appropriate more heavily than those that are.

I understand that the moderation choices offend some people; I think that is unavoidable. But the goal is, I believe, to make a 'better' collection, not to shoot down particular articles.

8
minimaxir 18 hours ago 3 replies      
Not sure why the HN submission didn't link to the original post: http://www.righto.com/2013/11/how-hacker-news-ranking-really...

Discussion here: https://news.ycombinator.com/item?id=6755071

9
tehwalrus 15 hours ago 0 replies      
Interestingly, the best strategy for keeping an article you like up high is to upvote it and not comment (or, if you must comment, do so only once.)

If you do comment, however, you can be as verbose as you like (as long as you are bland enough not to provoke replies.)

I wonder if this will change the strategy some post authors have of "hosting comments on HN" (and replying to every comment, even just to say "thanks".)

EDIT: and to edit your posts instead of replying.

I think this penalisation of comments is a shame - I certainly come to HN for the comments, not the articles (although they're interesting stimulus for discussion).

10
jader201 13 hours ago 0 replies      
Assuming this article is correct regarding the penalization of comments, I'm a bit surprised (maybe even disappointed) that it is assumed that discussion is a sign of controversy. And maybe it is, historically?

It's a shame for those articles sparking insightful discussion though.

It seems like a weighted penalization could be implemented, potentially looking for red-flag words like "pedantic", or "not to be *". Or maybe it already is.

Hope I didn't just set it off. :)

11
lnanek2 13 hours ago 1 reply      
Interesting. So if you want to get rid of the stories on the front page, and see more stuff, you should comment a lot. Because on HN a lot of comments is a death sentence for an article.

On the other hand, if you are an article writer and add a "discuss this on HN" link in your articles, you should remove the link as soon as you get a good ranking. Or actually, don't ask people to discuss at all, because it is harmful; just ask them to vote, and have your own comment system for discussion.

HN basically reinvented "sage", the concept from 4chan and its Japanese origins where people sometimes comment on a thread just to get it closer to the comment limit before it would no longer be bumped up to the front page when replied to.

12
twotwotwo 2 hours ago 0 replies      
Not everyone has the same criteria for what content they want to read. Nor does it really help the world for all HN visitors to read the same content.

Would love to see ideas that broke from the model of a single ranked list: let folks tune their personal penalty amounts and gravity; add random jitter to rankings and throw a couple random new stories onto each list; classify/cluster users by their votes, so people who vote for jokes or NSA articles or their neighbors' articles (automatically) see more of those things.

It's maybe a bit much to ask PG and co. to architect radical alternatives to HN, because HN is a handful as it is and, besides, I hear they have day jobs. It could be cool to let a thousand flowers bloom: publish most of the now-hidden ranking data (maybe not all, because it can be useful to obscure how anti-spam algorithms work); let users opt in to publishing anonymous votestreams for clustering, etc.; then let other folks use all of this to make their own homebrew HN frontends within certain limits.

I suppose that, too, is kind of a pipe dream, because opening HN up for people to easily build their own frontpages is far-from-trivial for both tech and policy reasons. But it's a nice pipe dream.
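
(For what it's worth, the tunable-ranking idea above is easy to sketch against the commonly published HN-style formula, roughly (points - 1)^0.8 / (age + 2)^gravity times penalties, with gravity around 1.8; the knobs below are the per-user settings the comment imagines, not anything HN actually exposes:)

    import random

    def personal_rank(points, age_hours, penalty=1.0, gravity=1.8, jitter=0.0):
        """HN-style score with per-user knobs. `penalty` and `gravity`
        are the reader's own settings rather than site-wide constants,
        and `jitter` adds the random shuffle suggested above. The base
        formula follows published descriptions of HN ranking."""
        base = max(points - 1, 0) ** 0.8 / (age_hours + 2) ** gravity
        return base * penalty * (1 + random.uniform(-jitter, jitter))

    # e.g. a reader who hates stale stories could run gravity=2.5;
    # one who wants serendipity could run jitter=0.3.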

13
brador 9 hours ago 0 replies      
Blog spam link. Real content from the article at http://www.righto.com/2013/11/how-hacker-news-ranking-really...
14
DanielBMarkham 13 hours ago 1 reply      
At some point over the past 2 years HN has stopped being my friend. The folks here? Great people. Very happy to have gotten to know many of them. But the system itself? Not so much.

People expect machines they interact with to behave in some kind of logical manner. After 2 or 3 times of submitting an article that HN has traditionally liked -- and watching it tank -- just not that motivated to submit more. After submitting my own articles, having people stop me in the hall and tell me they liked it and voted up for it on HN, only to see it have no votes? Not so motivated to submit more. After the tenth conversation about how people expect HN to act one way and instead it acts another? Not so crazy about it.

I think the problem here is that PG wants folks to participate, but only to a certain extent. People want to interact with the system, but on some kind of mutually-fair terms. I'm not sure PG's goals line up with the average user any more. There are good reasons for this, and I'm not trying to trash the entire effort. It's just that this is a tough problem. I don't think you can code your way out of dealing with messy human issues at scale. If you could, we'd all be managed by computers in 50 years, and that's not a future I would wish for my children.

15
Samuel_Michon 7 hours ago 2 replies      
TL;DR: If an article has more comments than votes, don't add your comment to it or you may kill it off entirely!

Rings true to me and, if indeed accurate, it seems like a good practice for HN.

16
vijayboyapati 6 hours ago 0 replies      
What I find quite strange is that I have posted articles which show up in the new section, and then later, when they make the front page, they show as posted by someone else. Either dupe detection isn't working (these are typically very simple URLs, like from the New York Times) or HN is crediting the post to someone else after I've posted it. Weird.
17
hmsimha 16 hours ago 0 replies      
According to this article, the link currently at #3 [Vote Now: Who Should Be TIME's Person of the Year? Edward Snowden](https://news.ycombinator.com/item?id=6800145) must be the victim of a very harsh penalty. I'm guessing 'Snowden' is a heavily penalized term as well, and having less than twice as many upvotes as comments might not be helping either.
18
sixtypoundhound 7 hours ago 0 replies      
Using score adjustments as automatic community management makes a lot of sense; certain topics are more likely to get upvoted and ranked than others. Similarly, there's also a good bit of research showing that controversial articles generally do better on Reddit and other social news sites.

Applying an automatic penalty to certain topics / tactics which are likely to gather excessive upvotes, due to the nature of the content rather than its quality, helps ensure you've got a diverse mix of content occupying the front page. Which is generally good for the overall user experience.

Otherwise, the front page will be a massive list of shock jock posts about the NSA.... [since controversial posts about those subjects will get sympathy votes, regardless of their actual contribution to the community...]

19
analog31 1 hour ago 0 replies      
Beautiful MatPlotLib work. ;-)
20
damon_c 13 hours ago 0 replies      
The question is: Deep down, whether we realize it or not, are these unspeakable manipulations the reason we come here?
21
cloudflare 12 hours ago 0 replies      
It would be interesting to know if there are 'flagging rings' in the same way there are 'voting rings' and whether HN actively detects the former as it does the latter.
22
alexkus 13 hours ago 0 replies      
If someone wants a 'weekend project': an interesting browser plugin would be one that undoes the effect of the penalties and reorders the page accordingly. It could even allow the user to choose the level of penalties enforced against stories in their view, in case they did want some to disappear off the front page.
23
alok-g 9 hours ago 0 replies      
Instead of using ad hoc scoring rules like these, HN could use a machine-learning based system. This could also help solve another issue: automatically determining the initial score of new stories.
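
(A toy sketch of what that could look like; every feature, data point, and the choice of logistic regression here is invented for illustration, not real HN data:)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Per-story features (invented): [comments/upvotes ratio,
    # "NSA" in title, github.com domain]
    X = np.array([
        [0.2, 0, 0],
        [1.5, 1, 0],
        [1.1, 0, 1],
        [0.3, 0, 0],
        [0.9, 1, 1],
        [0.1, 0, 1],
    ])
    y = np.array([0, 1, 1, 0, 1, 0])  # 1 = moderators penalized the story

    model = LogisticRegression().fit(X, y)
    # Predicted probability that a new story would be penalized:
    print(model.predict_proba([[1.2, 1, 0]])[0, 1])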
24
michaelochurch 7 hours ago 0 replies      
I get a personal penalty, because someone with moderation powers (possibly PG, but I'll give him the benefit of the doubt) is a wolfbagging pissant.

Apparently being brutally honest about VC means that everything I say is of low value.

For more, go here: http://michaelochurch.wordpress.com/2013/11/03/heres-why-pau...

25
pearjuice 6 hours ago 0 replies      
I think what is more interesting is what drives people to upvote certain things.
26
mikeevans 12 hours ago 1 reply      
Interesting that github.com links are automatically weighted down, especially since almost every blog post they make hits the front page, and often projects hosted there show up fairly regularly as well.
27
brudgers 16 hours ago 0 replies      
This article is worse than the original, and the original article was crap because it never addressed flagging. I have on occasion flagged articles on a particular topic... sometimes I just get frustrated with recycled topics. It also doesn't address the possibility that penalties are applied when HN's heuristics suspect voting rings.

And given how the hackaday article is blossoming, don't be surprised to see it fall.

28
franstereo 11 hours ago 1 reply      
As other folks have mentioned there is a raw feed if you want to see a non-penalized version.

It would be interesting if it somehow incorporated other elements to determine article "value":

- Open rate
- Ratio of comments to opens
- Time spent on article or comments
- Depth of comments

29
muyuu 13 hours ago 0 replies      
I see this is being penalised already >:-D
30
brosco45 7 hours ago 0 replies      
And censorship
31
001sky 14 hours ago 0 replies      
On average, about 20% of the articles on the front page have been penalized, while 38% of the articles on the second page have been penalized. (The front page rate is lower since penalized articles are less likely to be on the front page, kind of by definition.) There is a lot more penalization going on than you might expect.

==Why there will never be a Flat tax...

4
Someone just made a $147,239,214 Bitcoin transfer blockchain.info
609 points by a3voices  4 days ago   450 comments top 46
1
tokenadult 4 days ago 3 replies      
If this was an actual transfer of ownership of Bitcoin at anything near that value, it would trigger money-transfer reporting requirements under the laws of most countries,[1] especially if this was an international transfer of ownership. I see that all the other comments here are speculating about what exactly happened, and one astute comment before this one pointed out that the actual owner of the Bitcoin may still be the same individual person both before and after this blockchain transfer. It will be interesting to see how the regulatory environment keeps up with the implementation of Bitcoin, which so far is a very tiny percentage of the world economy.

There were also statements in some previous comments that this transfer was made for free. It is true enough that a Bitcoin transfer doesn't inherently incur a processing charge from a merchant payment processor, but as merchants learned back in the Middle Ages when charging interest was formally illegal, the price of a transaction can hide financing and processing costs. We don't know what was agreed with whom by whom to make this transfer happen. The transfer may have occurred at a higher than list price for something that was bought, to make up for the ongoing inconvenience of receiving a payment using the new Bitcoin payment mechanism.

[1] One example, among many: http://www.consumerfinance.gov/remittances-transfer-rule-ame...

2
fragsworth 4 days ago 24 replies      
Consider this: they paid $0.00 for the transfer of $150 million.

A direct (i.e. not based on third party credit, regulations, etc.) transfer of wealth of this magnitude between two entities usually consists of a heavily guarded, insured, physical shipment of cash or gold. Depending on how safely you want to make the transfer, and how far apart the entities are on the globe, it can cost hundreds of thousands to millions of dollars.

Bitcoin has real value. It solves problems on an incredible scale. I wish I realized this months ago.

3
dmix 4 days ago 2 replies      
4
aroch 4 days ago 2 replies      
As pointed out on Reddit, this wallet has made several large transactions since September: https://blockchain.info/address/1HBa5ABXb5Yx1YcQsppqwKtaAGFP...
5
nly 4 days ago 2 replies      
Hardly. If you tried to sell 195,000 coins you'd wipe out all the exchanges and cause a crash.

Talking numbers, you could sell them all on BTC-E right now, bagging you just $7M, and take the price down to ~$36. You can spread that around the exchanges of course, if you're quick, but you're still nowhere near $147M at current market depth.
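
(The market-depth point can be made concrete by walking the bid side of an order book; the book below is invented, but the mechanism is why 195,000 BTC is "worth" far less than last-trade price times quantity:)

    def market_sell(bids, qty):
        """Walk the bid side of an order book, a list of (price, size)
        pairs sorted best-bid first, and return total proceeds, the
        price the last coin fetched, and any unsold remainder."""
        proceeds, remaining, last_price = 0.0, qty, None
        for price, size in bids:
            take = min(size, remaining)
            proceeds += take * price
            remaining -= take
            last_price = price
            if remaining <= 0:
                break
        return proceeds, last_price, remaining

    # Invented example book: depth thins out quickly below the top.
    book = [(755.0, 500), (700.0, 2000), (500.0, 10000),
            (100.0, 50000), (36.0, 200000)]
    print(market_sell(book, 195000))  # proceeds << 195000 * 755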

6
kmfrk 4 days ago 6 replies      
Should this be regarded as capital gains from currency speculation, when it comes to taxation, or how does something like this look to a tax attorney or accountant?
7
jluxenberg 4 days ago 1 reply      
Are those editorialized links added by Blockchain.info admins, or are they part of the block chain itself? ("gotcha" and "shit load of money!")
8
Pxtl 4 days ago 0 replies      
Were any of these previously-thought-lost "dark" bitcoins, or was it all live bitcoin currency?
9
politician 4 days ago 1 reply      
If you look at the tree view, the lump sum has already been broken down into smaller bits.

https://blockchain.info/tree/98324324

Click the yellow circles to expand the tree nodes.

10
tsaoutourpants 4 days ago 0 replies      
FBI clearing out DPR's accounts?

...or DPR associate moving around his money? ;)

11
davecap1 4 days ago 1 reply      
Must be someone buying a seat on Virgin Galactic
12
mynameishere 4 days ago 0 replies      
About 0.000028 times as big as daily forex trading.
13
seabrookmx 4 days ago 1 reply      
> Someone

This is a little misleading. This could be a business etc.

A mining pool or exchange transferring from one wallet to another (i.e. to cold storage)?

You wouldn't want a typo in your address for that amount of cash! I'd probably break it up into a bunch of smaller transactions just to be safe.

14
mariusz79 4 days ago 8 replies      
Well, this is just another reason why bitcoin will not work - you can track money changing hands. Certain three-letter agencies would have no trouble tracking most of the transfers, all over the world.
15
dcc1 4 days ago 1 reply      
Is it right that someone transferred millions with 0 fees? Cheapskates :D
16
altoz 4 days ago 0 replies      
Someone wanted to be on the bitcoin 100 richest list.
17
rikacomet 3 days ago 0 replies      
Here is my analysis:

Time: 17:30 - 17:38

Roughly 5pm Europe time, past general 9-5 business hours. If all the parties, or just the recipients, are based in Europe, then between legal and illegal I would peg this as illegal. It just doesn't make sense for someone to do a legal transfer of this size after business hours, unless the senders were based in the US.

Going through the direct senders (one step back), the times with the fewest 1k+ BTC transactions were 02:00, 04:00, 05:00, while most big transactions were made around 13:00, 15:00, 21:00, suggesting that these transactions were made either from the US to Europe, if of a legal nature, or within Europe, suggesting something shady. Very crudely speaking.

Yeah, everything is possible, and the above is not necessarily true, but it appears so, at least to me.

Amount: 194k bitcoins

The multiple connected accounts that sent to the above address seem to have received 220k+ in total, and looking at the denominations, it didn't seem like a lot of normal people were involved. Besides, with this kind of money involved it is wise to have 10 or fewer people involved who are highly trusted. This appears to be the case, given how few bitcoin addresses were used to, say, "bounce" the trace of that money. Skill-wise, these are not normal people, but given how this ended up in the public domain, they could have done slightly better.

I get the feeling you get from govt-hired people doing such things, no offense: somewhere below the elite, but above the average joe. It is also possible that a bigger or equal (more likely) sum was also transferred elsewhere.

Movement seems to have stopped. Is that due to the attention it has already garnered, or is this the actual resting place? If it were illegal, one would have had no problem keeping it in multiple bitcoin wallets in smaller amounts. But someone with legal force on their side, say a govt agency, would have no problem leaving the bitcoins in such limelight for a long time, as long as they believe that bitcoin is actually 100% and not 99.9999999% impregnable.

The motive, as discussed above, is mainly either "convenience of centrality" or "hiding a bigger sum". Well, I at least won't be putting my money in BTC, for my own reasons. I got the urge today to use it, but I resisted.

BTC is genuine, but currently it seems like a pyramid system that is, strictly speaking, perfectly legal. More people putting money into the BTC system, the already rich getting richer, a trickle-down effect? Well, 150 million has really left a lot of questions about its nature, motive, ownership, morals, etc. Who would answer? No one, ironically; that's a big problem as well as a big benefit of BTC. I'm sorry, a lot of what I said above is based on feelings, but to tell you the truth, before today BTC was in a nutshell somewhat easy to understand. 150 million is a figure that can give any guy a lot of different vibes, speculative as well as wild-goose-chase type.

18
shocks 4 days ago 0 replies      
Someone trying to get #1 on http://bitcoinrichlist.com/top100 ?

Seems very reckless. All your eggs in one basket.

20
ck2 4 days ago 1 reply      
That's nothing, someone owns (owned) six million litecoins:

http://ltc.block-explorer.com/address/LTpYZG19YmfvY2bBDYtCKp...

21
Titusak 4 days ago 3 replies      
I still don't get how that kind of amount is cashed out. I mean, yeah, there are some brokers, but I don't think they have this kind of cash available...
22
this_user 4 days ago 1 reply      
If the map is correct, the first node that saw the transaction is located roughly in Frankfurt, Germany which is a major financial centre. The transaction was made at 5:38 local time which is right around the time the Frankfurt exchange closes. Might be this transaction was done by a larger financial institution closing up shop for the week.
23
eliben 3 days ago 0 replies      
Is it only me, or do such things make Bitcoin seem somewhat... less private... than real money?
24
mkramlich 4 days ago 0 replies      
... and I go right back to building something to help Bitcoin mitigate risks
25
downandout 3 days ago 0 replies      
My guess: Bail money for DPR.
26
bvttf 4 days ago 0 replies      
Anyone know of any genius graphics programmers who have had access to lots of high-end GPUs, who might be retiring soon?
27
penguindev 3 days ago 1 reply      
My ignorant BTC question is this - does knowing that this 'address' has this much money make it a target for people to crack its key?
28
_prometheus 4 days ago 0 replies      
This stage of BTC growth is lovely. If you have a lot of it, you can issue transactions that might increase the total value. Not claiming this was the aim here, but certainly an effect :)

If this same owner slowly splits the holdings up, transfers wallets, and then recombines them, it might spike up once more. People will definitely catch on, but it might bring the value up a couple hundred dollars in the hype.

Might make millions in the confusion.

29
Codhisattva 4 days ago 5 replies      
Is it possible to know the actual amount of money that exchanged hands?
30
fpp 4 days ago 0 replies      
and already on its way back to one of its original wallets: https://blockchain.info/address/1HBa5ABXb5Yx1YcQsppqwKtaAGFP...
31
hkbarton 4 days ago 0 replies      
Wow, 7kb of data worth 150 million dollars. What a crazy world.
32
Datsundere 3 days ago 0 replies      
Might be the German govt. They've been advocating open source and Linux for a while now. Maybe they're trying out buttcoins.
33
roasbeef 4 days ago 0 replies      
It was Richard Branson... most likely someone paying for their ride to space in BTC.
34
mswe 4 days ago 0 replies      
Everybody is suddenly a currency expert. Remember the days when everybody was a real estate investor? Yup. I'm gonna enjoy the show from the sidelines.
35
billions 4 days ago 0 replies      
With a significant number of casual PCs storing bitcoin, the virus industry is about to become WAY more lucrative for the bad guys.
36
bigstueyc22 4 days ago 0 replies      
After recent fluctuations it's very hard to predict what impact, if any, this will have on its value.
37
EGreg 3 days ago 0 replies      
They couldn't spare even 0.5 for fees?
38
bhartzer 4 days ago 1 reply      
Is there not a limit to the amount of money that can be transferred via Bitcoin?
39
jyf1987 3 days ago 0 replies      
I am worried about SHA-256.

And finally, mathematicians could stop being poor now.

40
jedicoffee 4 days ago 0 replies      
This was obviously made by someone with a very large botnet.
41
jhhn 4 days ago 0 replies      
OMG... is someone buying a nuke? !!!
42
fat0wl 4 days ago 1 reply      
Doesn't anyone find it odd that the price is quoted in USD and not BTC?
43
neakor 4 days ago 1 reply      
What does the "shit load of money!" mean on the transaction page?
44
squozzer 3 days ago 0 replies      
Wild-ass guesses:

1) Someone at the NSA
2) Barack Hussein Obama
3) Me

I hope the national-security apparati have a handle on this -- wouldn't want the evidence of a smoking gun to be a mushroom cloud...

45
chenster 4 days ago 1 reply      
Russian mobs.
46
dragontamer 4 days ago 2 replies      
OMFG, Bitcoin is so anonymous!
5
Id Software founder John Carmack resigns polygon.com
572 points by footpath  4 days ago   99 comments top 33
1
Arjuna 4 days ago 6 replies      
Wow, I'm just now seeing this news. Initially, I had that sinking feeling set in... I mean, like you, I have been impacted by his story, his games (not just the Wolfenstein/Doom/Quake franchises... I'm talking Commander Keen, boys and girls), his code, reading Masters of Doom, etc.

I can see my copy of Michael Abrash's Graphics Programming Black Book Special Edition sitting here, which was such a treat to read when it came out, because it has so many great chapters on the development of Quake and little stories about John's discoveries and thought processes throughout the development of the game.

But, then I thought... wait... this is a new beginning. I wrote about this previously, but, look for gaming to start heading in the direction of VR with technology like Oculus Rift. Also, with someone of the caliber of John Carmack involved (now totally focused on it because of the resignation announcement) with not only his passion and skill, but his ability to work with graphics hardware manufacturers and driver developers to effect change and garner the necessary support and backing, expect to see vibrant, compelling developments in this field.

In case you missed it, check this video out of John discussing some of his VR work. It is from E3 2012:

https://www.youtube.com/watch?v=NYa8kirsUfg

That momentary sinking feeling has faded away now... great things are ahead!

2
beloch 4 days ago 0 replies      
This is fantastic news.

I loved Id back in the day. When all it took for a game studio to be great was the most advanced code, Id was king! Then FPS games became more like movies, and Id became a bit like Michael Bay. They still pushed the technology forward, but almost everyone else was making FPS's that had better plots, characters, etc. The technologies Id licenses to other game studios are put to better use by them than in Id's own hands!

VR has been around for decades, but it has always sucked. Low resolution displays and poor head-tracking have historically been problems, but latency has long been a problem that trumped all others. Carmack and Oculus were already working on getting Rift's latency down to levels that would make VR a less nauseating experience for users.

This move just means Carmack is finding his work at Oculus more rewarding than at Id. That means we can probably expect great things from Oculus in the near future.

3
LandoCalrissian 4 days ago 4 replies      
I think this check had been in the mail for a while. He is clearly far more excited these days about VR and where it can go. I'm sure he also has more than enough money to never have to worry about working again if he wanted.

I really wish him the best of luck, truly one of my favorite people in tech. I hope we still can get his annual keynotes, because they are great to listen to.

4
leoc 4 days ago 3 replies      
Slightly testy tone in that iD statement, isn't there?
5
aryastark 4 days ago 2 replies      
First Winamp, and now John Carmack leaves id. This has been a brutal week.

On one hand, it's exciting to see John working on VR tech. I really do hope we see something amazing out of it. But it still feels wrong, an id Software without Carmack. Hopefully they can continue on and reclaim some of their former glory as well, and let's hope Carmack keeps in the spotlight.

6
untog 4 days ago 0 replies      
Very happy for John - his early days were at the very forefront of PC game development and while iD still does great stuff, video gaming is in a very stable, iterative place right now.

Hopefully chasing this VR dream will take him back to those early pioneering days.

7
venomsnake 4 days ago 0 replies      
This makes me happy. I have a feeling that iD were dragging John down. He could always make brilliant tech that they somehow always failed to make a decent game of after Q3 Arena.

I really hope that he will be able to push the limits of possible about graphics technology once again.

8
eco 4 days ago 4 replies      
Off topic but why do so many people capitalize "id" as "iD"? I did myself years ago as well but I have no idea why I did. None of their logos use that capitalization and my memory of the early games is too poor to recall where, if anywhere, it was written like that.
9
melling 4 days ago 0 replies      
Carmack is going full-time and the company is doing a lot of hiring...

https://careers.oculusvr.com/jobs/

I'm not into VR, but this could be one of those "this changes everything" moments.

10
endgame 4 days ago 1 reply      
I find it interesting that iD and Carmack are still described in terms of Doom and Quake.
11
mkramlich 4 days ago 0 replies      
It's a little sad news but exciting as well. I'd rather see John's mind helping push VR/AR and 'cheaper/nimbler/entrepreneurial/hacker-maker/DIY' aerospace forward than churning out yet another 3D FPS game. We have tons of great games/engines of that type already to choose from, and lots of great people continuing to work in that space.
12
the_mat 4 days ago 1 reply      
This is the end for id.

The only thing id has had going for it is Carmack's engines. In recent years his stuff has been as amazing as ever, but so many commercial engines are only a fraction of a step behind, and the difference hardly matters.

Design-wise id is a complete mess. They're stuck back in the 1990s. RAGE appears to have had no leadership and no vision, and the actual design work that shipped is amateur-hour at best.

13
macspoofing 3 days ago 0 replies      
It doesn't seem like an amicable parting. You never want to have a guy like Carmack just leave. He's a giant in your industry, he's popular and highly respected and you gain a lot by having him be associated with your company. So at the very least you give him a honorific title and invite him to all the corporate parties. It didn't seem like this happened here.
14
danso 4 days ago 2 replies      
Obligatory mention of "Masters of DOOM", the biography of Johns Carmack and Romero:

http://www.amazon.com/Masters-Doom-Created-Transformed-Cultu...

Like reading iWoz... a lot of stories of brilliant engineering at an elite level.

15
_random_ 4 days ago 1 reply      
Basically confirms that VR is in the "Slope of Enlightenment".
16
gagege 4 days ago 2 replies      
It's bittersweet for me. I grew up with id games and John Carmack has just always been there as id's genius programmer guy. Feels like the end of an era.

On the other hand, John Carmack is working full time for Oculus VR!

17
saturdaysaint 4 days ago 0 replies      
Sounds like good news - I'd rather see him working on core technologies that can benefit all games than working on iD's games, which I'd characterize as merely being "pretty good" (albeit very technically impressive).
18
nicholassmith 4 days ago 0 replies      
A developer's developer taking the opportunity to flex his wings on something new. What a fantastic turn of events for us all.
19
akurilin 4 days ago 0 replies      
John can make a real impact on the videogame industry a second time at Oculus; the same couldn't have been said about id. This is a win for everybody.
20
blah32497 4 days ago 0 replies      
What a strange move. Maybe he wasn't spending enough time at iD and was forced to leave?

You'd think his having a leg in gaming and a leg in VR would create a wonderful synergy. Knowing all the ins and outs of both worlds, he could have ensured great integration of Doom 4 with the Oculus Rift - making sure iD was on the technological forefront while the Oculus would have a great demo from day 1.

(see the Leap Motion for an example of what happens when you don't have a good demo day 1)

21
10098 4 days ago 0 replies      
I'm not sure how to feel about this... He's always been an inspiration to me. But I don't really care about VR tech, and would much rather see Carmack working on games (at id or any other company). But I wish him success anyway.
22
BuckRogers 3 days ago 0 replies      
This is a good thing. Carmack said at a recent Quakecon that he didn't let us have a light on any gun in Doom 3 because he didn't want another light source in his rendering...

This is a guy who has no business making games. And none of his games have been good for a long time (and they were always pretty bland; Quake was the peak).

Having Carmack out of id's games is a good thing. Having him geek out on technical problems without being allowed in game design decisions of any sort is also a good thing.

23
billyjobob 4 days ago 0 replies      
In the early days of PC gaming John Carmack was a genius, and Quake 3 was his masterpiece. I guess he is still a genius, but from an outsider's perspective the advancements he has made since then don't seem to have changed the world in the same way.

Graphics get prettier, but gameplay stays the same, or even gets worse because the prettier graphics require higher budgets which require lowest-common-denominator appeal to recoup.

So it's good that he is trying something truly new now, where he has a chance to make a difference again.

24
atburrow 4 days ago 0 replies      
It will be interesting to see how the future pans out for both companies. John Carmack is a brilliant person and I think that Oculus VR will do very well with him on board full time.
25
mkramlich 4 days ago 1 reply      
Smells like vesting and/or the end of a golden handcuffs period (in the context of the prior Bethesda -> iD acquisition).
26
benmorris 3 days ago 0 replies      
Initially this is depressing until you see where he is going. I think Oculus and VR in general will change the gaming industry.
27
marksands07 4 days ago 0 replies      
I guess I should feel dumb, because I thought Carmack left id when he joined OculusVR.
28
ogreyonder 4 days ago 1 reply      
Am I the only one surprised to find that Carmack was still working for iD? I had thought his taking a position with OculusVR implied his departure months ago.
29
avoutthere 4 days ago 0 replies      
This is truly the end of an era. John's work has given me countless hours of joy and I look forward to seeing what he produces next.
30
salient 4 days ago 1 reply      
> John's work on id Tech 5 and the technology for the current development work at id is complete.

So he's leaving just before starting to work on the voxel/polygon id Tech 6 hybrid gaming engine. Darn it!

http://raytracey.blogspot.com/2008/08/carmack-id-tech-6-hybr...

Hopefully id Software will continue that without him, but I doubt it.

31
BlackDeath3 4 days ago 0 replies      
Wherever he goes, he shall kick ass. Best wishes, Carmack!
32
na85 4 days ago 0 replies      
At first I was elated, but then I realized I was confusing Carmack with the egotistical John Romero.
33
squozzer 3 days ago 0 replies      
Ask not for whom the bell tolls.
6
My deadly disease was just a 23andme bug mntmn.com
489 points by mntmn  1 day ago   313 comments top 25
1
IanDrake 1 day ago 24 replies      
I'm surprised at the negativity here. There is nothing conclusive about this test and from what I've seen it does a pretty good job. So one guy got some bad indicators that proved to be nothing, who cares?

My father did 23andMe without giving any family medical history, and it concluded that he was at much higher risk for things that his mother was ultimately afflicted with. So, it works to some extent.

On the flip side, long story short, I had a CT scan done on my chest that checked out fine, but the doctor that reviewed it said I might have an unrelated problem - patent ductus arteriosus. Nothing heart surgery and a lifetime supply of Coumadin couldn't fix. A few months later I got an echocardiogram which conclusively said the doctor was wrong.

Quick, stop doing CT scans!!!??? Are we all so afraid of our own shadow that heroes at the FDA need to protect us?

2
tptacek 1 day ago 6 replies      
Now consider that this is a marketing message that wants to have it both ways: it alerts clients to genetic "risks", which are very likely to be subjected to secondary testing, but does not want to be liable for the accuracy of its negative results, which are very unlikely to be challenged.

It would be better if 23AM presented medical genomic information in a neutral way, with external links to descriptions of genomic variations but no assertion of diagnostic significance at all, or even a mention of how to obtain a diagnostically significant result.

Unfortunately, the way 23AM is packaged (see the website), they are incentivized to do the opposite; positive results are recognized by their clients as valuable, and, whether 23AM likes it or not, so are the negative results.

3
sneak 1 day ago 7 replies      
If I were to ever use 23andme, I'd use a fake name, a disposable mailing address, a fake email address, and never, ever discuss it with anyone.

Your genome doesn't change, and who knows where that data leaks to later? Who knows what healthcare providers will vacuum up such leaks?

Who knows what kind of discriminatory practices will be undertaken by service providers, insurers, or even employers in the future?

Haven't y'all ever watched Gattaca?!

Keep your private information private. Nobody else will do it for you.

EDIT: Oh, and USA PATRIOT too.

4
kohanz 1 day ago 1 reply      
The frustration directed towards the FDA in the 23andMe threads is not surprising, but short-sighted, IMHO. Sure, there are likely inefficiencies in the way the FDA operates, in a micro-sense, but in the macro-view, this sort of stuff needs to be regulated.

Many simple take-home tests that are arguably much more time-tested, reliable, and accurate than 23andMe's DNA services are FDA-regulated. If your basic take-home pregnancy test is FDA regulated, why should 23andMe be exempt? Or would you prefer a world where all of these tests were unregulated?

Not everyone does their research. Not everyone is intelligent enough to understand the implications of medical tests, drugs, and devices - regulation protects the vulnerable among us.

As an aside, I've worked in medical devices for most of my career. All other things remaining equal, engineering teams in these companies are no different, talent-wise and quality-wise, than most teams you'd see in other industries. Take the sloppiest engineering team you've worked with, now imagine them working on a clinical product with the potential to cause real harm to people (physical or psychological).

It is the regulation (FDA and others) that ensures that these teams "raise their quality game", so to speak. It is not perfect by any means, but I don't dare imagine the alternative. Yes, the regulation is a massive pain. I experience it first-hand. It slows things down, has massive associated costs and can even stifle innovation. The general trade-off, however, is worth it, IMHO.

5
newnewnew 1 day ago 0 replies      
Thank goodness there are thousands of people being paid to prevent me from spitting in a tube and getting noisy data about my own body! I should only be able to get data about my body after running a gauntlet of specialists with millions of dollars in total education, for my protection, of course.
6
zallarak 1 day ago 0 replies      
23andme is a consumer genetic testing company. The testing is not meant to be clinically applied. I work for a company that is similar to 23andme, different in the sense that the test can be clinically applied; the catch is that everything you report must be very robustly researched and you must comply with more regulation (research, lab, medical, security, etc.) which drives up the cost but gives you much more actionable and medically relevant information.

I think there are benefits to both approaches to genetic testing; 23andme has a lot of data it collects and can do interesting statistical studies/reporting that other, more research-oriented companies cannot. However, like I said before, you can't view a test like that as medically actionable. In the end though, the more testing there is (as long as the messaging is clear and consumers do their homework), the better off the world will be.

7
wissler 1 day ago 1 reply      
It is good to know of real-world misdiagnoses made via 23andMe.

Now, can we see some real-world statistics by the government-licensed medical profession? I wonder what a side-by-side analysis of the errors 23andMe has made vs. those the medical community has made might look like. I wonder how the real-life consequences might stack up.

As far as I know, 23andMe has never accidentally amputated the wrong limb, so it's at least got that going for it.

Interesting tidbit: you can always double-check the reports 23andMe gives to you by getting followup tests someplace else.

8
bparsons 1 day ago 1 reply      
Seems like the problem was addressed immediately. This looks like a success story for 23andMe.
9
rafeed 1 day ago 4 replies      
I'd never want to get my DNA tested by a service like this. It may be cool, but there are too many things at play here.

    - privacy
    - security of data
    - wrongful use of data
    - spread of misinformation
    - mind fucking that ensues after reading your results
    - probably a lot more
The only benefit is for those who are so curious about what genes they carry, what mutations they have, and who they may or may not be related to. Really, why risk having your mind potentially fucked by knowing something about yourself that you're not ready to handle yet?

Sorry if I seem overly critical about a service like this, but it just doesn't seem worth it to me.

10
clavalle 1 day ago 3 replies      
Crazy idea: How about, instead of getting this report ourselves, we have it sent to our primary care physician?

That way we don't have to stress that there is some likelihood we might get some dreaded disease, but our doctor could talk to us if the risk is high enough or if we start having symptoms that match one of our genetic risk factors.

11
kevrone 1 day ago 1 reply      
Wow, this actually makes me pretty impressed with 23andme. Yeah, ok, maybe their analysis front-end is borked, but hell, the science seems pretty sound if he was able to get the "right" answer from the original sample.
12
aabalkan 1 day ago 3 replies      
Why is the character encoding broken on your blog post?
13
SwellJoe 1 day ago 0 replies      
I love 23andme. I don't make major medical decisions based on it (yet), and I know it has limitations (it didn't predict the pancreatic cancer that killed my dad, and it didn't find indicators for a number of other conditions in my family tree...). But, it's a fun thing to play with, it has connected me up with a number of folks in my family tree that I didn't know about, and it resolved some questions about the ancestry of my family. I'm an American mutt from poor white trash stock...no idea where everybody came from until 23andme. My sister had tried to do a family tree but ran out of steam just a couple of generations back.

Anyway, it's probably good they're being reined in from making medical claims. I don't think the tech is quite there yet. But, as a tool alongside many others, it's cool. And, I love that they're doing original research with their surveys...it's a super cool idea to combine lots of crowdsourced data and DNA results of thousands of people to find markers for diseases and traits.

I plan to keep recommending 23andme to folks who can afford it and folks who can grasp that a DNA test isn't a reliable indicator of disease and that it is merely a probability indicator.

14
robk 1 day ago 2 replies      
For a hundred bucks, I'm quite pleased with the results I get. Of course if I was flagged for something serious I'd go to a doctor immediately to get it checked and verified.
15
jisaacso 1 day ago 2 replies      
This will likely be downvoted to hell, but personally I'm a fan of the service 23andme provides. It has aggregated scientific knowledge across a large number of domains to provide an end-to-end pipeline for ancestry prediction, kinship analysis and phenotype prediction.

It's a service built on learning algorithms to correlate features (DNA SNPs) with diseases. As with any learning algorithm, results should be interpreted with care. That doesn't lessen the fact that the predictions it provides can be helpful.
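To make "interpreted with care" concrete, a risk report of this kind boils down to something like the sketch below. Everything in it is made up - the SNP IDs, genotypes, odds ratios, and baseline are purely illustrative, not 23andme's actual method:

    // Toy risk report: combined relative risk is the product of
    // per-variant odds ratios, applied to a baseline incidence.
    var variants = [
      { snp: 'rs0000001', genotype: 'AG', oddsRatio: 1.3 },
      { snp: 'rs0000002', genotype: 'TT', oddsRatio: 0.8 }
    ];

    function relativeRisk(vs) {
      return vs.reduce(function (acc, v) { return acc * v.oddsRatio; }, 1);
    }

    var baseline = 0.02; // hypothetical 2% population incidence
    console.log((baseline * relativeRisk(variants) * 100).toFixed(2) + '%');
    // => "2.08%" -- note how one miscalled genotype would flip an odds
    // ratio and skew the whole report, which is the failure mode here.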

As an aside, 23andme has a solid API. It's opened up (with consent) a huge DNA dataset for developers to mine. I'm excited for the science that can be learned from this data.

16
gojomo 1 day ago 0 replies      
Plenty of assessment errors occur via other forms of health research, self-diagnosis, and professional diagnosis, too. That's why the more serious a result is, the more it should be examined and re-checked.

This incident seems to me like a big win for the 23andme model. They made an error but followup by a single individual means the same error won't happen again for anyone in their international clientele. Errors in other health systems don't get globally corrected so quickly.

17
adeptus 1 day ago 0 replies      
I'd love to use a service like 23andme, but am seriously concerned about privacy issues. Do they have any privacy conscious competitors? Any upcoming ones?
19
coldskull 1 day ago 0 replies      
Hypothetically, if I were in the OP's shoes and 'really' scared, I would go to a genetic specialist instead of conducting my own research. (I would still look things up, but that would be too risky to depend on.)
20
peter303 1 day ago 0 replies      
Unexpressed disease genes have been an issue from the beginning of human sequencing. The third human sequenced, James Watson, had almost 30 unexpressed disease genes, including retinitis pigmentosa. No one understands why this occurs.
21
ksk 1 day ago 0 replies      
The potential of pharmaceutical companies pushing meds onto unsuspecting people using this data is really frightening. Hopefully 23andme won't turn into an advertising-subsidized business.
22
rohu1990 1 day ago 5 replies      
What could have happened to the rest of this guy's life if he had just believed their result? This is how you can ruin someone's life with a coding bug!
23
patfla 1 day ago 0 replies      
24
pbhjpbhj 1 day ago 0 replies      
I wonder what financial value is placed on their database and how well secured it is.
25
JosephBrown 1 day ago 0 replies      
You are a VERY good science writer btw.
7
WebGL Ocean Simulation david.li
484 points by clukic  9 hours ago   106 comments top 34
1
randomdrake 9 hours ago 5 replies      
The code is quite cool to look at. Love seeing the extensive use of matrices and mathematics to create such a beautiful and mesmerizing display.

If anyone is interested in playing around with it, I threw it up at JSFiddle here: http://jsfiddle.net/zyAzg/

Excellent demo.

2
codeplay 4 minutes ago 0 replies      
I know this is a bit irrelevant; I just want to show a pure JS ripple effect which I borrowed before: http://jsfiddle.net/esteewhy/5Ht3b/6/
3
computer 8 hours ago 0 replies      
4
bhouston 9 hours ago 0 replies      
Very nice and fully custom code too! The UI is really clean and fits nicely with the WebGL via CSS transforms I believe. Props to you.

BTW, Gerstner waves reference here: http://http.developer.nvidia.com/GPUGems/gpugems_ch01.html
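For the curious, the core of a single Gerstner wave is only a few lines. This is a sketch with made-up amplitude/wavevector/speed constants, not the demo's actual code:

    // One Gerstner wave, following the GPU Gems chapter above. Surface
    // points move in circles, which is what produces the sharp crests.
    // A (amplitude), (kx, ky) (wavevector) and c (speed) are made up.
    function gerstner(x0, y0, t) {
      var A = 0.5, kx = 1.0, ky = 0.0, c = 2.0;
      var k = Math.sqrt(kx * kx + ky * ky);
      var phase = kx * x0 + ky * y0 - c * k * t;
      return {
        x: x0 - (kx / k) * A * Math.sin(phase),
        y: y0 - (ky / k) * A * Math.sin(phase),
        z: A * Math.cos(phase) // vertical displacement
      };
    }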

5
AsymetricCom 7 hours ago 3 replies      
I remember seeing this run smoothly on a P2 after a very small executable download in the late '90s. How far we've come, in a big, stupid circle back to where we started.

Now instead of a small executable, we need a large executable to sit on top of a large API on top of the CPU before even touching the GPU, and a network connection to download all the dependent APIs and libraries every time the page is loaded.

The only impressive thing about this demo is how many YCombinator readers are impressed with blinkenlights.

6
nspragmatic 9 hours ago 4 replies      
> Your browser does not appear to support the required technologies.

It would've been nice to have an 'I don't care, proceed anyway' button. The check excludes Safari 7, which runs the demo just as well as Chrome.

http://jsfiddle.net/bYHfh

^ removes the hasWebGLSupport() invocation.
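For reference, a check like that is usually just a feature test. Here is a sketch of the typical shape (not the demo's actual implementation):

    // Typical WebGL feature test (a sketch, not the demo's real code).
    // The demo also needs floating-point textures, and requirements
    // like that are what exclude otherwise-capable browsers.
    function hasWebGLSupport() {
      var canvas = document.createElement('canvas');
      var gl = canvas.getContext('webgl') ||
               canvas.getContext('experimental-webgl');
      return !!gl && !!gl.getExtension('OES_texture_float');
    }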

Very nice demo, though!

7
alan_cx 9 hours ago 2 replies      
Sorry if this is a dumb question, but how hard would it be to add a boat that realistically bobs up and down with the water?
8
Impossible 8 hours ago 0 replies      
Reminds me of this Shadertoy shader: https://www.shadertoy.com/view/XdsGDB
9
dingdingdang 8 hours ago 2 replies      
Honestly very impressive. Idea: if made fullscreen (i.e. without edges visible), with an added horizon and an emulated sunrise/sunset, this would make for totally enthralling watching - the "live'ness" of it makes it a thousand times more interesting to the eye than images or pre-recorded video material.
10
Quiark 1 hour ago 0 replies      
I really like the layout of the controls; it's a mix between infographics and a movie-like GUI, and it works pretty well.
11
kevincennis 8 hours ago 5 replies      
This runs at about 7 frames per second in Chrome on my 10-month-old 13" Macbook Pro at work.

Are people with better graphics cards seeing 60 (or even 30) fps? I'd love to be able to see this in all its glory.
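For anyone who wants to measure rather than guess, here is a generic FPS counter you can paste into the console (a sketch; nothing in it is specific to this demo):

    // Logs frames per second once a second via requestAnimationFrame.
    (function () {
      var frames = 0, last = performance.now();
      function tick(now) {
        frames++;
        if (now - last >= 1000) {
          console.log(frames + ' fps');
          frames = 0;
          last = now;
        }
        requestAnimationFrame(tick);
      }
      requestAnimationFrame(tick);
    })();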

12
krelian 8 hours ago 2 replies      
Where would one start if they wanted to learn the math needed to achieve something like this?
13
iguana 8 hours ago 0 replies      
Awesome demo, and a great way to turn your laptop into a heater. Still, performed quite well on my 15" retina mbp.
14
blahbl4hblahtoo 2 hours ago 0 replies      
You know... IE11 does support WebGL... just saying. (I don't think it checked...)
15
niels_olson 3 hours ago 0 replies      
That may be the only wave in SoCal today. But can I surf it? :)
16
skylervm 6 hours ago 1 reply      
This is a really awesome demo. Great work.

I'd love to see it with different ocean floors to be able to see how waves break in different locations based on certain conditions. Someone please make this happen! :D

17
nawitus 9 hours ago 0 replies      
Crashed my browser (using Firefox 25 on Arch Linux). Maybe it would've worked, but I only waited for 25 seconds.
18
nitrogen 6 hours ago 0 replies      
I'd like to listen to the bottom edge of the simulated region played back as a waveform as the simulation progresses.
19
izietto 3 hours ago 0 replies      
Very realistic; with foam it would be perfect.
20
prembharath 4 hours ago 0 replies      
I am surprised at how smoothly this runs even on lower-end PCs. I was able to view it perfectly smoothly on an old dual-core Linux box with integrated graphics and 2GB of RAM.
21
wamatt 6 hours ago 0 replies      
Might be system dependent, though I couldn't help but notice a non-trivial difference in OpenGL rendering quality between Firefox and Chrome.

Chrome 32 beta on OS X produced an anti-aliased canvas, whereas Firefox 25 had the dreaded jaggies @ 1680x1050.

22
meatsock 9 hours ago 0 replies      
excellent work, thanks for sharing. my waviness simulation produced more literal results [1], so i'm glad to have code to study for improvements.

[1]: https://www.youtube.com/watch?v=EnG6I1nsHy4

23
colszowka 7 hours ago 0 replies      
Hitting Ctrl-+/Ctrl-- in Chrome leads to interesting results :) Impressive demo; I kept staring at it for a while, pondering the exciting future the web platform has in it.
24
IvanK_net 6 hours ago 0 replies      
Too bad they are using the "OES_texture_float" extension :( It would be more interesting to see it done with pure WebGL.
25
brokenparser 8 hours ago 1 reply      
Error on line 36: A body start tag seen but an element of the same type was already open.
26
zobzu 8 hours ago 0 replies      
Redundant but I just want to say it: this is well done:)
27
rocLv 1 hour ago 0 replies      
Which browsers can display this?
28
nickthemagicman 9 hours ago 0 replies      
That is really cool.

Is that some sort of fluctuating Perlin noise?

29
julien421 7 hours ago 0 replies      
That's super cool!
30
adamwong246 9 hours ago 0 replies      
sheesh, all these great blogs... Mine looks like it was made by a middle schooler.
31
circa 5 hours ago 0 replies      
wow this is great!
32
shobhitverma 5 hours ago 0 replies      
Love it!
33
scrdhrt 8 hours ago 0 replies      
Really cool!
34
jheriko 6 hours ago 1 reply      
computers have now become so powerful that this stuff is easy. you can implement it in a way which, aside from platform, is really quite naive and wasteful - and still get applause.

most programmers can come up with a much better solution to this problem if removed from google and forbidden access to gpu gems.

this is at least well presented though...

it's a shame the code has been posted. whilst i normally assume that demos like this are unlikely to be smart or impressive these days - this time i know for sure. it's actually a good deal worse than i ever would have imagined.

i'm still quite torn whether all this horsepower is a good thing or not.... on the one hand we get a demo like this without much in the way of understanding or resourcefulness. on the other hand we have hundreds of man hours being wasted at dev studios because clever efficiency is rapidly becoming a thing of the past...

8
Jury: Newegg infringes Spangenberg patent, must pay $2.3 million arstechnica.com
483 points by lukeholder  21 hours ago   262 comments top 58
1
gkoberger 21 hours ago 3 replies      
The patent involved basic traffic encryption (SSL or TLS combined with the RC4 cipher), and the company already has made $45 million off it.

Here's some more info on the company they lost to:

http://www.techdirt.com/articles/20121109/02321120982/meet-p...

TL;DR: They're patent trolls.

2
sytelus 19 hours ago 11 replies      
Article starts with the most important two words:

MARSHALL, TX

From Wikipedia:

Marshall has a reputation for plaintiff-friendly juries for the 5% of patent lawsuits that reach trial, resulting in 78% plaintiff wins.

I've stopped being surprised by any patent suit where the troll gloriously wins and the decision comes from a court in Marshall. This town's economy probably runs on the lawsuits that trolls bring in, and jury members from the town seem to have a special incentive to favor plaintiffs almost 4 out of 5 times!

3
bradleyjg 20 hours ago 2 replies      
The original Anglo-Saxon juries were chosen from among local people because they knew the witnesses who would be testifying. The idea was that they could judge the credibility of the witnesses based on their direct experience with them.

Then later on, after the idea of an unbiased jury took hold, there arose a justifying theory that the jury could tell by careful observation whether or not a witness was telling the truth. This theory is dubious enough when applied to simple questions of outright lying. When it comes to judging expert witnesses' testimony, it is totally bogus.

If they don't want to create a patent office court to adjudicate these cases, at the very least Congress should authorize the appointment of special masters to do fact finding in patent cases.

4
droithomme 20 hours ago 2 replies      
It says this patent covers using SSL with RC4. SSL dates back to Netscape and was released in 1995, and no one involved in it has anything to do with the patent holder here. RC4 was designed in 1987 by Ron Rivest, who also has nothing to do with this case.

Someone named Michael Jones patented using SSL with RC4 - which, it seems, was a known and used combination at the time he did so, as the expert witness testified. But the jury didn't think that was relevant.

The patent would seem to be avoidable by, say, using AES instead.

Caution: I don't know what I am talking about and just looked the above up on wikipedia, which I probably misunderstood. Hopefully someone who understands this in more depth will post.
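That said, if avoiding RC4 is the workaround, it is at least cheap in practice. For example, Node.js lets a TLS server specify an OpenSSL-style cipher list (a sketch: the cipher string is illustrative, not a hardening recommendation, and the certificate paths are hypothetical):

    // Sketch: a Node.js TLS server that excludes RC4 suites entirely.
    var tls = require('tls');
    var fs = require('fs');

    var server = tls.createServer({
      key: fs.readFileSync('server-key.pem'),   // hypothetical paths
      cert: fs.readFileSync('server-cert.pem'),
      ciphers: 'AES128-SHA:AES256-SHA:!RC4'     // AES suites, no RC4
    }, function (socket) {
      socket.end('hello over non-RC4 TLS\n');
    });

    server.listen(8443);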

5
zellyn 20 hours ago 3 replies      
Perhaps someone who knows more about the legal system can help me understand something: at this point, it's well-known that these areas of Texas are good for patent disputes: they receive national attention, and a massive influx of spending as companies travel there to fight court battles.

Given that, surely any jury made up of locals has a huge incentive not to kill the golden goose and deter patent trolling by letting defendants win. Is there not a conflict of interest here?

6
taspeotis 21 hours ago 1 reply      
Optimism please, they've beaten patents on appeal before:

> "We're certainly very disappointed," said Cheng. "We respectfully disagree with the verdict that the jury reached tonight. We fully intend, as we did in the Soverain case, to take this case up on appeal and vindicate our rights."

> Soverain was the "shopping cart" patent that Newegg was ordered to pay $2.5 million for, but the company then knocked it out on appeal. Soverain's damage request was huge for Newegg: $34 million.

7
lifeisstillgood 13 hours ago 1 reply      

  "We've heard a good bit in this courtroom about public   key encryption," said Albright. "Are you familiar with   that?"  "Yes, I am," said Diffie, in what surely qualified as the   biggest understatement of the trial.  "And how is it that you're familiar with public key   encryption?"  "I invented it."
I think I see the trailer for the TV Mini series right there :-)

8
SwellJoe 21 hours ago 3 replies      
Disgusting. This is one of the reasons Texas has such a bad reputation: Ignorant juries and judges in the pocket of patent trolls.
9
saryant 20 hours ago 6 replies      
To those patting themselves on the back as we begin another round of Texas bashing (because, let's face it, for a number of people here that's their favorite part of patent stories), may I ask why you blame an entire state?

When things go wrong in California or New York or Massachusetts, those states aren't blamed: the individuals take the heat! (What a concept!) But whenever something bad happens in Texas, somehow all 26 million of us are involved and culpable.

Case in point: a few minutes ago there was a post here saying we should poison the water in East Texas to stop this. Thankfully, it has been deleted.

Battling bigotry with bigotry is not likely to work. When Hollywood pushes for another batch of draconian copyright laws, no one here raises their hands and hopes for the "big one" to knock LA into the ocean. When municipalities go after Uber or AirBnb, no one begs to push that entire state out of the union. Why the double standard?

(I know why, no need to answer that question)

Certainly, as a Texan and tech person, I'm not a fan of this ruling, but the vitriol displayed here towards an entire state verges on disgusting. FWIW, I grew up in the Bay Area and across California; I'm not some Pineywoods hick who never left the trailer park.

I am, however, quite tired of the hatred and, frankly, gleeful malevolence sometimes displayed on this site towards Texas.

10
natch 20 hours ago 5 replies      
Bring a California company in front of a Texas jury, and call wild-haired Stanford visiting professor Diffie as a witness? I'm afraid this says more about the bigotry of the jury members toward Californians than it does about the case. My hunch is they (wrongly in so many ways) thought they would teach the hippie a lesson.
11
meritt 20 hours ago 0 replies      
"Why Patent Trolls Worldwide Love Marshall, Texas" -- http://www.techdirt.com/articles/20060203/0332207.shtml

Nearly 8 years ago. Unfortunately nothing whatsoever has changed.

12
brianobush 20 hours ago 2 replies      
Makes me wonder why we allow juries to hear patent disputes. Do they really understand patents and their terse wording? I would rather have a jury of individuals that would be able to read, code and test the same concepts that they are deciding on.
13
batgaijin 20 hours ago 1 reply      
Doesn't this just motivate startups to incorporate somewhere where software patents aren't enforced, like New Zealand?

http://www.cnn.com/2013/10/01/business/10-best-places-to-sta...

I mean at the end of the day this lack of timely reform is fundamentally making people look for asymmetric ways to entirely avoid problems. Is that the way society should be driven? I think that is an unstable driver of future events --- a society that cannot reform itself in a timely manner, that cannot properly forecast events and repercussions, is a society that is forgetting its responsibility for balancing itself.

I really do not like this behavior; it is abhorrent of a society that can be a seer. I mean there is the usual belief that we are all equal and deserve equality --- but that cannot happen as long as we inherit citizenship, wealth and networks. It is a nice belief but simply cannot be rendered in any sort of predictable manner.

This creates a situation. There are private discussions on the ongoing nature of patents --- but I feel that more than anything people are forgetting that, as the point of a corporation is its superhuman predictable nature, the further antagonization of new corporations will balance itself not with a mutated form of socialism but with an asymmetric alliance of corporations - one which favors unpredictability and an increased rate of change.

Wealth and the rate of innovation are separate --- and that fiction will reveal itself at a much faster rate if proper steps are not taken in a timely manner.

14
linuxhansl 20 hours ago 1 reply      
> "I feel fortunate to live in a country with a judicial system like this where a jury can decide these things," [Jones] said.

Of course he does. It's the very judicial system that presented him with an easy $45m. He is a parasite (quite literally) and he knows it.

15
micahgoulart 20 hours ago 3 replies      
I think Newegg was complacent, perhaps a bit cocky, bringing in the expert on encryption, pandering to the jury and going through a humorous exchange on his knowledge of it, thinking they had it in the bag after the shopping cart win.

And then the defense surprisingly declined at the end to rebut the damages claim of $5.1 million:

"Then came another stunner: Newegg rested its case. It did so without putting on its expert witness to rebut TQP's $5.1 million damage claimeven though documents in the court docket clearly indicate the company had such a witness."

[1] http://arstechnica.com/tech-policy/2013/11/newegg-trial-cryp...

16
davidw 20 hours ago 0 replies      
Depressing, but this might be a good time to donate to those fighting for patent reform, like EFF.org
17
defen 20 hours ago 0 replies      
It's getting late and I don't really have the energy to dig into this patent (http://www.google.com/patents/US5412730), but on a cursory reading I don't see how it differs from the encryption work that Claude Shannon and Alan Turing were doing during World War 2, later embodied as http://en.wikipedia.org/wiki/SIGSALY ... is it just because it transmits the data in "blocks"? Pretty low bar for novelty, there.
18
pbreit 20 hours ago 0 replies      
Kudos to NewEgg for taking on the immense risk of fighting these.
19
Osiris 19 hours ago 1 reply      
Why are patent disputes resolved by laymen (juries) rather than experts in law and the area of study of the patent itself? I just don't see how people completely unfamiliar with the subject matter at hand can be expected to understand the highly technical arguments necessary to determine patent validity.
20
josscrowcroft 17 hours ago 2 replies      
How can your average startup and company owner rest assured that they're not unconsciously walking into a patent troll's lair? I might be unknowingly infringing patents nobody has heard of (except companies like these).

Business idea: a service that investigates your stack (with your permission) and verifies that you're not likely to be sued.

21
adamnemecek 21 hours ago 0 replies      
That's terrible but I feel like they will win once they appeal the decision.
22
CamperBob2 6 hours ago 0 replies      
I'm beginning to think that a sniper rifle is the best answer to these trolls.

Props to Newegg for fighting the good fight.

23
shmerl 20 hours ago 1 reply      
Is the jury required to explain the logic of its decision, or is it simply "we decide this"? Rejection of such obvious proof of patent invalidity and the existence of prior art looks pretty bad.
24
scotth 21 hours ago 0 replies      
To put it bluntly, that fucking sucks.
25
mattlutze 17 hours ago 1 reply      
A few have commented on how, internally to the IP law industry, the district is known to have a lot of specific domain experience in arguing IP law. I can definitely understand why that would be attractive to patent-holding firms, in the way that Delaware is attractive to large corporations.

Most of us not in the IP industry think a lot of these suits are ridiculous, and it's because we don't make our living by the reality of how IP law is structured.

These cases are ridiculous because IP law is ridiculous. It's not Marshall, TX's fault that IP law is ridiculous, and these juries very well may be the most knowledgeable jurors out there. That fact is dangerous, however, because this town's specialized experience makes it as if these companies are arguing cases in front of a jury of paralegals instead of representatives of the public, which absolutely will bias results.

Part of the reason we have juries is to balance the law with common sense. Common sense means something different when you're almost as knowledgeable about the law as the lawyers in front of you.

26
excitom 5 hours ago 0 replies      
As a person with a computer science degree and years of experience in the software industry, I would love to be chosen for a jury like this so I could do my part in smacking down a troll. However I realize this will never happen precisely because I actually know something about the subject. Sadly, jurors are chosen for their ignorance and gullibility.
27
mynegation 11 hours ago 1 reply      
Stupid question to those who know US law: theoretically, would it be possible for Newegg to refuse to do business with residents of Texas, so that it would be impossible for other companies to sue them in Texas?

To other commentators: no offence meant to the people of Texas; if that is how it works, it is just a cold-blooded business decision, nothing more.

28
awwducks 20 hours ago 2 replies      
Here's coverage from the local paper.

http://www.marshallnewsmessenger.com/news/online-retailer-ne...

Seems to be more of a TQP slant to it.

29
consultant23522 11 hours ago 1 reply      
Yesterday I was reading an article about this case that stated Newegg didn't even bother to call their witnesses to dispute the amount of damages that would've been caused by their patent violation. It gave the impression that they were so confident that they had roundly destroyed the plaintiff's arguments that they didn't even bother to follow standard operating procedure for how to fight these types of cases.

On one hand, it's yet another nail in the coffin of innovation in our country. On the other hand, shame on Newegg's lawyers for such hubris.

30
ytNumbers 14 hours ago 1 reply      
Since we can't seem to stop these horrible patent trolls, perhaps they could be reined in with a law that limits how much damage they can do. If a new law limited these sorts of claims to a grand total of 10% of a company's gross sales for the year, a small business could survive these kinds of attacks without having to pay millions of dollars for lawyers. If a business was attacked by multiple patent trolls, then those trolls would have to fight each other in court as they each argued that they deserve the lion's share of the capped 10% of the company's gross sales for the year. Since our current patent system is being thoroughly abused, the question we should be asking is: What can be done to limit patent trolls so that small businesses can survive? Because right now it is very lucrative to be a patent troll, so in the future, we could wind up with a ton of them. It's not a bright future.
31
arbuge 13 hours ago 0 replies      
"I feel fortunate to live in a country with a judicial system like this where a jury can decide these things"

I still feel fortunate to live in this country but the dysfunctional patent system has nothing to do with it.

The status quo is this: When you receive a letter from a patent troll, you're already out at least $50k or so, possibly several $100k or even more if you decide to fight on longer. You can receive such a letter simply for scanning and printing a pdf file, or operating a shopping cart on your site.

This situation must be fixed.

32
RexRollman 15 hours ago 0 replies      
This is sad but not surprising. The game is stacked against the accused (they either pay a lot to defend themselves or pay in an effort to get the case to go away) and then they have to deal with venue shopping (Texas).
33
ck2 18 hours ago 1 reply      
I'm curious if these juries have any college education.
34
caycep 7 hours ago 0 replies      
Were they expecting to set up the criteria/grounds for an appeal? If Newegg accomplished that, then they hopefully met their objective. I think the goal is to get it out of the Texas court and into a venue that is more objective.
35
eliben 13 hours ago 1 reply      
Isn't $2.3m negligible given the size of the companies involved, and even compared to the legal expenses for this case?
36
garthdog 19 hours ago 0 replies      
Can we put up billboards in East Texas letting potential jurors know what the stakes for the country are?
37
ChikkaChiChi 11 hours ago 1 reply      
Marshall, Texas; population 24,751.

Definitely small enough for the entire town to know and understand that voting in favor of a plaintiff today brings more money to your town tomorrow.

38
zacinbusiness 20 hours ago 1 reply      
I've never had the pleasure of serving on a jury. How does it work? I have been under the impression that all jury members must agree on a single verdict. I know for a fact that I would not agree with this verdict, and I can't rationalize finding against Newegg here. Can someone who can see the other side (whether or not you agree) explain it here?
39
bane 12 hours ago 0 replies      
Just in time for me to throw some money at Newegg while I build out a new computer.
40
venomsnake 20 hours ago 0 replies      
Well... expect some bad stuff to happen. Like a company's subsidiary shell company developing all business-method IP overseas and "selling" it to the parent; when a lawsuit emerges, the shell will have just enough funds to mount a defense and then blow like a fuse. That way shell companies will fight shell companies in court... the fun.
41
droopybuns 19 hours ago 3 replies      
Uhhh.... RC4 + SSL = broken cryptosystem, right?

We're not bummed about additional incentives to avoid this broken approach to TLS, are we? This is actually a fucking good thing.

42
CalRobert 16 hours ago 0 replies      
There is no way in hell I'd start a company in the US. This BS is absolutely ridiculous. I can't imagine the thought of constantly living in fear I'd get sued for millions because I was using a fax machine, or some other ridiculously common piece of technology.

It doesn't help that juries are apparently the dumbest people on earth.

43
beaker52 17 hours ago 0 replies      
Imagine where we could be if everyone openly shared everything they discovered or invented. Just imagine a world where we co-operated instead of making life harder for one another.
44
dec0dedab0de 9 hours ago 0 replies      
I planned on shopping at Newegg this week anyway, but now maybe a bit more than normal.
45
mkramlich 13 hours ago 0 replies      
I stopped reading a few pages in when it kept talking about meta/social/gossip stuff rather than what the case was actually about, what the patents were about. Low S/N.
46
betterunix 14 hours ago 0 replies      
Great, another patent on abstract math upheld by our court system...
47
rwbt 21 hours ago 1 reply      
Does the jury give a detailed explanation beyond the judgement? Also how come all these patent trolls are incorporated in Texas?
48
smegel 20 hours ago 0 replies      
Why can't the world be just, I don't know, more decent and reasonable and just?
49
codygman 18 hours ago 1 reply      
Can we troll the patent trolls yet?
50
bovermyer 8 hours ago 0 replies      
So... when are we going to abolish patents?
51
dreamdu5t 7 hours ago 0 replies      
I'm confused by people who hold the position that these cases are "patent trolls" while simultaneously supporting patents and copyright. This verdict is perfectly consistent with the reasoning and intention behind copyright and patent law.
52
pezh0re 13 hours ago 0 replies      
It's cases like this that make me really miss Groklaw.
53
loganfsmyth 21 hours ago 1 reply      
We'll have to hope it goes better on appeal. I would be very interested to hear the jurors' reasoning.
54
kunai 20 hours ago 2 replies      
Does the jury for a highly specialized patent case exist and function in largely the same way as a jury for other trials? Namely, "peers" instead of educated individuals on the particular topic at hand?

If that's the case, the United States needs some serious judicial reform.

55
charlysisto 19 hours ago 0 replies      
Sadly, one phrase comes to mind: the new mafia.
56
Fishrock123 21 hours ago 0 replies      
Bullcrap.
57
wnevets 21 hours ago 0 replies      
what a disgrace
58
voltagex_ 21 hours ago 0 replies      
Oh. Crap.
9
How Money Moves Around The Banking System gendal.wordpress.com
478 points by BitcoinNews_io  1 day ago   60 comments top 15
1
jackgavigan 1 day ago 2 replies      
No mention of cross-currency payments/settlement which, given how Bitcoin is currently being used, is more relevant than bank transfers within the same country.

An overview of CLS is a good starting point: http://www.snb.ch/en/mmr/reference/continuous_linked_settlem...

2
brotchie 1 day ago 1 reply      
Interesting how payments work in different countries. For example, in Australia when dealing in AUD, banks can participate in end-of-day multilateral netting. That is, all the banks' inflows and outflows to all other banks are netted and sent to the Reserve Bank of Australia (RBA).

The RBA is the banks' bank which holds bank's AUD denominated deposits in their separate accounts. Once the bank-to-bank net amounts have been calculated the RBA shuffles around a few numbers in databases to execute all the interbank transfers.

It sounds like in the USA banks have to have accounts with each other in order to effect transfers. The RBA effectively adds a layer of indirection and in some ways ensures banks are meeting their capital requirements.
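For illustration, the netting step itself is tiny (a toy sketch; the bank names and amounts are made up):

    // Toy multilateral netting: many gross payments collapse into one
    // net position per bank, settled at the central bank at end of day.
    function netPositions(payments) {
      var positions = {}; // bank -> net position (+ receives, - pays)
      payments.forEach(function (p) {
        positions[p.from] = (positions[p.from] || 0) - p.amount;
        positions[p.to] = (positions[p.to] || 0) + p.amount;
      });
      return positions;
    }

    console.log(netPositions([
      { from: 'ANZ', to: 'Westpac', amount: 100 },
      { from: 'Westpac', to: 'ANZ', amount: 80 },
      { from: 'NAB', to: 'ANZ', amount: 50 }
    ]));
    // => { ANZ: 30, Westpac: 20, NAB: -50 } (positions always sum to 0)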

Edit: I didn't read far enough. The UK has an impressive implementation of a Real-Time Gross Settlement system. My thoughts on the US interbank transfer system were coloured by a recent NPR Planet Money podcast (http://www.npr.org/blogs/money/2013/10/04/229224964/episode-...) where they compare the US and UK systems.

3
znowi 1 day ago 6 replies      
Bitcoin much more closely resembles a one-bank operation where accounts are settled immediately and at zero cost - not an expensive, interbank RTGS system.

I would also rather ask not how Bitcoin can squeeze into the banking system jigsaw puzzle, but how this puzzle can be optimized to be as effective as Bitcoin.

4
Patient0 1 day ago 5 replies      
What would also be interesting would be an explanation of how this then ties in to the "headline" short term interest rate that the Fed, ECB, Bank of England etc. set.

All the ground work has been laid - and I think many people would be interested to know the mechanics behind what happens when the Bank of England "cuts rates" or "raises rates".

The only other place I've seen this explained well is at the start of the book "Pricing Money" by J.D.A Wiseman - but that's not available online.

5
atmosx 19 hours ago 0 replies      
Using today's technology, SWIFT (or any other online messaging system for that matter) does not have to be expensive.

I have not studied the banking system extensively, but I figured out right away that the central bank was the missing part of the post. How did I do it? Because the central bank should do exactly that: CONTROL THE BANKS IN ORDER TO KEEP THEM FROM GOING BUST without anyone noticing (rings a bell?). Now, the fact that they NEVER do is a nice topic for discussion - why exactly do we pay them? Just to guess what the right monetary policy for the next 6 months will be?

This post explains why sending money might be expensive from a bank's perspective. But what it really shows is that it would not have to be if everyone were doing their job right (central bank included) and were held accountable when shit hits the fan.

As of today, exchanging BTC (an asset anyway) for currency is expensive and not straightforward. How exactly is someone going to exchange $150m of BTC into USD/EUR without getting noticed? In what amount of time???

BTC is not optimal for these kinds of transactions. An inflationary e-currency, widely accepted and easily exchangeable, would be one hell of an option, but good luck persuading people to use it if you're not a government.

6
romain_g 1 day ago 1 reply      
Interesting ! An excellent introduction is also available as a coursera class (by Perry Mehrling from Columbia U) at https://www.coursera.org/course/money and https://www.coursera.org/course/money2 !
7
mattchamb 17 hours ago 0 replies      
Very interesting article. I work in writing some financial software (consumer facing) and I have never actually read about how the banking system works. Just the other day I was writing some code to make SWIFT transfers.
8
ohwp 1 day ago 1 reply      
"in my expecience, almost nobody actually understands how payment systems work"

This was my experience as well. I can't understand why everybody fell for the hype. Loads of money are transferred every hour.

And we don't know anything about the $150m Bitcoin transaction. Maybe the owner just moved it to one of his other wallets.

9
Havoc 1 day ago 0 replies      
To quote Liar's Poker: "How does money move around the world...any which way it likes".
10
optymizer 1 day ago 1 reply      
This was a puzzling diagram: http://gendal.files.wordpress.com/2013/11/single-bank-settle...

Why not put Alice and Bob side by side?

11
amiune 1 day ago 0 replies      
Related course on Coursera https://class.coursera.org/money-001/class It changed my vision about the complexity of the banking system.
12
ncourage 22 hours ago 0 replies      
This was a fantastic read, and very educational. I was most surprised to see it put as "we are lending money to our banks." I thought there might be a mention of the FDIC but didn't see any.
13
shreyas056 1 day ago 1 reply      
I doubt that Bitcoin really resembles RTGS; for one thing, there is no central agency involved. Or maybe it does, if you consider the distributed network of nodes doing "proof of work" for your bitcoin transaction to be a central bank.
14
seanhandley 1 day ago 2 replies      
Well worth watching the videos on http://www.positivemoney.org
15
guyinblackshirt 1 day ago 0 replies      
no mention of DTC/Gray screen?
10
End the N.S.A. Dragnet, Now nytimes.com
479 points by jedwhite  16 hours ago   158 comments top 20
1
dalek_cannes 13 hours ago 10 replies      
Surveillance.

It always starts with a desire to be safe. And that comes from fear. It seems Americans today are afraid of more things than ever: pedophiles, guns, terrorists, lawsuits. Some news reports are ridiculous by foreign standards: teachers not being allowed to shake hands with students out of fear of sexual harassment allegations, boys suspended from school for drawing guns, bystanders not administering first-aid to accident victims out of fear of lawsuits, and of course the terrorism hysteria for which I have no words. I'm fortunate enough to have visited the US and have met mostly great people, but going by news reports the entire society seems paralyzed by fear.

I always thought of freedom as inversely proportional to safety. If you want to be perfectly safe, you'll never leave your house in case you catch a germ, get in a car accident or even slip on a banana peel. You'll never eat store bought food without first running it through a spectrometer. You'll want everything controlled, predictable, seen ahead of time so that nothing unexpected gets thrown your way.

I guess this is what surveillance is trying to do. Rather than accepting a level of risk as the price for being free and handling disasters when they do occur, we seem to be increasingly trying to avoid danger at all costs. And the cost seems to be freedom.

It's almost as if the author of the US national anthem knew this when he ended it with "land of the free and the home of the brave" (correct me if I got that wrong). Maybe he knew you couldn't have one without the other. I guess the brave isn't home anymore...

/disjointed philosophical rant

2
spodek 14 hours ago 3 replies      
Senators have to go to the press to try to stop the government from doing something that clearly breaks the law -- the Constitution, no less -- uses up billions of taxpayer funds, and undermines American business for no clear benefit.

Conventional wisdom says the Cold War was between the doctrines of Capitalism and Communism and that the doctrine of Capitalism won.

It doesn't look like that view was right.

The doctrine of the KGB and Stasi is winning over both of them.

3
Lagged2Death 12 hours ago 2 replies      
Our first priority is to keep Americans safe from the threat of terrorism.

It's like a feller can't even write a serious editorial in support of American liberty without kowtowing to irrational fear-mongering anymore.

The battle to keep us jumping at shadows has been won so conclusively that no one even bothers to stand up and say anything like:

You are safe. Your family is safe. You are safer now than you would have been at nearly any other time in American history. Your children will probably view these years of The Terrorist Menace in much the same way we view McCarthyism and the excesses of J. Edgar Hoover - a humiliating betrayal of everything that was supposed to make America different from the rest of the world.

There is nothing patriotic about being afraid all the time.

But that won't sell, and the Senators who wrote this know it. I don't fault their judgement, but it makes me really sad.

4
geuis 9 hours ago 1 reply      
I disagree with this quote, "There is no question that our nations intelligence professionals are dedicated, patriotic men and women who make real sacrifices to help keep our country safe and free."

I argue that the weight of evidence says the opposite. First the "there is no question" bit is wrong, because clearly there is a huge question. Further, people that want power tend to be attracted to positions that give it to them. We see this in things like police, lawyers, and politicians.

Note that I do not mention the military, because in the US the military is largely about subservience and not about control.

There is also little evidence suggesting that the men and women working for the NSA are patriotic. I argue that they are not. Patriotism involves holding up the rights of citizens as defined by the Constitution, especially against those who would change or remove these rights. Further, patriotism involves defining new law, as needed, explicitly in the spirit of the Constitution. Under this definition, it is very unclear that the people working at the NSA have been remotely patriotic. Quite the opposite, in my view.

Last, I believe we are fundamentally less free and less safe now than we were 13 years ago. The erosion of freedom and safety is often a very gradual process. When I say we, I do not refer to We as in The United States. I refer to all the people living under it, both citizens and non-citizens alike.

I have to be more cautious of what I say at 33 than I did at 21. I seriously consider alternatives to flying during the holidays because "safety" has become a physical impediment to travel. I have to think twice about what I should pack in my luggage, for the certainty that someone will search my belongings.

When I see police, I do not feel safe. I get more nervous and afraid. These are people walking around with weapons who can hurt, imprison, and murder people almost at will and we as citizens have almost no recourse to defend ourselves without being further harassed and harangued.

That is not how someone should view their police departments. Yet I do, because in my short life I hear more about police brutality than stories of police helping people. My own experiences were particularly forged by being arrested at a peaceful protest (FTAA) and trying to watch the inauguration parade in DC in 2005. I stopped respecting police officers a long time ago, though I view them as a necessary evil.

So to wrap up: we are less free and less safe now than before, the people working for the NSA are working towards their own ends or the ends of people wanting power, and there is nothing patriotic going on. We are in a pot being slowly boiled.

5
nateabele 13 hours ago 1 reply      
> "Our first priority is to keep Americans safe [...]

And herein lies the problem. Their job is not to keep us safe, it's to keep us free.

(I'm aware others in the thread have pointed this out, but less directly).

6
dpweb 13 hours ago 2 replies      
"Our first priority is to keep Americans safe from the threat of terrorism." There's the problem. Even by those protesting sweeping NSA collection, we're obsessed with this.

When the Axis powers threatened to plunge the world into 1000 years of darkness, the only thing we had to fear was fear itself; now that's not good enough - we must fear the unending threat of terrorism. Letting the NSA run wild is a logical result of this mentality.

7
monsterix 14 hours ago 4 replies      
And what about the remaining 7 billion law-abiding souls on the planet? I resonate with the intent of this article, but how can protecting only Americans' privacy be the concern of this voice in the NYT? I believe this approach is not only insular (apart from being stupid) but also destined to fail.

If I were to run a dragnet, I'd accept protecting the interests and privacy of all Americans back home, but strike a deal with GCHQ or some other government agency and provide them with all the tools and tech to snoop on my fellow citizens. No legal hassles, no constitutional violation. Cost? Well, that could be worked out, given the advantage the data gives me in remaining in power.

8
znowi 12 hours ago 0 replies      
> "Severing ties with the NSA" started off with a NSA penalty but was so hugely popular it still got the #1 spot. However, it was quickly given an even bigger penalty, forcing it down the page. [1]

Supposedly, "N.S.A." will not trigger HN's keyword penalty :)

[1] http://www.righto.com/2013/11/how-hacker-news-ranking-really...

9
petejansson 12 hours ago 0 replies      
What this editorial gets right is that the oversight regime for domestic surveillance is inadequate. What it misses, however, is that big piles of data are inevitable with the current trajectory of technology. It will not be possible to have the piles of data not collected. As others have written, the government surveillance agencies essentially saw what private industry was doing and said "I want a copy of that."

I think we need to rethink some things:

1. In the short term, one of the biggest changes that has to be addressed is the current court doctrine that privacy has not been violated if no people are actually looking at the data. Given that much of the surveillance is directed by automation, we need to recast that doctrine to include some of the automated analysis of the data. It's a thorny question, and one that will take some time and effort to get right, but there's no time like now to start.

2. We need more forceful and more transparent oversight of surveillance. There is a risk that the surveilled might change their tactics based on lessons from oversight reporting, but it seems clear at this point that the trade off is necessary. To quote the editorial: "The usefulness of the bulk collection program has been greatly exaggerated. We have yet to see any proof that it provides real, unique value in protecting national security." Trade-offs are only worth making if you get something. Time to revisit the trade-off.

3. We need to address both the big piles of data in the government's hands and those in private hands. This is going to require rethinking ownership of the data, and probably moving the US more towards an EU-style privacy directive. Again, a longer process, but one that needs to start now.

4. As a country, we need to start toward a more rational view of terrorism risk. Plenty has been written about how disproportional our response has been. Time to rebalance the scales.

In the end, we're going to continue to have big piles of surveillance data as long as we continue our technology trajectory. We need to start figuring out how to work with it, rather than try to stop it.

10
001sky 14 hours ago 4 replies      
Ron Wyden of Oregon, Mark Udall of Colorado and Martin Heinrich of New Mexico, all Democrats, are United States senators.

== Why isn't this addressed to Dianne Feinstein?

11
pilker09 14 hours ago 2 replies      
> "Our first priority is to keep Americans safe from the threat of terrorism."

Uh, no. The first job of a good, decent government is protect the rights of its citizens. It now seems that the first job of a citizen is protect him/herself from the government.

12
segmondy 13 hours ago 0 replies      
Ending the NSA Dragnet is not the solution. The solution is to assume that there will always be a rogue agency wanting to spy and to come up with solutions that make it hard or impossible.
13
AsymetricCom 14 hours ago 4 replies      
The thing about the NSA dragnet is that if the NSA weren't doing it, then corporations would be (and are) doing it. You can't stop technology from moving forward. Someone is going to be sniffing your packets from now until the end of time.
14
grej 8 hours ago 0 replies      
I think the quote from William Pitt is appropriate:

"Necessity is the plea for every infringement of human freedom. It is the argument of tyrants; it is the creed of slaves."

15
balabaster 11 hours ago 0 replies      
Unfortunately, for this opinion to make it into the mainstream consciousness, it needs to be broadcast with the same gravity on Fox News and CNN, where the large majority of uninformed Americans are spoon-fed their beliefs and opinions.
16
logfromblammo 10 hours ago 0 replies      
This is a Murphy's Law matter. The disaster cannot be prevented until it is technically impossible for it to continue.

If legislation were to declare that the names and numbers used to identify a computer on a network could not be legally used to identify either the physical location of the computer or the human that might have been using it, I think it likely that the number of VPN access points and Tor exit nodes would increase wildly overnight.

End-to-end encryption of all electronic traffic, everywhere, is the only reasonable solution.

17
andyl 12 hours ago 0 replies      
Surveillance is a tool for oligarchs to control their citizens. The terrorist threat is theatric misdirection.
18
mrobot 4 hours ago 0 replies      
I'm glad these guys are here for us. I honestly can't believe some of the comments on the Times site.

My main issue is that this has not become a debate; it's still an order. And it's an order that violates our Fourth Amendment right. This right was part of the handshake for a new system, and it cannot be violated save for some rare situation we could all agree is reasonable.

No one should think this is reasonable... security is lax, control of the data is lax ("corporate store"? Are you kidding me?). The situation is flipped here. Without leaks, we would actually be suffering more. Security clearance is not protecting us; it's using and abusing us. It's being used to hide things that would harm us more if they were never leaked. And FISA courts are used to give us some illusion that rules will be followed, while having it waved in our faces that we're lucky to have them. This is crazy.

Try to accommodate any warrantless surveillance in the Fourth Amendment's text without creating either a comical contradiction that violates its entire spirit or removing it entirely. We know that being OK with these citizen data programs amounts to being OK with not having this right, but we're still talking about it. I want to keep my right. And since the amendment was added in response to writs of assistance - unchecked delegation of authority so scarily similar to this reasonable articulable suspicion thing we are seeing today in both this and Stop and Frisk - I think we'd all be better suited to start with our right and add any exceptions as needed, not have them added for us. I'm assigned a threat score even before I'm suspicious? To find out whether I'm suspicious? To then act on me because of this suspicion? All while making money off of me based on my actions? You want to buy my actions? OK, name a price, I'll consider it.

I don't want to start the privacy war this gang wants me to. I'd rather we follow the law and consider those who don't criminals. Privacy is a buffer against abuse, not a place to hide dirty secrets. We can't predict or even see or notice all of the horrible loss of self-control that might come about because of this collection. The chorus of "Nothing to Hide" in response rings eerie in my ears.

19
netman21 13 hours ago 0 replies      
So, no dragnet, but our first priority is terrorists.
20
gchokov 11 hours ago 0 replies      
Over? Well, okay, I trust you this time.
11
Walmart Node.js Memory Leak joyent.com
447 points by btmills  4 days ago   75 comments top 17
1
davidw 3 days ago 2 replies      
I looked at node.js for a system I'm involved with creating, but ultimately we went with Erlang just because it's been around a lot longer and is more stable in terms of things like this. We're working on a semi-embedded system that will not always be on-line or accessible for debugging. We also considered Go, which probably would have been more familiar to C++ guys, but it was also deemed a bit immature even if it seems like a very pleasant language to work with.

Cool writeup though!

2
diminoten 3 days ago 6 replies      
I'm actually looking into a segfault issue deep in the bowels of a C++ addon we have in node.js (anyone in #node.js will have seen me asking about it over the past few weeks), but what reading this makes me realize is how woefully underequipped I am to hunt for problems of this nature.

My problem is likely in one of our addons, but this kind of debugging, this whole genre of problem solving is entirely beyond me. How do I get to this level? What do I need to learn? To study?

It's just a little depressing to read something like this and see how far the road ahead goes, despite how far I've already traveled...

3
ambirex 4 days ago 0 replies      
Thank you, I really enjoy detailed write-ups like this. It is fascinating to see how an engineer approaches an elusive problem.
4
jzwinck 3 days ago 0 replies      
I'd like to read more about how we can prevent this class of error going forward. Could stronger typing or RAII or some other feature or trick have made the bug apparent at compile time?

I made a very basic Node.js module in C++ with V8 and it was surprisingly difficult to make a good (idiomatic JS behaviour, believably bug-free) wrapper for a straightforward class and factory method. I say this coming from Boost Python and Luabind, where there are some tricky parts to bind complex classes, but simple ones are easy enough, and once written, obviously correct.

5
city41 3 days ago 0 replies      
I've been running an extremely simple Node application on 0.10.18 for a while now and it has a very gradual memory leak. My code is just a few dozen lines, and it all seems pretty innocent. I am also using Hapi, so I thought maybe Hapi has a leak in it somewhere. Now I wonder if I have the same leak as Walmart here. I just now upgraded to 0.10.22 and am curious to see where I end up. If the leak goes away then hot damn, I got lucky :)
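
If anyone else wants to watch for the same pattern, here is the minimal logging I'd start with (a sketch in TypeScript against the plain Node API; the one-minute interval is an arbitrary choice):

  // Log memory stats once a minute. heapUsed climbing steadily across
  // many GC cycles, while traffic stays flat, is the usual leak signature.
  const MB = 1024 * 1024;
  setInterval(() => {
    const { rss, heapTotal, heapUsed } = process.memoryUsage();
    console.log(
      `rss=${(rss / MB).toFixed(1)}MB ` +
        `heapTotal=${(heapTotal / MB).toFixed(1)}MB ` +
        `heapUsed=${(heapUsed / MB).toFixed(1)}MB`
    );
  }, 60 * 1000);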
6
aaronbrethorst 3 days ago 0 replies      
Wonderful blog post; major props for the engineering time expenditure. But why do you have an Olark chat widget that says "Contact Sales"? I don't want to have anything to do with those schlubs! If anything, I want to talk to serious engineers like you!

Perhaps a better call to action would be:

* Talk to us about how we can solve your problems

* Chat with us

* We can help you too

* What's up?

7
ryanseys 3 days ago 1 reply      
And a one-line fix. Damn that must be satisfying.
8
charlieflowers 3 days ago 0 replies      
FYI, a typo -- "illusive" -> "elusive". (haven't read further yet, just wanted to let you know).
9
rcthompson 3 days ago 2 replies      
Ironically, this page hangs Chrome indefinitely when I try to load it. Luckily it only hangs the tab so I can still close it. I guess I'll fire up Firefox to see if I can actually read the article.

Edit: Actually, it loads fine in a private browsing tab, so it must be a bad interaction with some extension. Oh well.

10
patrickg_zill 3 days ago 0 replies      
That is pretty impressive - I love how they could use DTrace to scope out what was going on.
11
batbomb 4 days ago 1 reply      
Can anyone tell me if there is a reason for this in bash?

     DEST=~~/public/walmart.graphs

12
retr0h 3 days ago 0 replies      
I've always loved the debugging tools in Solaris (SmartOS or whatever it is now).
13
atomical 3 days ago 1 reply      
I assume that they can restart the server at intervals or use load balancing. A few months of developer time for something like this seems excessive unless he was working on something else as well.
14
joeblau 3 days ago 0 replies      
Excellent details on the sleuthing that went on to find this error. It's great that such powerful tools are available to debug errors like this, and your write-up helps me learn more about how to go about properly debugging my Node apps.
15
ilaksh 3 days ago 3 replies      
I think there are still quite a few C and C++ programmers out there. To me this is a great example of why it is better software engineering to write a server in something like Node.js. Because rather than having a million code bases with potential memory leaks like this one, there is just the Node code. In ordinary JavaScript code it's impossible to cause a problem like that.
16
jnazario 2 days ago 0 replies      
Cool writeup. While not a Node.js user, I love these sorts of tours of system internals - I always learn a lot, both about the specific tools and the processes of using them.

Thanks for the details, very articulate and useful stuff.

17
jokoon 3 days ago 0 replies      
We know that Node.js is a bad piece of software, you don't need to remind us about it all the time

(down vote me)

12
Machine learning is easier than it looks insideintercom.io
430 points by jasonwatkinspdx  6 days ago   165 comments top 41
1
xyzzyz 6 days ago 8 replies      
I'd like to chime in here as a mathematician.

Many people here express their feelings that math or computer science papers are very difficult to read. Some even suggest that they're deliberately written this way. The truth is that yes, they in fact are deliberately written this way, but the reason is actually the opposite of many HNers' impression: authors want to make the papers easier to understand, not more difficult.

Take for example a page from a paper that's linked in this article. Someone here on HN complains that the paper talks about "p being absolutely continuous with respect to the Lebesque measure on En", hundreds of subscripts and superscripts, and unintuitively named variables, and that this makes the paper very difficult to understand, especially without doing multiple passes.

For non-mathematicians, it's very easy to identify with this sentiment. After all, what does it even mean for a measure to be absolutely continuous with respect to Lebesgue measure? Some of these words, like "measure" or "continuous", make some intuitive sense, but how can a "measure" be "continuous" with respect to some other measure, and what the hell is Lebesgue measure anyway?

Now, if you're a mathematician, you know that Lebesgue measure in simple cases is just the natural notion of area or volume, but you also know that it's very useful to be able to measure much more complicated sets than just rectangles, polyhedra, balls, and other similarly regular shapes. You know the Greeks successfully approximated areas of curved shapes (like a disk) by polygons, so you try to define such a measure by inscribing or circumscribing nice, regular shapes for which the measure is easy to define, but you see it only works for very simple and regular shapes, and is very hard to work with in practice. You learned that Henri Lebesgue constructed a measure that assigns a volume to most sensible sets you can think of (indeed, it's hard to even come up with an example of a non-Lebesgue-measurable set), you've seen the construction of that measure, and you know that it's indeed a cunning and nontrivial piece of work. You also know that any measure on Euclidean space satisfying some natural conditions (like the measure of a rectangle with sides a and b being equal to the product ab, and that if you move a set around without changing its shape, its measure shouldn't change) must already be Lebesgue measure. You also worked a lot with Lebesgue measure, it being arguably the most important measure of them all. You have an intimate knowledge of Lebesgue measure. Thus, you see a reason to honor Lebesgue by naming the measure he constructed after him. Because of all of this, whenever you read or hear about Lebesgue measure, you know precisely what you're dealing with.

You know that a measure p is absolutely continuous with respect to q if whenever q(S) is zero for some set S, p(S) is also zero. You also know that if you tried to express the concept defined in the previous sentence without using names for the measures involved, or notation for the value a measure assigns to some set, the sentence would come out awkward and complicated, because you would have to say that a measure is absolutely continuous with respect to some other measure if, whenever that other measure assigns a zero value to some set, the value assigned to that set by the first measure must be zero as well. You also know that, since you're not a native English speaker (and I am not), your chances of making a grammatical error in a sentence riddled with prepositions and conjunctions are very high, and it would make this sentence even more awkward. Your programmer friend suggested that you should use more intuitive and expressive names for your objects, but p and q are just any measures, and apart from the property you're just now trying to define, they don't have any additional interesting properties that would help you find names more sensible than SomeMeasure and SomeOtherMeasure.

But you not only know the definition of absolute continuity of measures: in fact, if the definition was the only thing you knew about it, you'd have forgotten it long ago. You know that absolute continuity is important because of the Radon-Nikodym theorem, which states that if p is absolutely continuous with respect to q, then p(A) is in fact the integral over A of some function g with respect to the measure q (that is, p(A) = int_A g dq). You know that it's important, because it can help you reduce many questions about the measure p to questions about the behaviour of the function g with respect to the measure q (which in our machine learning case is a measure we know very, very well, the Lebesgue measure).
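
In symbols, those two facts are just the following (a plain LaTeX transcription of the formulas above, nothing new):

  % absolute continuity of p with respect to q
  p \ll q \iff \forall S:\ q(S) = 0 \Rightarrow p(S) = 0

  % Radon-Nikodym: an absolutely continuous p has a density g
  p \ll q \Rightarrow \exists g \ge 0 :\ p(A) = \int_A g \, dq \ \text{for all measurable } A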

You also know why the hell it's called absolutely continuous: if you think about it for a while, the function g we just mentioned is kind of like a derivative of the measure p with respect to the measure q, kind of like dp/dq. Now, if you write p(A) = int_A (dp/dq) dq = int_A p'(q) dq, even though none of the symbols dp/dq or p'(q) make sense, it seems to mean that p is an "integral of its derivative", and you recall that there's a class of real valued functions for which this is true as well: guess what, the class of absolutely continuous functions. If you think about these concepts even harder, you'll see that the latter concept is a special case of our absolutely continuous measures, so all of this makes perfect sense.

So anyway, you read that "p is absolutely continuous with respect to Lebesgue measure", and instantly tons of associations light up in your memory: you know what they are working with, and you have some idea why they might need it, because you remember making a similar assumption in a similar context to obtain some result (and as you read the paper further, you realize you were right). All of what you're reading makes perfect sense, because you are very familiar with the concepts the author introduces, with the methods of working with them, and with the known results about them. Every sentence you read is a clear consequence of the previous one. You feel you're home.

...

Now, in an alternate reality, a nonmathematician-you also tries to read the same paper. As the alternate-you hasn't spent months and years internalizing these concepts until they become vis second nature, ve has to look up every other word, digressing into Wikipedia to run a DFS for the connected component of concepts ve doesn't yet understand. You spend hours, and after them you feel you've learned nothing. You wonder if the mathematicians deliberately try to make everything complicated.

Then you read a blog post which expresses the idea behind this paper very clearly. Wow, you think, these asshole mathematicians really are trying to keep their knowledge in an ivory tower of obscurity. But, since you only made it through the first few paragraphs of the paper, you missed an intuitive explanation that's right there on the page of the paper reproduced by that blog post:

Stated informally, the k-means procedure consists of simply starting with k groups each of which consists of a single random point, and thereafter adding each new point to the group whose mean the new point is nearest. After a point is added to a group, the mean of that group is adjusted in order to take account of that new point

Hey, so there was an intuitive explanation in that paper after all! So, what was all that bullshit about measures and absolute continuity all about?

You try to implement an algorithm from the blog post, and, as you finish, one sentence from blog post catches your attention:

Repeat steps 3-4 until document assignments stop changing.

You wonder: when does that actually happen? How can you be sure the assignments will stop changing at all at some point? The blog post doesn't mention that. So you grab that paper again...

2
eof 6 days ago 7 replies      
I feel I'm in a somewhat unique position to talk about the easiness/hardness of machine learning; I've been working for several months on a project with a machine learning aspect with a well-cited, respected scientist in the field. But I effectively "can't do" machine learning myself. I'm a primarily 'self-trained' hacker; I started programming by writing 'proggies' for AOL in middle school in like 1996.

My math starts getting pretty shaky around Calculus; vector calculus is beyond me.

I did about half of the machine learning class from Coursera, Andrew Ng's. Machine learning is conceptually much simpler than one would guess, both gradient descent and the shallow neural-network type, and in fact it is actually pretty simple to get basic things to work.

I agree with the author that the notation, etc, can be quite intimidating vs what is "really going on".

However, applied machine learning is still friggin' hard, at least to me, and I consider myself a pretty decent programmer. Naive solutions are just unusable in almost any real application, and the author's use of loops and maps is great for teaching machine learning, but everything needs to be transformed into higher-level vector/matrix problems in order to be genuinely useful.

That isn't unattainable by any means, but the fact remains (imho) that without a strong base in vector calculus and the idiosyncratic techniques for transforming these problems into more efficient computations, usable machine learning is far from "easy".

3
hooande 6 days ago 3 replies      
Most machine learning concepts are very simple. I agree with the author that mathematical formulae can be unnecessarily confusing in many cases. A lot of the concepts can be expressed very clearly in code or plain english.

For example, a matrix factorization could be explained with two arrays, a and b, that represent objects in the prediction:

  for each example
    for each weight w
      prediction += a[w] x b[w]
    err = (prediction - actual_value)
    for each weight w
      a[w] += err x small_number
      b[w] += err x small_number
It's that simple. Multiply the weights of a by the weights of b, calculate error and adjust weights, repeat.

K-Nearest Neighbor/KMeans are based on an even simpler operation:

  dist = 0
  for each weight w: dist += (a[w] - b[w])**2
Then make predictions/build clusters based on the smallest aggregate distance.

There are more advanced concepts. There are some serious mathematics involved in some predictors. But the most basic elements of statistical prediction are dead simple for a trained programmer to understand. Given enough data, 80% solutions can easily be achieved with simple tools.

We should be spreading the word about the simplicity of fundamental prediction algorithms, not telling people that it's hard and a lot of math background is required. Machine learning is very powerful and can improve all of our lives, but only if there is enough data available. Since information tends to be unevenly distributed we need to get the tools into the hands of as many people as possible. It would be much better to focus on the concepts that everyone can understand instead of keeping statistics secrets behind the ivy clad walls of academia.
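
To make the first sketch concrete, here is a minimal runnable version in TypeScript. It uses the standard gradient form of the update, where each side's adjustment is also scaled by the other side's weight, and err is taken as actual minus prediction so the corrections are added; the names follow the pseudocode above:

  // One stochastic gradient step for a dot-product factorization.
  // `a` and `b` are the weight arrays from the pseudocode; `lr` plays
  // the role of the "small number".
  function sgdStep(a: number[], b: number[], actual: number, lr = 0.01): void {
    let prediction = 0;
    for (let w = 0; w < a.length; w++) {
      prediction += a[w] * b[w];
    }
    const err = actual - prediction;
    for (let w = 0; w < a.length; w++) {
      const oldA = a[w]; // keep the old value so b's update uses it
      a[w] += lr * err * b[w];
      b[w] += lr * err * oldA;
    }
  }

Loop that over your examples enough times and the dot products drift toward the observed values.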

4
munificent 6 days ago 4 replies      
This was a great post because I've heard of "k-means" but assumed it required more math than my idle curiosity would be willing to handle. I love algorithms, though, and now I feel like I have a handle on this. That's awesome!

However, the higher level point of the post "ML is easy!" seems more than a little disingenuous. Knowing next to nothing about machine learning, obvious questions still come to mind:

Since you start with random points, are you guaranteed to reach a global maximum? Can it get stuck?

How do you know how many clusters you want? How do I pick K?

This assumes that distance in the vector space strongly correlates to "similarity" in the thing I'm trying to understand. How do I know my vector model actually does that? (For example, how does the author know "has some word" is a useful metric for measuring post similarity?)

I like what I got out of the post a lot, but the "this is easy" part only seems easy because it swept the hard part under the rug.

5
syllogism 5 days ago 0 replies      
I write academic papers, and I've started writing blog posts about them, and I think this post doesn't cover one of the main reasons that academic papers are less accessible to non-specialists.

When you write an academic paper, it's basically a diff on previous work. It's one of the most important considerations when the paper first comes out. The reviewers and the people up-to-the-minute with the literature need to see which bit is specifically new.

But to understand your algorithm from scratch, someone needs to go back and read the previous four or five papers --- and probably follow false leads, along the way!

It's another reason why academic code is often pretty bad. You really, really should write your system to first replicate the previous result, and then write your changes in on top of it, with a _bare minimum_ of branching logic, controlled by a flag, so that the same runtime can produce both results. And you should be able to look at each point where you branch on that flag, and check that your improvements are only exactly what you say they are.

When you start from scratch and implement a good bang-for-buck idea, yes, you can get a very simple implementation with very good results. I wrote a blog post explaining a 200-line POS tagger that's about as good as any around.[1] Non-experts would usually not predict that the code could be so simple, from the original paper, Collins (2002).[2]

I've got a follow-up blog post coming that describes a pretty good parser that comes in at under 500 lines, and performs about as accurately as the Stanford parser. The paper I wrote this year, which adds 0.2% to its accuracy, barely covers the main algorithm --- that's all background. Neither does the paper before me, released late last year, which adds about 2%. Nor the paper before that, which describes the features...etc.

When you put it together and chop out the false-starts, okay, it's simple. But it took a lot of people a lot of years to come up with those 500 lines of Python...And they're almost certainly on the way towards a local maximum! The way forward will probably involve one of the many other methods discussed along the way, which don't help this particular system.

[1] http://honnibal.wordpress.com/2013/09/11/a-good-part-of-spee...

[2] http://acl.ldc.upenn.edu/W/W02/W02-1001.pdf

6
j2kun 6 days ago 0 replies      
The author clearly didn't read the page of the math paper he posted in trying to argue his point. It says, and I quote:

Stated informally, the k-means procedure consists of simply starting with k groups each of which consists of a single random point, and thereafter adding each new point to the group whose mean the new point is nearest.

Admittedly, it's not the prettiest English sentence ever written, but it's just as plain and simply stated as the author of this article.

The paper itself is interested in proving asymptotic guarantees for the algorithm (which the author of the article seems to ignore completely, as if they were not part of machine learning at all). Of course you need mathematics for that. If you go down further in the paper, the author reverts to a simple English explanation of the various parameters of the algorithm and how they affect the quality of the output.

So basically the author is cherry-picking his evidence and not even doing a very good job of it.

7
Daishiman 6 days ago 0 replies      
It's easy until you have to start adjusting parameters, understand the results meaningfully, and tune the algorithms for actual "Big Data". Try doing most statistical analysis with dense matrices and watch your app go out of memory in two seconds.

It's great that we can stand on the shoulders of giants, but having a certain understanding of what these algorithms are doing is critical for choosing them and the parameters in question.

Also, K-means is relatively easy to understand intuitively. Try doing that with Latent Dirichlet Allocation, Pachinko Allocation, etc. Even Principal Component Analysis and Linear Least Squares have some nontrivial properties that need to be understood.

8
myth_drannon 6 days ago 2 replies      
On Kaggle: "The top 21 performers all have an M.S. or higher: 9 have Ph.D.s and several have multiple degrees (including one member who has two Ph.D.s)."

http://plotting-success.softwareadvice.com/who-are-the-kaggl...

9
tptacek 6 days ago 7 replies      
Is k-means really what people are doing in serious production machine-learning settings? In a previous job, we did k-means clustering to identify groups of similar hosts on networks; we didn't call it "machine learning", but rather just "statistical clustering". I had always assumed the anomaly models we worked with were far simpler than what machine learning systems do; they seemed unworthy even of the term "mathematical models".
10
kephra 6 days ago 0 replies      
The question "do I need hard math for ML" often comes up in #machinelearning at irc.freenode.net

My point here is: you don't need hard math (most of the time) because most machine learning methods are already coded in half a dozen different languages. So it's similar to FFT. You do not need to understand why FFT works, just when and how to apply it.

The typical machine learning workflow is: Data mining -> feature extraction -> applying a ML method.

I often joke that I'm using Weka as a hammer, to check if I managed to shape the problem into a nail. Now the critical part is feature extraction. Once this is done right, most methods show more or less good results. Just pick the one that best fits your result, time and memory constraints. You might need to recode the method from Java to C to speed it up, or to embed it. But this requires nearly no math skills, just code reading, writing and testing skills.
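
For text, that feature extraction step can be as small as this (a sketch in TypeScript, assuming binary bag-of-words features; a real pipeline would at least add tf-idf weighting and stop-word removal):

  // Turn documents into binary word-presence vectors over a shared
  // vocabulary. Tokenization is deliberately naive (lowercase a-z runs).
  function bagOfWords(docs: string[]): { vocab: string[]; vectors: number[][] } {
    const tokenize = (d: string) => d.toLowerCase().match(/[a-z]+/g) ?? [];
    const vocab = [...new Set(docs.flatMap(tokenize))];
    const vectors = docs.map(d => {
      const words = new Set(tokenize(d));
      return vocab.map(w => (words.has(w) ? 1 : 0));
    });
    return { vocab, vectors };
  }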

11
tlarkworthy 6 days ago 2 replies      
I find newbs in ML don't appreciate cross validation. That's the one main trick: keep some data out of the learning process to test an approach's ability on data it has not seen. With this one trick you can determine which algorithm is best, and the parameters. Advanced stuff like Bayes means you don't need it, but for your own sanity you should still always cross validate. Machine learning is about generalisation to unseen examples; cross validation is the metric to test this. Machine learning is cross validation.
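
The holdout version is tiny. A sketch in TypeScript, where fit and score are placeholders for whatever learner and metric you are evaluating (assumed interfaces, not a real library):

  // Hold out a fraction of the data, train on the rest, score on the
  // unseen part. Generic in the example type T and the model type M.
  function holdoutScore<T, M>(
    data: T[],
    fit: (train: T[]) => M,
    score: (model: M, test: T[]) => number,
    testFraction = 0.2
  ): number {
    const shuffled = [...data];
    for (let i = shuffled.length - 1; i > 0; i--) {
      // Fisher-Yates shuffle so the split is unbiased
      const j = Math.floor(Math.random() * (i + 1));
      [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
    }
    const nTest = Math.max(1, Math.floor(shuffled.length * testFraction));
    return score(fit(shuffled.slice(nTest)), shuffled.slice(0, nTest));
  }

Run it several times with different splits (or do a proper k-fold) and you get an error bar on the score instead of a single number.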
12
sieisteinmodel 6 days ago 1 reply      
Also: aerodynamics is not really hard, anyone can fold paper planes! Or: programming 3D games is easy, just build new levels for an old game! Or: I don't know what I am doing here, but look, this photoshop effect looks really cool on my holiday photos!

etc.

Seriously: The writer would not be able to write anything about K-Means if not for people looking at it from a mathematical view point. This angle is of tremendous importance if you want to know how your algorithm behaves in corner cases.

This does not suffice if you have an actual application (e.g. a recommendation or a hand tracking or an object recognition engine). These need to work as well as you can make them, because every improvement will result in $$$.

13
Ihmahr 6 days ago 0 replies      
As a graduate in artificial intelligence and machine learning I can tell you that machine learning IS hard.

Sure, the basic concepts are easy to understand. Sure, you can hack together a program that performs quite well on some tasks. But there are so many (interesting) problems that are not at all easy to solve or understand.

Like structural engineering it is easy to understand the concepts, and it is even easy to build a pillow fort in the living room, but it is not easy to build an actual bridge that is light, strong, etc.

14
upquark 6 days ago 0 replies      
Math is essential for this field, anyone who tells you otherwise doesn't know what they are talking about. You can hack together something quick and dirty without understanding the underlying math, and you certainly can use existing libraries and tools to do some basic stuff, but you won't get very far.

Machine learning is easy only if you know your linear algebra, calculus, probability and stats, etc. I think this classic paper is a good way to test if you have the right math background to dive deeper into the field: http://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pd...

15
pallandt 6 days ago 0 replies      
It's actually incredibly hard, especially if you want to achieve better results than with a current 'gold standard' technique/algorithm, applied on your particular problem.

While the article doesn't have this title (why would you even choose one with such a high bias?), I presume the submitter decided upon this title after being encouraged by this claim from the article's author: 'This data indicates that the skills necessary to be a data wizard can be learned in disciplines other than computer sciences and mathematics.'.

This is a half-baked conclusion. I'd reason most Kaggle participants are, first of all, machine learning fans, either professionals or 'amateurs' with no formal qualifications who have studied it as a hobby. I doubt people with a degree in cognitive sciences or otherwise in the 'other' categories mentioned in the article learned enough just through their university studies to readily be able to jump into machine learning.

16
amit_m 6 days ago 1 reply      
tl;dr: (1) Author does not understand the role of research papers (2) Claims mathematical notation is more complicated than code and (3) Thinks ML is easy because you can code the wrong algorithm in 40 lines of code.

I will reply to each of these points:

1. Research papers are meant to be read by researchers who are interested in advancing the state of the art. They are usually pretty bad introductory texts.

In particular, mathematical details regarding whether or not the space is closed, complete, convex, etc. are usually both irrelevant and incomprehensible to a practitioner but are essential to the inner workings of the mathematical proofs.

Practitioners who want to apply the classic algorithms should seek a good book, a wikipedia article, blog post or survey paper. Just about anything OTHER than a research paper would be more helpful.

2. Mathematical notation is difficult if you cannot read it, just like any programming language. Try learning to parse it! It's not that hard, really.

In cases where there is an equivalent piece of code implementing some computation, the mathematical notation is usually much shorter.

3. k-means is very simple, but it's the wrong approach to this type of problem. There's an entire field called "recommender systems" with algorithms that would do a much better job here. Some of them are pretty simple too!

17
apu 6 days ago 0 replies      
For those wanting to get started (or further) in machine learning, I highly recommend the article, "A Few Useful Things to Know About Machine Learning," by Pedro Domingos (a well respected ML researcher): http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf. It's written in a very accessible style (almost no math); contains a wealth of practical information that everyone in the field "knows", but no one ever bothered to write down in one place, until now; and suggests the best approaches to use for a variety of common problems.

As someone who uses machine learning heavily in my own research, a lot of this seemed like "common sense" to me when I read it, but on reflection I realized that this is precisely the stuff that is most valuable and hardest to find in existing papers and blog posts.

18
misiti3780 6 days ago 1 reply      
I disagree with this article, although I did find it interesting. Replace k-means with a supervised learning algorithm like an SVM, and use some more complicated features other than binary and this article would be a lot different.

Also - maybe "article recommendation" is "easy" in this context, but other areas such as computer vision and sentiment analysis are not. Some other questions I might ask:

How do you know how well this algorithm is performing?

How are you going to compare this model to other models? Which metrics will you use? What statistical tests would you use and why?

What assumptions are you making here? How do you know you can make them and why?

There are a lot of things that this article fails to address.

Disclaimer: I realize more complex models + features don't always lead to better performance, but you need to know how to verify that to be sure.

19
panarky 6 days ago 1 reply      
Sure, some ML concepts are intuitive and accessible without advanced math.

But it would help to highlight some of the fundamental challenges of a simplistic approach.

For example, how is the author computing the distance between points in n-dimensional space?

And does this mean that a one-paragraph post and a ten-paragraph post on the same topic probably wouldn't be clustered together?

20
rdtsc 6 days ago 0 replies      
A lot of concepts are easier when you know how they work.

CPUs were magical for me before I took a computer architecture course. So was AI and machine learning. Once you see the "trick" so to speak you lose some of the initial awe.

21
pyduan 6 days ago 1 reply      
As someone who works in machine learning, I have mixed feelings about this article. While encouraging people to start learning about ML by demystifying it is a great thing, this article comes off as slightly cocky and dangerous. Programmers who believe they understand ML while only having a simplistic view of it risk not only creating less-than-optimal algorithms, but downright dangerous models: http://static.squarespace.com/static/5150aec6e4b0e340ec52710...

In the context of fraud detection (one of the main areas I work in these days), a model that is right for the wrong reasons might lead to catastrophic losses when the underlying assumption that made the results valid suddenly ceases to be true.

Aside from the fact that the techniques he mentioned are some of the simplest in machine learning (and are hardly those that would immediately come to mind when I think "machine learning"), the top comment on the article is spot on:

> "The academic papers are introducing new algorithms and proving properties about them, youre applying the result. Youre standing on giants shoulders and thinking its easy to see as far as they do."

While understanding how the algorithm works is of course important (and I do agree that they are often more readable when translated to code), understanding why (and when) they work is equally important. Does each K-Means iteration always reach a stable configuration? When can you expect it to converge fast? How do you choose the number of clusters, and how does this affect convergence speed? Does the way you initialize your centroids have a significant effect on the outcome? If yes, which initializations tend to work better in which situations?

These are all questions I might ask in an interview, but more importantly, being able to answer these is often the difference between blindly applying a technique and applying it intelligently. Even for "simple" algorithms such as K-Means, implementing them is often only the tip of the iceberg.

22
ronaldx 6 days ago 1 reply      
I'm cynical about how machine learning of this type might be used in practice and this is an illustration of why: the stated goal is a "you might also like" section.

There is no reason to believe the results are any better than a random method in respect of the goal (and it's reasonable to believe they may be worse) - we would have to measure this separately by clickthrough rate or user satisfaction survey, perhaps.

I believe you would get far better results by always posting the three most popular articles. If you want to personalise, post personally-unread articles. A lot less technical work, a lot less on-the-fly calculation, a lot more effective. The machine learning tools do not fit the goal.

The most effective real example of a "you might also like" section is the Mail Online's Sidebar of Shame. As best as I can tell, they display their popular articles in a fixed order.

Machine Learning seems to make it easy to answer the wrong question.

23
mrcactu5 6 days ago 0 replies      
The equations look fine to me - I was a math major in college. Honestly, I get so tired of humanities people -- or programmers -- bragging about how much they hate math.

Except:

https://gist.github.com/benmcredmond/0dec520b6ab2ce7c59d5#fi...

I didn't know k-means clustering was that simple. I am taking notes...

  * pick two centers at random
  run 15 times:
    * for each post, find the closest center
    * take the average point of your two clusters as your new center
This is cool. It is 2-means clustering and we can extend it to 5 or 13...

We don't need any more math, as long as we don't ask whether this algorithm converges or how quickly.
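
For the curious, here is the whole procedure generalized to k centers, as a sketch in TypeScript (numeric feature vectors, a fixed iteration count instead of a convergence test, and a crude random choice of starting centers):

  type Point = number[];

  function dist2(a: Point, b: Point): number {
    return a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0);
  }

  function kMeans(points: Point[], k: number, iters = 15): Point[] {
    // crude random pick of k starting centers (assumes points.length >= k)
    let centers = [...points].sort(() => Math.random() - 0.5).slice(0, k);
    for (let it = 0; it < iters; it++) {
      // assign every point to its nearest center
      const clusters: Point[][] = Array.from({ length: k }, () => []);
      for (const p of points) {
        let best = 0;
        for (let c = 1; c < k; c++) {
          if (dist2(p, centers[c]) < dist2(p, centers[best])) best = c;
        }
        clusters[best].push(p);
      }
      // move each center to the average of its cluster
      centers = clusters.map((cluster, c) =>
        cluster.length === 0
          ? centers[c] // an empty cluster keeps its old center
          : centers[c].map((_, d) => cluster.reduce((s, p) => s + p[d], 0) / cluster.length)
      );
    }
    return centers;
  }

If you instead stop when the assignments stop changing, the loop always terminates: each step can only decrease the total within-cluster squared distance, and there are only finitely many possible assignments. Whether it stops at the best clustering is another question -- it can get stuck in a local optimum, which is exactly the kind of thing the math in the paper is there to address.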

24
aidos 6 days ago 1 reply      
Most of the comments on here are from people in the field of ML saying "this is a toy example, ML is hard."

Maybe that's the case. And maybe the title of the submission ruffled some feathers, but the thrust of it is that ML is approachable. I'm sure there's devil in the detail, but it's nice for people who are unfamiliar with a subject to see it presented in a way that's more familiar to them with their current background.

I have a university background in Maths and Comp Sci so I'm not scared of code or mathematical notation. Maybe if I'd read the comments on here I'd get the sense that ML is too vast and difficult to pick up. I'm doing Andrew Ng's Coursera course at the moment and so far it's all been very easy to understand. I'm sure it gets harder (I even hope so) and maybe I'll never get to the point where I'm an expert at it, but it would be nicer to see more of a nurturing environment on here instead of the knee-jerk reactions this seems to have inspired.

25
agibsonccc 6 days ago 0 replies      
Wait till you have to hand-craft your algorithms because the off-the-shelf ones are too slow ;). In the end you can stand on the shoulders of giants all day, but until you actually sit down and write an SVM or even something more cutting edge like stacked deep autoencoders yourself, machine learning isn't "easy".

In the end, libs are there for simpler use cases or educational purposes. Realistically, that's more than good enough for 90% of people.

That being said, it's not impossible to learn. Oversimplifying the statistics, tuning, and work that goes in to these algorithms you're using though? Not a good idea.

26
outworlder 6 days ago 1 reply      
I don't get all negative comments.

From my limited text comprehension abilities, the author did not say that the whole field is trivial and that we should sack all academics.

Instead, the argument is that basic Machine Learning techniques are easy and one shouldn't be afraid of applying them.

27
BjoernKW 6 days ago 0 replies      
The fundamentals of linear algebra and statistics are indeed quite easy to understand. Common concepts and algorithms such as cosine similarity and k-means are very straightforward.

Seemingly arcane mathematical notation is what frightens off beginners in many cases, though. Once you've understood that - for instance - a sum symbol actually is nothing but a loop, many things become a lot easier.

However, the devil's in the details. Many edge cases and advanced methods of machine learning are really hard to understand. Moreover, when 'good enough' isn't just good enough any more things tend to become very complex very quickly.

28
samspenc 6 days ago 0 replies      
Upvoted this for an interesting read, but I agree with the sentiments in the comments that (1) ML is in general hard (2) some parts of ML are not that hard, but are likely the minority (3) we are standing on the shoulders of giants, who did the hard work.
29
hokkos 6 days ago 0 replies      
Matrix multiplication, orthonormal bases, triangular matrices, gradient descent, integrals, Lebesgue measure, convexity, and the mathematical notation in the paper are not harder than the code shown here. It is better to have solid proof that what you are doing is sound and will converge before jumping into the code.
30
pesenti 6 days ago 0 replies      
Of the two methods described - search vs. clustering - the first one - simpler and not involving ML - is better for this use case. The only reason it seems to give worse results is because it's only used with the titles and not the full body (unlike the clustering approach). So I guess machine learning is easier to mis-use than it looks...
31
gms 6 days ago 0 replies      
The difficult aspects take centre stage when things go wrong.
32
Irishsteve 6 days ago 0 replies      
The post does do a good job of showing how easy it is to implement knn.

The post doesn't really go into centroid selection or evaluation, or the fact that clustering on text is going to be painful once you move to a larger dataset.

33
danialtz 6 days ago 0 replies      
I recently read a book called "Data Smart" [1], where the author does k-means and prediction algorithms literally in Excel. This was quite eye-opening, as it shows the path into ML is not so enigmatic. However, translating your data into a format/model you can run ML on is another challenge.

[1] http://www.amazon.com/Data-Smart-Science-Transform-Informati...

34
adammil 6 days ago 5 replies      
It is nice to read about this in plain language. But, can someone explain what the X and Y axis are meant to represent in the graph?
35
Toenex 5 days ago 0 replies      
I think this is one of the reasons why it should become standard practice to provide code implementations of described algorithms. It not only provides an executable demonstration of the algorithm but, as importantly, an alternative description that may be more accessible to other audiences. It can also be used as confirmation that what is intended is indeed what is being done.
36
mau 6 days ago 0 replies      
tldr: the ML algorithms look hard reading the papers, while the code looks simpler and shorter, also you can get pretty decent results in a few lines of R/Python/Ruby so ML is not that complex.

I disagree in so many ways:

1. Complex algorithms are usually very short in practice (e.g. Dijkstra's shortest path or edit distance are the first that come to mind).

2. ML is not just applying ML algorithms: you have to evaluate your results, experiment with features, visualize data, think about what you can exploit and discover patterns that can improve your models.

3. If you know the properties of the algorithms you are using, then you can have insights that might help you improve your results drastically. It's very easy to apply the right algorithms with the wrong normalizations and still get decent results in some tests.

37
Rickasaurus 6 days ago 0 replies      
It may be easier to do than it looks, but it's also harder to do well.
38
kamilafsar 5 days ago 0 replies      
A while back I implemented k-means in JavaScript. It's a really simple, straightforward algorithm which makes sense to me, as a visual thinker and a non-mathematician. Check out the implementation here:

https://github.com/kamilafsar/k-means-visualizer/blob/master...

39
m_ke 6 days ago 0 replies      
This is as valid as someone stating that computer science is easy because they know HTML.
40
dweinus 6 days ago 0 replies      
They should try using tf-idf to create the initial representation of the keywords per post... Also, I find there are many cases where applying machine learning/statistics correctly is harder than it looks, this single case notwithstanding.
41
fexl 6 days ago 1 reply      
I like the simple explanation of K-Means, and I like the contrast with the dense set-theoretic language -- a prime example of "mathematosis" as W.V.O. Quine put it.
13
Magnus Carlsen is World Chess Champion fide.com
416 points by jordanmessina  4 days ago   251 comments top 25
1
kadabra9 4 days ago 6 replies      
Reading more about this match and Magnus in general, I learned of a measure termed "nettlesomeness", which has been used to measure which players do the most to make their opponents make mistakes. Magnus, with his highly creative style of play and unexpected moves, not surprisingly ranks the highest in this measure.

He seems to have this remarkable gift of making moves which aren't just strong, they get inside his opponent's head and cause them to either overthink/break down. I'm interested in the technical details behind this metric. Has anyone heard of it before?

Regardless, congrats Magnus. You are truly a generational talent, and I'm excited to see what your win will do for the game.

http://marginalrevolution.com/marginalrevolution/2013/11/net...

2
realrocker 4 days ago 1 reply      
Congrats Magnus Carlsen! You finally unseated our beloved Viswanathan Anand and made the beautiful game even more beautiful.

Allow me to go on a tangent and tell my personal story with chess. I began playing at age 7 when my elder brother borrowed a chess board from a friend. It was a nice break from the physical altercations between us (read: mat fights). My maternal grandma called it "Satan's Game". And my mother toed the line. Why? I don't know the exact reason, but I guess it was an amazing time sink. Or maybe they both had watched this Hindi movie by Satyajit Ray: The Chess Players (http://www.imdb.com/title/tt0076696/). When the games between my brother and me became violent ("You moved it when I was off to the toilet...") it was banned from our home. But we didn't give up. Our summers were spent playing chess in a nearby mango orchard or the graveyard a mile away. The chess board made out of paper with plastic pieces was the only "toy" we never broke. Those were the best days of my life. And it's still safe 20 years later. With every piece intact. What a game.

3
sethbannon 4 days ago 3 replies      
I'm super excited to see the impact this will have on our noble game. I think it could see a real surge in popularity in the years ahead. And at the age of 22, Magnus is only just getting started.
4
anuragramdasan 4 days ago 0 replies      
60 Minutes from last year. Pretty cool stuff right here: http://www.youtube.com/watch?v=Qc_v9mTfhC8
5
girvo 4 days ago 1 reply      
After spending the last 18-months immersed in the professional StarCraft 2 scene, I can totally appreciate a lot of the meta-stuff around Chess now. I always enjoyed Chess, and was not too bad at it (compared to those around me, certainly nowhere near even an amateur-pro!), but for some reason SC2 "clicks" better for me (I think being addicted to Brood War while spending 6 months in South Korea probably has something to do with it).

The discussion of "mind games" ("nettlesomeness" here) is something that SC2 has an obsession with, and it certainly can play a massive part in pro tournaments, and I'd never considered it applying to chess... but now that I think about it, everything in SC2's meta really came from Chess to begin with, only applied in real-time with 300+ actions per minute and hundreds of pieces with few illegal moves. And yet I struggle more with grokking the advanced strategies of Chess than I do for StarCraft!

6
aaronetz 4 days ago 27 replies      
<blasphemy alert> Does anyone know some good alternatives to chess, as a game that mixes deep thought and aesthetic variety? I tried Go, but found it somewhat boring compared to chess, because of its uniformity (which, on the other hand, has the advantage of beautiful simplicity and symmetry.) On another note: it is unfortunate, in my opinion, that chess has a special standing among board games. I would love to see some more variety in world-class intellectual matches, similar to what exists in physical sports. Something like a "board game Olympics".

Edit: Thank you for all the useful replies! In reply to some of you, I am a complete beginner at Go. Maybe the word 'boring' was not carefully chosen. As a programmer, I should have known better - that things may seem boring (tiresome?) until you become more fluent with them. I should certainly give Go another shot...

7
McUsr 4 days ago 6 replies      
I am Norwegian and fucking proud of it right now, due to Magnus Carlsen.

He comes from a nation of 5 million people, compared to Anand's billion.

This is probably the greatest sports achievement our country will ever make, as there are really no comparable sports achievements in the world, not now, anyway.

IMHO: They should knight him the second he gets off the plane when he returns home. Because no other Norwegian has ever accomplished anything close to this with regards to bringing honour to our nation.

Congratulations, Magnus!

8
mattivc 4 days ago 0 replies      
It's quite fun to see the media attention he has gotten here in Norway. For the last few weeks the sports segment of most news shows has devoted as much time to chess as to football, which is not something I ever expected to see.

I'm not much of a chess player myself, but it's still very satisfying to see so much attention brought to an intellectual sport. I hope at least some of it will stick around.

9
pdknsk 4 days ago 0 replies      
I'm not a particularly good player, but the match was rather boring IMO, other than game 9, which Anand cut short with his blunder. I wonder if the dull first game, described by Anand as a "satisfactory draw with black pieces", set the tone for the remaining games.
10
ktd 4 days ago 3 replies      
This is actually a good example of why I'm not particularly interested in chess anymore-- a game that's that heavy on draws and where so many of the situations are adaptations of well-known positions simply isn't that thrilling. I really enjoyed chess when I was a kid, but the better I became and the more I learned about it the less I found it a compelling game.
11
3327 4 days ago 7 replies      
Chess is amazing. It blows my mind that simple tools and games like this are not incorporated in some 'fun' way into the education system. By 'fun' I mean that if children were simply told to play chess, they would not. A system would need to be designed so that they look forward to chess class as they do to PE and art.
12
jordanmessina 4 days ago 1 reply      
Press conference is live right now for anyone interested: http://chennai2013.fide.com/fide-world-chess-championship-20...
13
wavesounds 4 days ago 0 replies      
I wish they gave the girl announcer access to the laptop as well so she could describe what she's saying using the screen just like the guy can.
14
xfax 4 days ago 0 replies      
A well-deserved win. Can't wait to see what else Magnus goes on to accomplish.
15
JonFish85 4 days ago 0 replies      
But has he played Judah Friedlander?
16
eneveu 4 days ago 0 replies      
It seems like the official commentary is of sub-par quality. According to the /r/chess subreddit, this commentary is pretty good and fun: http://www.twitch.tv/chessnetwork/profile/pastBroadcasts
17
rikacomet 4 days ago 3 replies      
Now what the heck happened in the end? Anand gave up a knight advantage, purposely for a clear-cut draw. I have no clue why he did that this time.

He took the queen with his queen, clearly knowing it would be lost to the king, and then again the pawn with his knight. He had a knight, of all things!

18
KedarMhaswade 4 days ago 1 reply      
Brilliant! Magnanimous. I am a Vishy fan, but this match was really one-sided once Vishy faltered at critical moments. Does it mean age matters? Will Vishy rebound? I hope so, but perhaps it's the sad reality that I acknowledge -- the better player won, and the problem with the chess world (the number 1 Elo-rated player was not the WC for so long) got corrected.

Where do we go from here?

19
reidmain 4 days ago 4 replies      
As someone who played chess as a child but gave up after high school what are some apps that would get me back into the game?
20
oconnor0 4 days ago 0 replies      
Is the site down for anyone else?
21
fedvasu 4 days ago 0 replies      
Honest question: will chess now be a more fashionable game?
22
lukekarrys 4 days ago 0 replies      
You can watch IM Danny Rensch & GM Ben Finegold review the game right now on http://www.chess.com/tv
23
mmwanga 2 days ago 0 replies      
Move #14 (Nxc6) does not make sense. Why did Magnus do that?
24
RLC 4 days ago 2 replies      
Magnus seems more like a guy I could invite over for a couple of beers. No offense to Anand; he seems more like a KOOL-AID type of kid, always a boy scout but a douche!
25
RLC 4 days ago 3 replies      
Of course he won; the name alone speaks for itself: "Magnus!" Just fucking HUGE at anything you can think of! Compared to Viswanathan, which sounds like a vegetable ready to be consumed, or a rubbing oil, or even a dip for your prata.
14
Why open-office layouts are bad for employees, bosses, and productivity fastcompany.com
414 points by jtoeman  5 days ago   236 comments top 62
1
pvnick 5 days ago 12 replies      
The best environment I've ever worked in was a combination open office, private space hybrid. You had your desk, whether you wanted a sitting desk or standing desk, you could choose from either, and you were by default in the open office area. However, surrounding this large room were a dozen or so closed offices where you could pop in and have a meeting or do some coding in private.

However, one of the organize-all-the-things guys on the internal operations team once caught me in a coding marathon in one of those offices and sent an email to the entire company "reminding" everyone that those offices were for God-knows-what-he-thought-they-were-for, not for work. So I returned to my ergonomic island and toiled away, surrounded by the noise of a hundred private conversations.

I've always thought since then that if that had panned out -- being able to choose at any moment whether you wanted to be in the open room or in a private room on the perimeter -- it would have been the ideal layout.

2
pasbesoin 5 days ago 1 reply      
Perhaps the worst aspect of all this is the purposeful, or even casual, ideologue. An arrangement works for them, or they think it does -- or, BEST PRACTICES dictate that it should... and voila, a dictate.

I am one who needs some control over his environment. In the majority of cases, this means peace and quiet, particularly/mostly from human noise, as well as a lack of visual distraction. (Although there are times when I work well -- best, even -- in a frenetic environment; however, these are limited in both type and frequency.)

I'm a bit older, and I fell into a generation that was wholeheartedly subscribing to and prescribing the "open", "collaborative" environment.

It did not work for me. Yet I received unrelenting pressure, including from medical professionals, that I was the one who... "simply" needed to learn to adapt.

Well... now we know a bit better. (Although I don't trust society to have truly "learned" this in any permanent fashion.) But the chronic stress of this situation caused major adjustments in my career and, eventually, rather ran me down.

To put the bottom line at the bottom, here: If a situation is not working for you, IT IS NOT WORKING FOR YOU. TRUST THIS. TRUST YOURSELF!

"Professionals" of varying occupations and levels of training will all -- ALL -- tell you all kinds of crap. Even several years of medical school does not divorce most from their prejudices nor from cultural suasion.

Don't waste your time -- your life -- running yourself down trying to live up to someone else's idea of the "right way".

3
wbond 5 days ago 9 replies      
My company gives all engineers their own office with a door. Recently four of us petitioned to be able to have an open office together. We collaborate better, feel generally happier, and knowledge sharing happens so much more fluidly.

I was going crazy the first 6 months here because I was holed up in an office by myself with little in-person communication. There was no benefit to being in the office versus working remotely. My first attempt was to get the company a HipChat account for engineers to stay more connected. I even pushed for a couple of monthly engineer events so I would have an opportunity to interact with other engineers.

Open office setups can go horribly wrong. Never allow anyone who spends time on the phone into the open office setup. That stifles all interaction due to the need for silence. Additionally, engineers are forced to listen to a single side of a conversation that likely has nothing directly to do with the engineers. Project and account managers have a valuable job, and engineers should not need to be distracted by work that is not related to what they need to accomplish.

Additionally, I believe an open office for engineers should be reasonably small (4-10 people), and there should be some common responsibilities or projects between the engineers.

Other steps can be taken to give people the appropriate space for the task at hand. I've used a stand-up desk for the past three years. I hardly ever spend a whole day standing. I alternate between sitting and standing as my body gives me signals. Similarly, having quiet space (or headphones, if desired) to crank on certain work can be useful. That said, three of the four of us have not used solitary space in the past 2 months.

Basically all of this is to say the issue is not black and white. If you prefer to work in a private office, like more than half of the engineers at my company do, that's fine. If you prefer to work in the company of others, that is fine too. Not everyone wants to work at a startup, and not everyone hates working for big financial companies.

4
rayiner 5 days ago 1 reply      
I find it hilarious that a bunch of people who work on internet technologies apparently need so much face-to-face communication.

If you want my attention, send me an e-mail. Also: get off my lawn.

5
at-fates-hands 5 days ago 3 replies      
It's interesting to note most people don't know the history of the cubicle and why it was invented in the first place:

http://en.wikipedia.org/wiki/Cubicle

"The office cubicle was created by designer Robert Propst for Herman Miller, and released in 1967 under the name "Action Office II". Although cubicles are often seen as being symbolic of work in a modern office setting due to their uniformity and blandness, they afford the employee a greater degree of privacy and personalization than in previous work environments, which often consisted of desks lined up in rows within an open room.[1][2

Image of an office circa 1937: http://en.wikipedia.org/wiki/File:Photograph_of_the_Division...

I've never liked open office layouts anyway. The two companies I worked for used them, and they were tremendously noisy, so I usually did anything I could to avoid having to work in the office, either by going to the cafeteria to work or staying home. It made both of the teams I worked on very inefficient -- the exact opposite of the goal the layout was meant to achieve.

6
macspoofing 5 days ago 3 replies      
Heh. Open layouts were a response to the cubicle system which isolated people and gave the impression that you are nothing but cattle on an assembly line. It also reinforced status (size of cubicle/office/location). Just watch any 80s or 90s movie. Now the pendulum is swinging the other way. Have the original problems with cubicles been solved?

The problem is that people look for ideological purity and look to absolutes because an unambiguous answer seems simple, whereas the reality is quite grey. The reality is that some people work better in cubicles, and some prefer open layouts. To complicate things even further, some situations call for one, others call for the other.

I see a similar debate going on between proponents of traditional schools (rows of desks, and teacher in front) and structure-less/self-pacing schools. Which is better? Well, some kids thrive in one, others thrive in the other. Worse, some kids get absolutely destroyed within the wrong kind of system.

There are no simple answers.

7
raldi 5 days ago 3 replies      
They had open-office layouts 100 years ago, too. Back then, though, they called them sweatshops.
8
abalone 4 days ago 0 replies      
Cornell did a study of open-plan offices for software engineering awhile back. It's well worth a read if you're interested in this subject.

It's definitely not anti-open. They basically found that closed offices benefit individual engineers the most while open plans benefit the team. Interestingly, while noting the need for concentration, they note a whole bunch of ulterior careerist motives for developers wanting to work in private.

They found that the nature of communication was markedly different in each environment. Open was not only more frequent and immediate, it raised the bar for what was considered a frequent amount of team interaction, suggesting greater knowledge-share. The conversations were also shorter and subject to "cues" about whether it was a good time to interrupt someone. And the stronger social bonds encouraged more people to ask for help and bounce crazy ideas around.

They do note that it comes at the cost of distractions, and in the end they call for a balance.

http://iwsp.human.cornell.edu/file_uploads/office_ex2_123825...

9
shubb 5 days ago 6 replies      
In my open office, I currently code next to some project managers, who spend all day on the phone negotiating.

This is a bit bad, but I just wear PPE Ear Defenders all day, on top of in ear headphones. With both of these, I can't hear a thing.

The eerie quiet is great for short bursts of concentration, but it also means I can turn my music up to a normal level without worrying about escaping noise annoying my colleagues.

It looks very nerdy, and people need to email me or wave if they want something (which cuts down interruptions a lot). I take them off about half the time so as to be social, which I guess is like leaving an office door open.

Sort of sad it's necessary though. Hope this helps people with a similar situation.

Ear defenders, buy good ones -> http://goo.gl/NlgnPv

10
Macsenour 5 days ago 0 replies      
My last company visited a company with an open office and took pictures to prove to us how great it is. In the pictures, the people are hunched down behind their screens to avoid the distraction of the person facing them, and 90% have headphones on because of the noise distraction.

Basically, they were in mental cubes when they were lacking physical cubes.

P.S. The company I worked for went with the open office, productivity plummeted and the office is now closed. When I pointed out the above issues in the pictures I was told: "You don't like it? Maybe you need to work somewhere else". Well, now, they all work somewhere else.

11
resu_nimda 5 days ago 1 reply      
I sit in an office with desks with half-height dividers. I enjoy it. A while ago our company expanded into another floor, and my product's team was moved there (dev, QA, product, services, support). Previously the layout was arranged more by department than product.

Pretty much everyone on the team loves it, and has felt a major boost in productivity and team cohesion, as virtually anyone you might need is "right there" in the room with you, and you can tune in to some of the chatter for an organic understanding of what everyone's up to. I imagine if everyone were in offices it would feel dead and empty, and totally kill the team spirit.

I think the only thing we're missing is more ad-hoc space - more conference rooms for breakout groups and individuals seeking temporary escape from the floor.

12
wldlyinaccurate 5 days ago 3 replies      
I work in an open office with no dividers. Unfortunately for me I don't have selective hearing, so 95% of the time I'm trying to drown out the buzz by wearing over-ear headphones (usually with no music playing). I also spend a lot of time fending off product managers and testers who just refuse to acknowledge the headphone rule and constantly bug me about trivial things that can be put in an email or an IRC message.

The other 5% of the time is great - as other people have already mentioned, it's really easy to listen in to conversations and get an idea of what everybody is up to.

13
city41 5 days ago 2 replies      
I currently work in an open office and I really hate it. I've previously had jobs with cubicles and one job where everyone got their own full-fledged office. Of the three, I actually think cubicles are the best.

Everyone having their own private office was detrimental in the opposite way. Everyone was closed off and really inaccessible. Knocking on someone's door felt invasive and wrong, so people would avoid doing it.

Cubicles give everyone privacy and space, but not so much that it stops collaboration dead. The impediment to interruptions seems to be at just the right level.

I'm also interested in offices that have open collaborative spaces combined with private offices. I've never had that and I think it could be a good compromise too.

14
munificent 5 days ago 3 replies      
"Thats what work is: It is a vacillation between collaboration and solitary exploration."

It's weird that the author notes that, but then proposes that the solution to focusing on one half of the vacillation is to just focus on the other half instead. Surely the ideal is to support both.

If I could I would run an experiment like this:

1. Have a large number of small, quiet office-like spaces.
2. Have a big open plan area.
3. Have a fixed schedule during the day where, for a certain number of hours, everyone is required to be in the open plan area.

You can still hack there if you want, but you're expected to be there, and you understand that during that time you're free to interrupt and be interrupted.

The reason for making the open space mandatory is so that people actually go there. If it's optional, then it looks like people only go to the open spaces to not do "real" work. Since no one wants to be seen slacking, the open space just ends up unused.

15
nlh 5 days ago 4 replies      
This is great - in theory. Let me bring up something which the article brings up right away but none of the comments seem to discuss.

Look at it from the startup's side of things: the ideal office that we'd all love to work in - that perfect 4-6 person bullpen with private offices surrounding it (times however many 4-6 person teams you have) - is _expensive_. Very few companies can afford a build-out like this until much, much later in company life.

If we're talking about an Apple or Google, fine - let's have the debate. But for a vast majority of early-stage startups, this simply isn't a viable discussion to have. Office space is very limited in many parts of the big tech hubs, and often it's a matter of just getting an affordable space in the first place, much less being able to build out the perfect working environment. And the fact of the matter is, most spaces are open and filled with $200 IKEA tables because that's all the company can afford.

So I'm not sure what the answer is. On the one hand, you can say "well, budget more for office space", but we all know it's not that easy. It's not a small expense -- big buildouts for private offices cost tens of thousands of dollars (or more), precious capital for a small business.

16
vacri 5 days ago 1 reply      
An alternate story in favour of open-office layouts. Here in Aus, the Department of Human Services (DoHS; it has had many previous names) is responsible for welfare. The old offices were an arrangement with a counter - staff on one side, clients on the other. Aggressive incidents rose and the counters ended up having old-school bank bulletproof windows installed.

Some bright spark changed that - got rid of the counters, and made the offices all open-office plan. You wait off to the side, and when it's your turn for whatever, someone comes and fetches you to their desk in the open-office plan with some space between desks. Instead of shouting your personal issues across a counter, you could discuss it in a normal tone, and if it was private, you could be quieter or more subtle about the topic. Aggressive incidents dropped off a cliff - and there was much less of an 'us-versus-the-gummint' mentality seeded by the demarcation line of a [fortified] counter.

So in this particular use-case, an open-office layout was clearly superior for employees, bosses, productivity, and clients.

17
rhizome 5 days ago 3 replies      
How many more times is this "open plan is the best!" "open plan is terrible!" cycle going to continue to receive your clicks? This has been an ongoing topic literally all year! These sites are playing the community like a piano, and the comment threads all read exactly the same: anecdotes.

I'm guilty of participating, too, but no more. My assumption will now be that any article with a headline that presents an absolute for a subject that is a matter of preference is garbage. It's all part of growing up, I guess.

18
ChristianMarks 5 days ago 0 replies      
On my first day on one job, my managers invited me to lunch. I thanked one for assigning me a desk next to a corner in their open office. The other supervisor could not resist chiming in that they could move people around at will. The other manager averted his eyes. I never expressed gratitude for my working conditions again.

Headphones would be too distracting for me -- however, I am developing tinnitus, which has become a blessing in disguise. Although I find it difficult to listen to music now, I would rather listen to the ringing in my ears than office chatter.

19
rubiquity 5 days ago 2 replies      
It's all about balance. Open offices work for certain occupations but not for others. When it comes to software development I think you need a combination of open office and cube farm. The best balance I've found is open office with all communication happening in a place it can be persisted (Campfire, HipChat, etc.) for others to see and benefit from. Occasionally the entire team can break into talking in the open office area but this should only be done if the entire team is participating. If the entire team isn't participating then communication should be handled in a chat (preferably) or in a conference room.

If you're trying to build software in an open office where people are constantly talking then I'm sorry, good work will not get done. Decisions to change your office layout should be in the interest of boosting communication, team cohesion and productivity. Cubicles are too restrictive, completely open is too distracting.

20
maxk42 5 days ago 2 replies      
It may not be for everyone, but for people like me it really boosts productivity. The last office I worked in was a massive open-office in a warehouse which sounds just miserable, but it was great. If I ever had a question, I could just lean over and ask the person I had a question for. No waiting for emails to bounce back and forth or for people to get back to their IMs. If I needed to make a private phone call, I'd just walk out of the office to do it. Plus, having people around me made it easier to focus on work instead of fucking off on Hacker News or Facebook.

Now, as a self-employed individual I rent a seat in a shared open office to maintain that focus. It's far too easy to turn on the TV or play a video game or linger on the phone with a friend when I'm working from home. In a different setting -- with people around: all focused on work -- it's much easier to maintain a focus on work and getting done what's important.

21
awjr 5 days ago 1 reply      
It's a hard one to solve. In the company I work in I've sat in 3 different places as teams expanded. Given the density of employees you can achieve in an open plan office vs individual offices, it is hard to justify to an employer.

However, one thing we do allow: it is perfectly within your rights to work from home if you feel you have enough to get on with, and people do this often.

As for headphones, we have a golden rule: if they are on, the building had better be on fire if you disturb somebody. Not quite a sackable offence, but damn close. :)

I've also found that sites like www.coffitivity.com offer a 'break' from the music. They can kill any background conversation distraction. ANY. Investigate white noise.

As to socialising, jokey things still get passed around. We're encouraged to use IM, and we also go in groups to the coffee machine which is kept in a cafeteria area, away from workers where you can chat freely and loudly.

I personally hate open plan offices, but in my 20+ years of working, I've only worked in an office once and that still had 4 people in there because they could squeeze that number into it.

22
Segmentation 5 days ago 0 replies      
Something not brought up often: smell.

I don't work in an open office, but I wonder what it smells like. When in closed meetings or an elevator I can keenly smell people, sometimes good (women's fragrance) but most of the time distracting (perfume, odor). I'd hate to be surrounded by distracting smells all the time.

This can be fixed with proper ventilation (and proper hygiene, let's hope), but ventilation can be hard to come by in the non-summer months (without freezing everyone out).

23
andrewcooke 5 days ago 0 replies      
Peopleware was written 27 years ago. Why on earth is this still news?

http://en.wikipedia.org/wiki/Peopleware:_Productive_Projects...

24
DanBC 5 days ago 1 reply      
For people working in open plan offices or cubicles: Would small hoods help? (Especially if combined with headphones / earplugs?)

Here's an example (ignore the desk, which looks a bit fragile; I'm just asking about the hood): http://www.designboom.com/design/gamfratesi-the-rewrite-desk...

25
retrogradeorbit 4 days ago 1 reply      
I think the reason this persists is because everyone is doing it. Thus your open-plan, inefficient office is only competing with other equally open-plan, inefficient offices. We are all in a less-productive equilibrium together.

This of course gives those willing to make offices for everyone (like, say, Fog Creek) a competitive advantage. But your average corporate manager doesn't care about that. They still get their office and get paid.

26
digisth 5 days ago 0 replies      
The real lesson is that there is no silver bullet. No matter what {office layout, technology} you choose, there are going to be upsides and downsides. There's no one-size-fits-all solution. We had a backlash against separate offices for a reason, and we're having the same sort of backlash now (and will likely have plenty more in the future).

It's the price paid for what often seems like blind fad-following; rather than analyzing whether X really makes sense given the attributes of the organization (people/culture, type of work, department, etc.), it's adopted, used, and eventually, revolted against. A more thoughtful, situation-specific analysis might produce better results.

27
mcv 2 days ago 0 replies      
I like working in the same room as the rest of my team. It means I can ask them questions, they can ask me, we can quickly discuss little things without getting out of our chair.

I can imagine private office might be preferable if you really work on your own. But I work in a team, and I prefer working in the same room as them.

Though the stories about noise suggest that some people are sharing a room with a hundred people, and that's just ridiculous. 6, 12, or even 20 programmers in a room don't make a lot of noise. The occasional question or discussion really isn't that distracting (though nerf-gun battles certainly are).

Just keep it sane. Put people in a room with the people they need to be in a room with. Don't make them hide from their team. Don't put them in a crowd of noisy strangers.

There's good and bad ways to do this.

28
RandallBrown 5 days ago 1 reply      
Open office layouts are bad for some employees and some people's productivity.

Having a private office is bad for some employees and some people's productivity.

I went from an open office that I loved to having my own office, which I hate.

I could write this same article saying the opposite things and it would be no less correct.

I hate my office. In the almost 2 years I've been at my current company I feel like less of a team member than I did in 2 months at my last job.

29
pathy 4 days ago 0 replies      
Open office schemes have been around a while. The earliest research into them that I know about is by Allen & Gerstberger, from 1973 [0].

In essence they found that performance was roughly the same as before but the employees preferred the new arrangement and that communication was improved.

Here is part of a summary of the article, made when revising for an exam:

> "The most important and most obvious conclusion that this paper found is that the non-territorial idea works. It not only reduces facilities costs by eliminating the need for rearranging walls, air ducts, etc. every time an area is re-organized, but it also allows for the allocation of space based upon an expected population density at any point in time. More important than the cost savings, however, is the fact that people find it comfortable to work in."

The open plan arrangement is not only to benefit the employees, which it may or may not do, but to reduce costs. Office space isn't exactly cheap in many locations.

[0] http://dspace.mit.edu/bitstream/handle/1721.1/1866/SWP-0653-...

30
tomphoolery 5 days ago 0 replies      
My company does the open-office thing really well. The building we're in used to basically be all offices so everyone has an "office", but most of us share a room with someone else. This leads to "just enough" exposure, for me, to other people while still leaving me time to get work done. Rarely are people coming into my office to talk about things that don't pertain to me. When that does happen, I happily put on headphones. There's also a large common area with couches and bean bag chairs you can sit on, if you want a larger place to work, and we have a whole wall of ideapaint if we need to do a big meeting of some kind.

This is in sharp contrast to my last job, a fully open office where it was pretty much one gigantic room and everyone was LOUDLY talking over one another. Pretty much had to have the headphones on the whole day just so people wouldn't bother me. I'd even have them on without playing music just to signal to people not to come around...that's how annoying it was. It was truly interruption-driven development at that place.

31
zackbloom 5 days ago 1 reply      
I dig working in an open office. I see my work as very collaborative, so I wouldn't want to be in an environment where I was siloed off. That being said, headphones are critical.
32
pbreit 4 days ago 0 replies      
This is such a big, important topic that surely not all tech firms have settled on the "open plan", which, intuitively and in my experience, is awful.

I think the breakthrough will come when workplace interiors get much more modular and flexible. I'm envisioning different teams getting to choose (within reason) what types of setups they would like from enclosed offices to bullpens to cubes to open desks.

And I can even see planned re-arrangements every 6 months or so to eliminate the moss.

33
steven2012 5 days ago 0 replies      
During my career, I went from an office to cubes, to an office, and now to an open layout. I thought I would hate the open layout, but actually I like it a lot. I'm not easily distracted, so it's convenient being able to ask questions directly without having to walk around or knock on a door.

The other thing I enjoy that I didn't expect was the social aspect, where I can chat with everyone in the room before work starts in earnest in the morning, or after 6-ish when we're all ready to leave for home anyway.

34
msluyter 5 days ago 2 replies      
One trick I've recently adopted: I use this site:

http://mynoise.net/noiseMachines.php

In particular, the "babble" generator. The babble blends with actual conversations so you can no longer distinguish spoken words and reduces what would otherwise be attention grabbing conversations a to coffee shop level din.

35
mikecaron 5 days ago 0 replies      
We have offices for every developer, if they want one. We also have an open space. I used to have an office (still do; it's just empty now). I work in the open space, but it's not typical. There are only two of us who work in this open area, so it's very quiet, as we're both developers. I think it's an unusual setup, but being more extraverted, I feel less lonely, as I can see when people are going to the lunch room, I can participate in conversations around the pool table, etc. If there were more than 3 people in this area, I'd head back to my office, but for now, it's a great environment. I also have to mention that our open area isn't very large, and the desks are tripods (three workstations to a pod); again, my pod is just me. I'm also surrounded by windows and sunlight, whereas my office only had one window.

Not complaining, just sharing a different situation.

36
briandear 4 days ago 1 reply      
The best office layout ever is the one that allows me to work from home.
37
dschiptsov 4 days ago 0 replies      
This is also not a very interesting question. It was recognized ages ago that mechanical, manual labor, such as an assembly line or McDonald's, should be organized in an open space, while thinkers must have their private comfort zones (which is very expensive) and occasionally meet in small groups to share ideas.

The balance is quite subtle, as usual. So-called brainstorming sessions (which in the language of normal people are called discussion groups) can be very effective (only if participants have something to be stormed), while a meeting of a committee of idiots is always a disaster. The first activity is centered around subjects and goals, while the second is dedicated to the action itself and a sense of self-importance.

In other words, for those who think of software development as an assembly line (which is very wrong), mass-production best practices are quite appropriate; for others, who think of it as a process of writing poetry, the best practices appropriate for writers and thinkers should be considered.

Unfortunately, idiots dominate the world.

38
startupstella 5 days ago 0 replies      
There is no one-size-fits-all solution... For me personally, I require a mix of social and private time to maximize my productivity. Working from home half the time and working in the office the other half tends to be best... For those who want to talk/meet, knowing I'm only there at certain times makes them more likely to think twice about prioritizing meeting time. Also, the quiet of home and lack of distractions (no giggling coworkers or visitors to the office) leads to the best writing/thinking.

You just have to know how you work best, and hope your company can support it. If you're a startup, be flexible about optimizing workers' time...

39
archonjobs 5 days ago 0 replies      
The best tradeoff I've found is:
1. Open-office layout two (specified) days a week; perfect for collaboration and meetings.
2. People work from home three days a week; perfect for those coding marathons.

Obviously you can still code in an open-office and you can still collaborate working from home, but it's sub-optimal. With the setup above, you're in the right environment for the right type of work most of the time, and employees love it.

Lots more about this here: http://www.archonsystems.com/devblog/2013/09/19/open-offices...

40
pnathan 5 days ago 0 replies      
Well, I write this from a quiet corner I escaped to from my open office area so I could have a sustained focus time.

I've worked in open office, half-cube, full cube, shared office, and sole office. Of all of those, sole office was best for concentration and shared office was best for collaboration.

41
scotty79 5 days ago 0 replies      
Team-sized offices for the win. A team office doesn't have to have a door, but it should have a small room with a door very close by for phone calls and for longer face-to-face chats (or ones involving more than two people). Short two-person chats are initiated by one person getting off his ass, coming over to the other person's computer, and talking quietly.
42
LordHumungous 5 days ago 0 replies      
At my office I always have someone looking over my shoulder, and to be honest, it keeps me on task. At my last job I had my own space and there were days when I just decided, "welp, not gonna do any work today."
43
aaron695 5 days ago 0 replies      
Part 1 certainly is a bit of a strawperson in not addressing the real issue: cost.

Everyone knows open offices are worse, but they are also far cheaper; if productivity is down 15% but the TCO of the office space saves more than that, it's OK.

Labour is a commodity; it has value, but so do many other factors.

Part 2 perhaps will talk on this issue.

44
grealish 5 days ago 1 reply      
I cannot stand the selfish, arrogant, thoughtless behavior shown by the few who destroy the productivity of an open office. Your constant sniffing and playing drums with your fingers is not respectful or mindful of others trying to work.

Has anyone thought about doing a study on the effects of people wearing headphones all day to drown out these distractions? I mean, ear infections and hearing loss must surely be long-term side effects.

End rant.

45
munimkazia 4 days ago 0 replies      
It's been one year since I joined my current employer, and we work on a big open floor. There are around 30 people in this big room; it's very distracting and we have no privacy. It is weird that someone who is sitting next to me or walking by can just peep and see what I am doing and read my IMs. Since this is my first big office, it has been terribly distracting and has really crashed my productivity. But then again, this is a big company, and office space is pricey. I don't expect them to give us all more personal space of our own.
46
voidlogic 5 days ago 0 replies      
Many good points, but this doesn't touch on the issues of sick days, lost productivity, and how illness burns through open and traditional offices at very different rates.
47
msoad 5 days ago 1 reply      
It sucks when you want to concentrate and someone flies an RC helicopter!
48
theklub 4 days ago 0 replies      
I think the amount of time spent talking about this topic is bad for employees, bosses, and productivity. It's been beaten to death, and the truth is everyone is different, so there is no ONE solution.
49
cdmckay 4 days ago 0 replies      
I used to work in an open office and it was super annoying. People would throw stuff around the office and you could hear everything that was going on...
50
c4mden 5 days ago 0 replies      
Never mind the inherent dangers of open lines-of-sight: http://www.theonion.com/articles/open-floor-plan-increases-o...
51
hackula1 5 days ago 0 replies      
I share a small office with 1 other dev. This is the absolute max I can handle while coding. I am in meetings a good chunk of the day. I really don't need to be sitting next to 20 coworkers the rest of the time.
52
shmerl 5 days ago 0 replies      
Open office layouts always remind me of factories and conveyors. I don't like these short dividers either; they aren't conducive to productivity at all.
53
pdfcollect 5 days ago 0 replies      
In open offices, these are the things that are problematic (when the person sitting next to me does them):

- cell phones
- chats
- social networking
- random web surfing

Maybe I'm just not concentrating enough at work. But perhaps there is a way to solve these problems?

54
ajasmin 5 days ago 0 replies      
Forget these OpenOffice layouts. I think LaTeX is more flexible... oh wait, never mind.
55
leerodgers 4 days ago 0 replies      
I think it all comes down to the employees and the culture. Some people thrive in these open environments and some don't. For large companies a mix probably works, but if you are a small shop, you might as well do what works for you.
56
erobbins 5 days ago 0 replies      
I miss having an office.

I also miss having 2 30" monitors in my office.

Who would have ever thought that working conditions for engineers would be more comfortable in Florida than in the Bay Area? Not me, and boy was I wrong.

57
Eye_of_Mordor 4 days ago 0 replies      
I think people should have a choice. I can't stand a quiet office and much prefer it if other people have music playing. Other people can't work with noise. Everyone's different.
58
jimmytidey 5 days ago 0 replies      
I've worked in all kinds of setups, and I've never worked anywhere where everyone liked it. One problem with designing an office is getting a diverse bunch of people to like the same space.
59
washedup 5 days ago 1 reply      
Different types of people thrive in different types of environments.
60
sTevo-In-VA 5 days ago 0 replies      
I was in an open office for ten years, and I can verify everything that Jason wrote.
61
ffrryuu 5 days ago 0 replies      
Bad for health and lifespan too.
62
codegeek 5 days ago 4 replies      
He is missing the point of open office plans. Frankly, the blog post comes off as a little entitled when he says "we all deserve an office of our own" (paraphrasing). Really? How about a bed to nap in while we're at it (well, OK, Google has the nap pods). The point of an open office plan is to try to encourage a culture of equality (in my opinion). I love the open office plan because I could be sitting next to a college graduate and an executive director at the same time. Imagine the level of access you have if you have the balls to actually utilize it. With closed doors, even if the person inside is welcoming, it just creates a senseless fear of rejection.

All these points about not being able to focus and getting disturbed all the time are hardly an issue. Most co-workers are respectful of your time whether they are in an open office or a closed office. The ones who are not respectful will bother you regardless of where you sit. Behind a closed door? No problem, I will give this guy an annoying phone call.

Now, is there a binary answer to this? Of course not. But claiming that open office plans are completely useless is stretching it a little too far.

15
Why Class? hadihariri.com
411 points by hhariri  2 days ago   158 comments top 27
1
btilly 2 days ago 4 replies      
There is a fundamental point here that really bears thinking about.

Software design has as a fundamental problem being able to express yourself in a way that is clear and remains clear as things change. Basic principles of good software design, like reducing coupling, remain good principles no matter what "paradigm" you think you're using. When you change paradigms, be aware that the basic principles remain the same no matter what label you give them. And since design is all about tradeoffs, if you are going to have to violate a principle, you might as well do it in the clearest and most straightforward way possible.

Let me give an example. A singleton is bad for all of the reasons that a global variable is bad. If you need one, I prefer to make it a global variable simply because that is honest about what design principle has been broken. (Unless I want to make it lazy - then it is easier to write with a singleton!)
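
A minimal JavaScript sketch of that trade-off; loadConfig here is a made-up stand-in for some expensive shared setup:

  // Hypothetical stand-in for some expensive, shared setup.
  function loadConfig() {
    return { retries: 3, timeoutMs: 500 };
  }

  // "Honest" global: the shared state is visible for what it is.
  const config = loadConfig();

  // Lazy singleton: the same shared state, but created on first use.
  let cached = null;
  function getConfig() {
    if (cached === null) cached = loadConfig(); // runs at most once
    return cached;
  }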

So learn the principles. Figure out what they look like and are called in your paradigm. Get on with life and solve real problems.

Let me give a concrete example. I learned more about good OO design from the first edition of Code Complete than any other book that I ever read. Even though it was entirely about procedural programming. Because the principles that it discussed, from how to name functions to the value of abstract data types to information hiding, all apply directly to OO. And to functional programming. And to aspect oriented programming. They are truly universal. Learn the principles, they are more important than the labels.

2
sklivvz1971 2 days ago 5 replies      
I really like Hadi's criticism of OO, and in fact I would like to hijack it and take it a few steps further, to paint what in my mind is a more realistic picture. Functional programming is only an alternative choice of paradigm. Expert programmers will use the best tool for the job and eschew the pointless search for the absolute best.

Why OO? Classes exist to decouple behaviors from data. In fact, their purpose is to think in terms of cohesive sets of behaviors instead of data-driven functions.

Does this apply to any possible case? No, it's a leaky abstraction.

Why FP? Functions exist to decouple behavior from other behavior. In fact, their purpose is to think in terms of completely decoupled sets of behaviors instead of cohesive functions.

Does this apply to any possible case? No, it's a leaky abstraction.

Why AOP? Aspects exist to decouple behaviors that apply across classes. In fact, their purpose is to think in terms of orthogonal sets of behaviors instead of hierarchical sets of behaviors.

Does this apply to any possible case? No, it's a leaky abstraction.

Why Procedural Programming? To separate the functionality of the application in separate behaviors. In fact, the purpose of procedures is to think in terms of behaviors instead of sequences of operations.

Does this apply to any possible case? No, it's a leaky abstraction.

Why Declarative Programming? To express solutions in terms of goals instead of behaviors. In fact, its purpose is to think in terms of desired state, not sequences or compositions of behaviors.

Does this apply to any possible case? No, it's a leaky abstraction.

Why Logic Programming? To explore the logical consequences of given facts, instead of focusing on behaviors or desired outcomes. In fact, its purpose is to find new truths starting from given truths.

Does this apply to any possible case? No, it's a leaky abstraction.

Why spaghetti code? Wait, no that's just crap ;-)

3
RyanZAG 2 days ago 4 replies      
If you think of classes as namespaces and not objects (which is what they are half the time), it becomes a bit saner. Pragmatism makes the world work.

You can still write functional code in OO. Heck if it makes you feel better, you can just make a macro to rename 'namespace' to 'class' and then you can write code like

  namespace Utils {
    function thingy() { ... }
  }
Namespaces are generally an improvement over globally defined functions, as they let you mix and match libraries without having to worry about whether you included the right header files, etc. Again, pragmatism.

4
Jormundir 2 days ago 5 replies      
I think of learning OOP much like learning Chess. In Chess it's very easy to learn how the pieces move and start playing, but becoming a master is incredibly difficult, requiring years and years of deliberate practice. When I see people complaining about OOP, it makes me think of two beginners playing Chess, thinking victory is a random outcome.

The dialogue of the article is entirely about coupling, and in OOP figuring out how to make a loosely coupled system is what separates the grand masters from the beginners. Designing a loosely coupled system is incredibly hard and requires years of deliberate practice to master. So when someone who's a beginner or not very good at OO design claims OO isn't a valuable paradigm, I don't listen. Bad OO can be less valuable than no OO, just like anything else, and just like anything else, that doesn't make the whole paradigm bad.

I've explored functional programming in a couple of college courses now, and find it very intriguing. When it's time to go and make a system, however, I find it difficult to boil functional principles down into the system I want. This is a complaint in the same vein as the OO complaints. Functional programming is incredibly impressive when designing a behavior, but what separates the masters from the beginners seems to be when it's time to model an entire system. This is the current separation in my mind: OO is designed for, and really good at, modeling a complex system that involves lots of mutation, and functional programming is really good at modeling and separating behavior within a system.

I don't think we have grounds to say one is better than the other yet. Both OO and Functional have their niches where they're amazingly effective, and both have a giant gray area where it's difficult to mold the pure paradigm onto a solution.

I think the best solution so far is a mixture of the two, such as we see in Scala. Though a mixture has its own set of drawbacks too - mainly what seems to be an explosion of language complexity, and a giant need for design conventions and limitations when there are so many possible routes to a solution.

The best of the current climate is a mixture of the two, with a framework that clearly defines and separates where and when to use one paradigm over the other, so you aren't bogged down with the additional complexity.

5
strictfp 2 days ago 2 replies      
I think that the article uses a bad example. OO is actually really bad in db apps (DTO apps). OO shines when you are in one big continuous memory space, and don't have to worry about the data behind the objects. I see OO as a good way of abstracting the data away from the logic of your program.

This abstraction works well with classical desktop apps (lots of code, one big heap, pointers are always valid), but breaks down badly when your program is about communicating data and you start to cross memory address barriers.

A lot of modern apps are about data. You see a lot of networked apps, db apps, and layered apps with different technologies in every layer, communicating with each other only through shared data formats. Such an app is primarily concerned with shuffling data around, and OO breaks down into a horrible mess. Each transition to a data representation requires a complete serialization of the object graph to a reasonable data representation and back again at the other side. This means you essentially have twice the amount of work compared to operating on the data directly, as you would with FP.

Once the amount of "internal juggling" with the data exceeds a certain limit, OO might be beneficial. But for data-centric apps, forget it.

6
huhtenberg 2 days ago 1 reply      
> Why Class?

To enforce an invariant but of course. Just ask Bjarne.

[1] http://www.artima.com/intv/goldilocks3.html

7
gress 2 days ago 0 replies      
Not to get into the OO vs functional debate, but generic advice about how big a class should be and how it should relate to other classes is guaranteed to be inappropriate some of the time.

Classes are a tool you can use to design a solution to your particular programming problem. Sometimes the problem will call for a few large classes. Sometimes it will call for an evenly balanced network of small classes. Sometimes there will be a lot of DTOs, and sometimes none. Sometimes large classes will cause maintenance problems because there is a lot of complexity confusingly jammed in one place, and other times they will be a boon because you don't have to look in lots of confusingly named files to discover what you are looking for.

Criticizing a design by how well it conforms to abstract rules, rather than criticizing its fit to the actual problem will often lead to confusion.

These rules of thumb are useful, but only as a way to question whether your design is a good fit - I.e. "Would it be better if this class had more behavior or less behavior? Would it be better to split this class into smaller components, or should this group of similar classes be coalesced into a single entity?"

8
jakejake 2 days ago 2 replies      
This is a great piece and probably mirrors the process that a lot of us have gone through over the years. I know it definitely rang true for me.

I would hazard a guess that functional programming or any other solution will go through the same type of evolution, and we will be looking back in 10 years at how immaturely we used the technology. Then we will circle back to yet some other technology that was previously used and went out of style, with some new insights added.

It seems to me that we go in a great big circle over and over again, picking up a little bit of new info each time around.

9
abalone 2 days ago 1 reply      
It's not like one paradigm is perfect for everything. But there's one thing that the author wrote that stood out for me: writing a "utils" class for stuff he doesn't "know where it really belongs."

That's one of the benefits of OOP: it forces you to think harder about where things belong, and this really helps as codebases grow. A "utils" class is often a sign that things could be better organized.

10
taybin 2 days ago 1 reply      
Object-oriented in the large, functional in the small. Makes sense to me.
11
hawkharris 2 days ago 2 replies      
Will someone give me a few concrete, practical examples of using functional programming in JavaScript?

I'm interested in this concept, but I'm not sure how to go about applying it to my own work as a front-end web developer. Most of the resources online are theoretical.
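
One minimal sketch of what that can look like: keep the data transformations pure, and confine the DOM mutation to one function at the edge. The item data and the "total" element id here are invented for illustration:

  // Pure functions: no mutation, no DOM access, trivial to unit test.
  const inStock = (items) => items.filter((i) => i.stock > 0);
  const totalCents = (items) => items.reduce((sum, i) => sum + i.cents, 0);
  const toUsd = (cents) => '$' + (cents / 100).toFixed(2);

  // One impure function at the edge performs the actual rendering;
  // it assumes the page has an element with id "total".
  function render(items) {
    document.getElementById('total').textContent =
      toUsd(totalCents(inStock(items)));
  }

  render([
    { name: 'pen', cents: 250, stock: 4 },
    { name: 'ink', cents: 1200, stock: 0 },
  ]);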

12
naiquevin 1 day ago 0 replies      
I can somewhat relate to the conversations in the article. While I admit that I don't think I used OOP correctly, since I started using more functions instead of classes (in Python, my primary language), I have observed that it has been more convenient to reuse and refactor existing code.

Another observation is that it's far easier to read someone else's code if there is no mutation. For example, I have enrolled in the proglang course[1] on Coursera, and only yesterday I completed this week's homework, which involves enhancing an already-written game of Tetris in Ruby. A major part of the assignment was reading and understanding the provided code, which uses OOP and makes heavy use of mutation. It was quite difficult to understand what one method does without having a picture of the current state of the object, especially with side-effecting methods that call other side-effecting methods. A few times I had to add print statements here and there to actually understand which one was being called when, and how many times. While I scored full marks in the end, I am still not confident that I completely understand the code, and I have a feeling that if I ever need to work with it again, the amount of time it will take to load everything back into my mind's memory will be equal to what it took the first time.

Of course, one could only make a true comparison by studying an implementation of Tetris written in FP style. But from my experience reading and writing code in Erlang (for production) and in Racket and Clojure (as a hobby), I find it relatively easier to study functional code written by others.

[1]: https://www.coursera.org/course/proglang

13
lightblade 2 days ago 3 replies      
I think OO and FP can coexist peacefully. You just need to pick out their good parts and discard the rest, and use a programming language that allows this.

Do put your objects in classes, but don't put behaviors there. Use classes to allow static type checking so that functions taking them can guarantee certain fields exist.

Separate behaviors out into their own module. I'd avoid calling them classes here because there will be confusion. Interactions between objects can be implemented in FP style. This is where you get to use map and reduce.
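
A rough sketch of that split, assuming an invented Order shape and an OrderOps module of plain functions:

  // Data only: the class just guarantees the fields exist.
  class Order {
    constructor(id, items) {
      this.id = id;
      this.items = items; // [{ price, qty }]
    }
  }

  // Behavior lives in a separate module of plain functions.
  const OrderOps = {
    total: (order) =>
      order.items.reduce((sum, i) => sum + i.price * i.qty, 0),
    // Returns a new Order instead of mutating the old one.
    addItem: (order, item) => new Order(order.id, [...order.items, item]),
  };

  const o = new Order(1, [{ price: 5, qty: 2 }]);
  console.log(OrderOps.total(OrderOps.addItem(o, { price: 3, qty: 1 }))); // 13

Because the data stays inert, the interactions really can be plain map/reduce pipelines.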

I've been learning about DCI (data, context, interaction) recently. I can't say I've fully grasped it yet, but it did open my mind on how to do design differently.

14
davidkhess 2 days ago 1 reply      
I've found the most critical difference between OO and FP to be state. Objects are stateful from the moment they are created until they are destroyed. FP is remarkable for its general lack of side-effects - i.e. computations do not normally affect shared state.

I believe this distinction is a critical one because an application with a lot of shared state is more difficult to scale than one without. As another poster pointed out, in a shared addressed space such as a desktop app, this isn't a concern. But if you want to scale an application across multiple address spaces, communicating and synchronizing the state of objects between these spaces tends to be difficult.
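
A tiny sketch of the distinction, with a made-up account example:

  // OO style: the object owns mutable state; calls change it in place.
  class Account {
    constructor(balance) { this.balance = balance; }
    deposit(amount) { this.balance += amount; }
  }

  // FP style: state goes in, new state comes out; nothing shared mutates.
  const deposit = (account, amount) =>
    ({ ...account, balance: account.balance + amount });

  const before = { balance: 100 };
  const after = deposit(before, 50);
  console.log(before.balance, after.balance); // 100 150

The second form is just data in, data out, which is part of why it crosses address-space boundaries so easily.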

So, my tendency is:

Small and shared address space? Tend to use OO patterns and implementations.

Big and distributed address space? Tend to use FP patterns and implementations.

Note, I don't consider OO vs. FP a language choice I consider it a programming paradigm choice.

15
Quarrelsome 1 day ago 0 replies      
:D. Why is anyone even considering FP vs OO programming a choice? I've used functional patterns within OO before and I'm sure the opposite is possible.

Both have merit depending on the skill-set and staff available to you, scale of project and existing infrastructure.

I was however very disappointed when the author bundled interfaces into the mix as if they were the same as the rest of it. Interfaces enable you to support many different types of behaviour and to structure your libraries in different ways. It's a different aspect from classes themselves.

Genuinely I don't think shit like this is healthy. Smacks of religion.

16
dj-wonk 2 days ago 1 reply      
This is a nice narrative. Some people have misconceptions about what OO gives you in comparison to other paradigms. OO provides specific abstractions, which may be useful in many cases, but also may be too much. In some cases, simpler abstractions may be preferable. Less complected (baked together and inseparable) abstractions can often be found in other paradigms. For example, in Java, a person might use a class even though all they need is a namespace. In many functional languages, you can just create a namespace without the other trappings of OO. Sometimes that is just what you want.

I've seen some nice slides from Clojure talks that discuss how many OO abstractions are complected (combined) versions of simpler abstractions. If anyone has seen the slides I'm talking about, please share. For example, a class is a combination of a namespace and ___. Possible answers include mutable state, shared data, and so on.

18
mortyseinfeld 2 days ago 1 reply      
Ever since I started playing around with Java (or even earlier, with C++), having data co-mingled with functions has never felt right to me. It just never seemed very extensible, especially in a language like Java. Even in C#, where you have extension methods, they are second-class citizens.

But also, the old question of where to put a method seems like unnecessary mental overhead for solving a problem. Use modules and just pass things in. You could probably even put some syntactic sugar in the language so that you could use the "object".method notation on the first argument passed in if you really wanted to.
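
Plain JavaScript doesn't have that sugar, but a sketch of the "modules and just pass things in" style could look like this; the small pipe helper is a hypothetical stand-in for the dot notation:

  const Shape = {
    scale: (s, k) => ({ w: s.w * k, h: s.h * k }),
    area: (s) => s.w * s.h,
  };

  // pipe(x, f, g) reads left to right, the way x.f().g() would.
  const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

  const area = pipe(
    { w: 2, h: 3 },
    (s) => Shape.scale(s, 2),
    Shape.area
  );
  console.log(area); // 24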

So someone give me Clojure with optional typing and a Python-like syntax, and get off my lawn!

19
hernan604 2 days ago 0 replies      
Use a better OO meta object system, ie. Moose.

And then you can build a class like LEGO, literally.

Because it has Classes and Roles: a class can extend another class, a Role can be built from multiple Roles, and a class can be built from multiple Roles.

Your OO is outdated if it doesn't implement classes, roles, types, and at least method modifiers.

And your language should allow first-class functions, to enable functional programming if you desire.

Perl has all that and comes installed on any unix/linux/mac. Try it in your terminal: $ perl -v

Every linux/unix/mac system runs perl, so your company is probably using perl and they don't even know.

20
rcirka 2 days ago 1 reply      
I do agree the "noun" vs "verb" analogy when classes are taught are horrible..actually the way classes are taught in general are just confusing.

Ultimately, classes are about code organization and reuse. Done properly, that leads to a well-maintained program. Functional programming... it still needs to be organized somehow; whether you put functions in classes or namespaces, the last thing you want is spaghetti code. Now, depending on the type of programming, you may need to preserve state, which can be quite challenging in a purely functional environment.

21
neebz 2 days ago 0 replies      
In one of the initial posts of the series, the OP mentions that functional programming doesn't have app state. Can anyone point out more details on this?

I don't know if I am understanding it correctly, but one of the reasons I absolutely loved Backbone.js was that it kept the state of our DOM. Is this relevant to the above discussion? Because state surely looks pretty beneficial.

22
goggles99 2 days ago 4 replies      
IMO, most critics of FP or OOP criticize because they lack a full understanding of one or the other. Either paradigm can be applied incorrectly and cause plenty of problems. Both have their merits and strong points. Each fits a different type of project better than the other. Also, different people tend to grasp or appreciate one or the other more naturally, depending on how their brains are wired. More logical, abstract thinkers tend toward OOP, while more right-brained people tend toward FP. There is nothing wrong with this. There is also nothing wrong with learning the benefits of the "other side". In fact, it is quite beneficial.

Some language designers see and fully appreciate the benefits of both. They start putting class-like structures in FP languages, or FP functionality in OOP languages. This is the future of the mainstream (e.g., EcmaScript 6 has classes/types; C# has LINQ, closures, lambdas, etc.).

23
sholanozie 1 day ago 0 replies      
I might be a little late, but does anyone have any examples of simple CRUD web applications written in a functional language? There seems to be a lot of meta discussion about the suitability of functional programming for the web, but not very many concrete examples. Thanks!
24
yeukhon 2 days ago 2 replies      
One can argue that JavaScript's prototypes are probably the future, not the traditional class-based OOP.
25
zamalek 1 day ago 0 replies      
> Don't be silly Jake. Where else are you going to put functions if you don't have classes?

I'm seeing where this is going: GoLang.

26
kul_ 1 day ago 0 replies      
'Don't be silly Jake. Where else are you going to put functions if you don't have classes?'

The core argument is more philosophical in nature, in terms of paradigms: from the functional programming view, a function just acts on data, transforming it into another form. But in a pure OO world (like Java) one cannot dare to think beyond objects, so for them it has to be in an object.

27
afshinmeh 2 days ago 0 replies      
+1, Nice to see your post man, keep it up! :)
16
HTML5 game written in 0 lines of JS codepen.io
401 points by golergka  4 days ago   119 comments top 35
1
golergka 4 days ago 3 replies      
Obligatory information:

I'm not the original author. It was posted on the Russian HN/Reddit clone Habrahabr: http://habrahabr.ru/post/203048/

Habrahabr featured translations of "30 LOC of javascript" topics from HN; some people continued it for a bit, and this one was created as an ironic answer to that.

3
networked 4 days ago 0 replies      
Nice. There are also "games" [1] made with just GLSL shaders in WebGL. There are several of those on Shadertoy but I particularly like https://www.shadertoy.com/view/MsX3Rf.

[1] Edit: "Games" in scare quotes because the lose state doesn't (can't) persist in a way that requires player action.

4
jayflux 4 days ago 6 replies      
If you leave your ship in the far bottom right corner, you will never get killed
5
ThePinion 4 days ago 1 reply      
This is brilliant. It makes me really stop and think about how far we've come from the days where HTML4 and CSS2 were everyone's limit.
6
tfb 4 days ago 1 reply      
This is pretty cool, although I'm having trouble clicking on the bonuses and don't see my score. I must be overlooking something, and cba to decipher how this works at this hour.

Edit: Managed to make it the whole way through by leaving the ship in the bottom left. And then when the bonuses kept flying by uncontested because the game was "over", I was able to click on them after a few tries. The issue must have been that the cursor wasn't where I thought it was. Still very cool!

7
idProQuo 4 days ago 0 replies      
Off topic, but I had to make an Android game for my Junior year final project, and I think I used that exact same space ship sprite (it was an Asteroids clone with motion controls).
8
rplnt 4 days ago 11 replies      
I'd bet that Doom, a much better game, was written in 0 lines of JS as well (in the same sense). I fail to see how this trend of "doing something the horrible way" is interesting. Just because it's unconventional?
9
gprasanth 4 days ago 0 replies      
Hack: Right-click and just move your cursor onto the context menu. Now the enemies can't see you + you get to teleport wherever you want! :D
10
chrismorgan 4 days ago 0 replies      
Assigning a tabindex of -1 to the bonus inputs would stop people like me from getting all ten bonuses by repeating {tab, space}.
11
jawr 4 days ago 0 replies      
Awesome. I would have been tempted to call it "HTML5 game written in 30 lines of JS" and then include some defunct JS code...
12
nollidge 4 days ago 0 replies      
What am I supposed to be seeing here? Chrome 31, Windows 7 x64. Maybe my proxy server is screwing something up, because I mouse over the blue area, and then the scroll bar goes wonky for a bit, and then it turns red and says "game over".

EDIT: yep, definitely the proxy; it seems all the stuff from http://nojsgame.majorov.su/ is blocked.

13
Aardwolf 4 days ago 0 replies      
Would have been funnier if it was in Dart :)
14
lhgaghl 4 days ago 0 replies      
This really illustrates the true power of the web. No construct has a stable definition. If we can't or don't want to use JS to write algorithms, just add new features to CSS until it has the ability. Nobody needs a reliable format to write static documents. We need to keep extending amending extending amending extending amending (while trying to be backwards compatible)!
15
AndrewBissell 4 days ago 3 replies      
Love the tongue-in-cheek title.
16
Myztiq 3 days ago 0 replies      
This reminds me of a game I built a long time ago:

http://www.ryan-kahn.com/static/onlyCSS/

I ended up building a generator for the CSS+HTML, and at the time I had a PHP script (2+ years ago; I would use Node now) that let me pick the number of lanes, the difficulty, etc. Now it's just a single snapshot. I built it in about a week. There is a new bug, apparently, where the cursor is not changing as expected in Chrome.

17
ibrahima 4 days ago 0 replies      
How does one do logic and store state in HTML/CSS? DOM elements for state I guess, but logic?
18
fakeanon 4 days ago 2 replies      
"This Site Totally Doesn't Work Without JavaScript.

Like, at all. Sorry. If you enable it and reload this page you'll be good to go. Need to know how? Go here." Okay, that's funny. When JavaScript is on: ah, nice little game. Interesting that it needs a .js file with just a comment. Can we improve this to remove it?

Edit: oh, so maybe the overall website needs JS, not the game(?).

19
blt 4 days ago 0 replies      
Collision detection is based on full bounding boxes. This is especially annoying on the big ships.
20
deletes 4 days ago 0 replies      
Moving the ship with the scroll wheel is a feature, I suppose.
21
DonPellegrino 4 days ago 0 replies      
I find it amusing that the content is served from a .su (Soviet Union) domain.
22
pearjuice 4 days ago 0 replies      
Technically this is still HTML4.
23
taopao 4 days ago 1 reply      
If I was a mod, I would relabel these submissions to "extend your e-penis in 0 lines of JS."
24
nashashmi 4 days ago 0 replies      
I saw the code and a whole lot of -webkit- flags, so I tried it in Firefox, and it still worked.
25
msl09 4 days ago 0 replies      
Awesome. The bonus is that this is perhaps the lightest JS game I have seen in a while.
26
jpincheira 4 days ago 0 replies      
Haha, it's funnily amazing!
27
jheriko 4 days ago 0 replies      
That is pretty genius... even if it's a rubbish game :)
28
Pete_D 4 days ago 0 replies      
I'm impressed this is possible in pure CSS. What implications does this have for security/privacy? Should I be blocking CSS in addition to JS now just in case?
29
adam12 4 days ago 1 reply      
0 lines of JS and 500+ lines of CSS
30
BorisMelnik 3 days ago 0 replies      
If you stay completely on the left side, you will avoid all objects.
31
choarham 3 days ago 0 replies      
This is fucked up :)
32
cauliturtle 4 days ago 0 replies      
WOW!!
33
wilhil 4 days ago 1 reply      
Technically, there is one line! :P
34
vinitool76 4 days ago 4 replies      
A waste of time. Wonder why these kinds of things come out on top of HN. What does this project teach us? Nothing.
35
emirozer 4 days ago 1 reply      
Hacker News post quality is down by 5% with this post; special thanks to the people who upped this to number 1.
17
I Hope My Father Dies Soon dilbert.com
380 points by Garbage  2 days ago   200 comments top 27
1
avar 2 days ago 0 replies      
No matter what you think about assisted suicide I'd highly recommend Terry Pratchett's (of Discworld fame) documentary about it: https://www.youtube.com/watch?v=slZnfC-V1SY

He's got Alzheimer's and documented his personal tour of looking into his end of life options.

2
vezzy-fnord 2 days ago 11 replies      
Since this concerns the U.S. government, I'll say that the vendetta against euthanasia and doctor-assisted suicide by the Christian right is puzzling. The Bible has a very ambivalent view on suicide (as with any other topic), the tale of Samson being a prime example. The main argument against suicide comes from Corinthians, but it is starkly contradicted by other passages.

A person who is slowly succumbing, immobile, on the brink of death and suffering has nothing left to offer and nothing worthwhile to live for. Their life is effectively over and they're simply waiting for the moment their biological functions will at last cease.

Yet the Christian right insists that the agony be prolonged to a maximum. The religion is notably fascinated with martyrdom, asceticism and persecution complexes. But that suffering has no purpose in overcoming tribulation when you're a near-lifeless bag of meat. Then, of course, very few Christians actually subscribe to the ascetic life as laid down in their scripture.

Sorry for your loss.

3
IanDrake 2 days ago 2 replies      
Reminds me of a good C.S. Lewis quote:

Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.

4
rickdale 2 days ago 1 reply      
My father was murdered about 10 years ago, and really the only solace in the situation is that he was pretty sick and would have suffered through the rest of his life, and he was only 47 at the time. My mom has repeatedly told me she has no desire to get to an elderly age, like her mother is now. All I know is that perspective on this stuff evolves over the years.

On another note, the only good thing about people being sick is that you can prepare for their death and thus when they die it can be more of a celebration. The funeral won't be as hard emotionally, and because they were sick, you are comforted by them not suffering any more.

5
Strilanc 2 days ago 1 reply      
Related: http://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/

> this is the way many of my patients die. Old, limbless, bedridden, ulcerated, in a puddle of waste, gasping for breath, loopy on morphine, hopelessly demented, in a sterile hospital room with someone from a volunteer program who just met them sitting by their bed.

6
JshWright 2 days ago 2 replies      
As broken as our system is (and Scott's obvious pain should make it clear that it _is_ broken), there are things you can do to make it suck just a little less.

Please talk to your family and your doctor about your wishes in terms of life-sustaining care. Even in your 20's, in perfect health, please take some time to become informed about what the options are and how you can express what you want done or not done (this may be a DNR, a MOLST, a living will, or any of a myriad of other options depending on where you live).

If the decision you come to is anything short of "do absolutely everything possible to prolong my life" then be sure to have that paperwork in a safe and readily accessible place (and let your family members know where it is).

7
badman_ting 2 days ago 4 replies      
Sorry, but I am getting really tired of this form of discourse.

A million interests prevent you from doing a million things every day, but apparently it's only objectionable and worth remarking about when it's the government. And it's not even like I disagree with him -- yes we should allow assisted suicide. But boy oh boy is it hard not to notice that he wastes no time thinking about why things are this way, or which interests specifically may be preventing things from changing. Nope, it's just THE GOVERNMENT. Short circuit, do not pass go, no further critical thinking necessary.

"The Nanny State Didn't Show Up, You Hired It" http://thelastpsychiatrist.com/2012/09/the_nanny_state_didnt...

8
forktheif 2 days ago 5 replies      
Maybe he just didn't mention it, but nowhere in that post does it say that it was his father's wish to die if he was in such a situation.

Helping someone die who made their wishes clear, I'm perfectly okay with. But assuming someone wants to die, I'm not.

Whether his father made it clear or not, I don't know. It just isn't in the post.

9
dsego 2 days ago 0 replies      
I suggest watching Terry Pratchett's thought-provoking and humbling documentary Choosing To Die (http://www.youtube.com/watch?v=slZnfC-V1SY). It covers the topic of assisted suicide and towards the end of the film we can witness a man ending his life in a Swiss clinic. It is also a very personal subject for Terry, as he was diagnosed with Alzheimer's and is afraid to live without being able to use his mind and write books.
10
jl6 2 days ago 2 replies      
I don't wish to minimise his situation, but it seems he considers his father's suffering not quite as bad as the suffering he himself would face were he to do the killing himself and take the jail sentence.
11
melling 2 days ago 4 replies      
The Singularity is about 30 years out, or so we're led to believe. It takes about 10 years and billions of dollars to create a new drug, and that time and cost is not decreasing.

Stories like this are a grim reminder of how much work we really must do. "Fixing" the human body is a really hard problem.

12
FrankenPC 2 days ago 0 replies      
Any form of suffering on the helpless scale is especially painful. There's something hellish about helplessness that can't be described; you just have to experience it. Anyone who goes through months or years of helplessness without committing suicide is a warrior. The particularly insidious part of this kind of trauma is the intense PTSD that is the result of this mental torture. Reading the S. Adams blog entry, you can just see the PTSD dripping off every word. I feel for him. I feel for anyone who has gone through this, because there is no justice at the end of the bullshit rainbow that the Christians describe.
13
hrjet 2 days ago 0 replies      
To avoid the slippery slope, couldn't there be a jury system that could decide the genuineness and applicability of a case of euthanasia? Leaving it only in the hands of a single person (be it the patient, the doctor, or a relative) is bound to attract criticism. But if a jury can sentence someone to a life sentence in a criminal case, they could arguably be trusted to deal a similar sentence for someone terminally ill.
14
tunap 2 days ago 0 replies      
They should have pinned a medal on him, instead they threw him in jail.

Thanks for trying, Jack.

https://en.wikipedia.org/wiki/Kavorkian

15
ahi 2 days ago 0 replies      
Fortunately, doctors can still fiddle around the edges. My 96 year old grandmother went into the hospital last Thursday. After some tests it was concluded she would have a couple more months with rehab or maybe a couple weeks without. We decided to just have her loaded up with morphine and she passed on Sunday surrounded by her children. The morphine didn't kill her, her heart was mush, but it likely knocked some time and suffering off the end. It's unfortunate that we don't all get to die so gracefully.

I don't foresee euthanasia ever being legal in the US, given the absurd response to Medicare covering just basic end-of-life planning with a doctor.

16
altero 2 days ago 1 reply      
Follow the money

Today a patient has to pay a lot of money to be kept alive. Also, deciding about death is expensive, and this burden would be on the government.

In a few decades, elderly people will be 40% of the population. The government will probably have to pay the bills for most of them. Only then will this become the norm, but in a really hideous way.

17
DanielBMarkham 2 days ago 5 replies      
Wow. This touches home with me on many levels. I lost my parents a few years ago. Both suffered. I lost several other close family members who also suffered. And just last week somebody I was close to told me they were dying of cancer.

And I'm a libertarian that strongly believes that government should keep its nose out of my life.

So, of course, I'm going to play devil's advocate, at least a bit. Because if you can't defend the other side's points well, you shouldn't be in the argument.

Here's the thing: physician-assisted suicide is one thing if the doc gives you the pills and you take them on your own. It's something else if the state requires doctors to assist you, or the state requires insurance companies to require doctors to assist you. Then we're getting into the state telling doctors what types of moral views they should hold.

Hey, I'm all for you shooting your cat if it's sick. What I'm against is the community all making some collective decision about who's got to go and when, and who has to do the work. That's fucked up. This thing has to be a personal choice among everybody involved: docs, nurses, family, the patient. I could imagine in many cases the patient wants to whack themselves and the family hates the idea. Fine. Then let the guy do it. On his own. Or perhaps the docs have given up hope, but for religious reasons they will never pull the plug. Fine, then let the patient do it. There's a decision to be made, somebody should make it, and the rest of society should butt out -- either from disapproving or forcing everybody else from doing things they don't want to do. It's just as bad to make somebody kill somebody else as it is to make somebody keep somebody else alive in this endless suffering nonsense.

The problem here is a search for some universal rules for a very personal thing. I don't see that playing out too well.

In many cases, we have elderly losing their minds while family members hope against hope that they recover. What to do then?

With the lack of a living will, I think the family members get to decide. That's why we have living wills. I don't like that result, but there it is. (I'd also add that this means the family members should be able to dose up grandma with enough morphine to put her out of her misery -- and I imagine such a thing happens a hell of a lot more than we will ever know. Institutionalizing people and placing all of this in bureaucratic hands is the crux of this problem.)

Also Scott, sorry to hear about your dad.

18
lmg643 2 days ago 2 replies      
I am wondering if I am the only person who read this who felt the piece was a very strange way to consecrate his father's passing. "I wish I could have put him down like a cat, and if you disagree with me, I hope you die a horrible death." There is a good case for euthanasia, and there is a case against it; it just seems inappropriate to approach a dialogue with this piece as a starting point. In the end, I'm sorry he lost his father in a painful and ugly way and felt captive to a hospital. The end-of-life situations in my family were also pretty gnarly but involved hospice care. I'd love to know the circumstances here, and how the situation could have been improved. But you'll never win over your opponents starting from a place like this.
19
fernly 2 days ago 0 replies      
Very important: by law in most states, you can control the conditions of your end of life care. State laws vary but if you are in California, this[0] is the form you print out, fill out, have notarized, and give copies of to your regular physician and the person(s) holding your medical power of attorney.

Every person should do this, because without this clear and legally binding statement of your wishes, medical people are compelled to do everything possible to preserve your life.

Part 1 establishes who can act for you when you can't act for yourself (i.e. in a coma). Unless you are legally married or a minor (when your spouse or parent can act), nobody can intervene to make medical choices for you. Under privacy laws, nobody can even ask about your condition. So your live-in lover or best friend is helpless to change how you are cared for -- unless they are named in a document like this one.

Part 2 expresses your wishes on how you want to be cared for at end of life. If Scott Adams's father had filled out this form and checked 2.1(b), none of that tragedy would have happened! And you can add more conditions. On my form, for example, I specified that if I was unable to read, to watch TV, or listen to audio with comprehension, and had no reasonable prospect of regaining those abilities, my life was over and I wanted to refuse all medical interventions except pain management. My idea being, to die quickly of pneumonia is nearly as good as assisted suicide.

However you fill it out, do fill it out, and encourage your loved ones to do so as well. A stroke or fall or collision can wipe out your consciousness at any age, and this form relieves your survivors of a lot of horrible choices.

[0] http://ag.ca.gov/consumers/pdf/ProbateCodeAdvancedHealthCare... (PDF)

20
stickhandle 2 days ago 3 replies      
Please, HN, don't disappoint me with "slippery slope" and "it's too complicated" arguments. I want to continue to believe the level of discourse here is better than that. It's not just doable... doctor-assisted suicide is right.
21
snez 2 days ago 2 replies      
Although I'm on the author's side, things like http://www.independent.co.uk/news/uk/home-news/alzheimers-tr... make me think twice on whether I'm right or wrong.
22
GalacticDomin8r 2 days ago 0 replies      
Frontline had a nice piece on this issue a couple years ago called The Suicide Tourist:

http://www.pbs.org/wgbh/pages/frontline/suicidetourist/

23
a8da6b0c91d 2 days ago 13 replies      
Physician-assisted suicide is a super slippery slope. I'm stunned when thinking people advocate it.
24
znowi 2 days ago 5 replies      
I get the story and am sympathetic to the OP, but a little confused about this line:

> And while I'm at it, I might want you to die a painful death too.

Is he just angry at everyone?

25
spacecowboy 2 days ago 0 replies      
So sorry to hear about the current situation, found myself filled with a lot of emotion as I read through it. Reminds me of a talk I had with a good friend of mine where we basically acknowledged that we're not afraid to die but that we're more afraid of the dying process.
26
eip 2 days ago 1 reply      
Breathing pure nitrogen is a quick and painless way to die.
27
bashcoder 2 days ago 0 replies      
If I ever become so weak as a man that I would vote someone else's conscience instead of my own, then on that day, Mr. Adams, be my guest: kill me.
18
Zurb Foundation 5 Released zurb.com
378 points by mos2  5 days ago   165 comments top 41
1
pwenzel 5 days ago 5 replies      
Changes of note:

"Interchange uses media queries to dynamically load responsive content that is appropriate for different users' browsers.

http://foundation.zurb.com/docs/components/interchange.html

Offcanvas JavaScript (originally a bolt-on, not bundled)

http://foundation.zurb.com/docs/components/offcanvas.html

"Abide is an HTML5 form validation library that supports the native API by using patterns and required attributes."

http://foundation.zurb.com/docs/components/abide.html

Zepto support has been removed from Foundation 5.

Docs are continuing to look better, and they still have docs back to Foundation v2.

Thanks Zurb!

2
chrismorgan 5 days ago 4 replies      
I'm puzzled about the switch from camelCase to snake_case for JS as shown in http://foundation.zurb.com/docs/upgrading.html#javascript-va.... The clear convention in JavaScript is camelCase, so why switch away from it? (I say this as someone who, in normal life using Python and Rust, uses and prefers snake_case, but who uses camelCase when writing JavaScript.)
3
baby 5 days ago 2 replies      
Wow, that was fast!

I love Foundation, but I had to switch to Bootstrap because I found it... ugly. And Bootstrap is great for quickly creating "pretty" prototypes. But I've always found Foundation to have better... foundations. I've used both on numerous projects and here's my take:

* The grid system now looks like Bootstrap's, and I don't like that. You have to choose the type of column you want to use (and I don't want to be bothered by that), so no .six anymore; it's .medium-6 or .large-6 or .small-6... They should call the .small-6 just .six so we know it's the default one.

* It does look a bit better, although they removed styling of the radiobox in forms? Why?

* Overall I still prefer bootstrap's theme, I wish Foundation would offer an optional theme like bootstrap 3 does.

* The OffCanvas menu is great! I can already see plenty of applications (but for mobile only).

* The CLI is a nice thing to have, but I'm going to stay away from it. I like the ease of copy-pasting files to quickly begin a small project.

* Documentation is hard to go through and doesn't let you glance at what it offers. It's a huge improvement over F4 or F3 though.

* I use Sublime Text snippets all the time and this might be a huge addition!

* I like the JS that verifies forms. I usually always use this on my projects so it's nice to have it by default.

Overall I don't really know if I should switch back to Foundation. But I'll definitely use it for my next project to see how good it is.

4
chrisblackwell 5 days ago 3 replies      
This framework is so much further along than Bootstrap is, and the team at Zurb seems to iterate much faster than the Bootstrap team.
5
tnorthcutt 5 days ago 1 reply      
I looked at the project on Github [1] and the latest tagged release is 4.3.2. It seems odd that they'd release 5.0 for download on their website before tagging it on Github; is there a particular reason for that?

1: https://github.com/zurb/foundation

6
spitfire 5 days ago 5 replies      
So for the HTML-deficient, are there any template sites for Foundation yet? These exist for Bootstrap, and for someone who doesn't have even a single bone of design talent in his body they're a godsend.
7
frakkingcylons 5 days ago 4 replies      
I just finished the redesign of my entire site to use Foundation 4 instead of Bootstrap last night. I'm laughing tears right now...
8
masklinn 5 days ago 1 reply      
Damn, it's gone completely broken on old firefox engines http://i.imgur.com/HAR8Rjz.png (yeah I'm still using Camino when I can get away with it)
9
kderbe 5 days ago 0 replies      
Can you explain the benefit of defining media queries with ems rather than pixels? It seems like an unnecessary layer of mental translation for developers, given that you deem it necessary to list px equivalents in the CSS comments. [1]

Also, the medium/large screen sizes in Interchange don't align with the media query sizes. Interchange says 1024px wide is large, [2] but the media query says 1024px wide is medium. Or is it just a documentation error?

[1] http://foundation.zurb.com/docs/media-queries.html

[2] http://foundation.zurb.com/docs/components/interchange.html#...
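
For readers wondering what this looks like in practice, a minimal sketch; the breakpoint value is illustrative rather than Foundation's exact figure, and the px equivalent assumes the browser-default 16px root font size. The scaling behavior in the comment is the usual argument for em-based queries:

  /* 40em == 640px at the default 16px root font size; if the
     user raises their base font size, the breakpoint scales
     with it instead of staying pinned to a pixel count. */
  @media only screen and (min-width: 40em) {
    /* medium-and-up styles */
  }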

10
te_chris 5 days ago 2 replies      
I'm more curious about this, which is mentioned on the page: https://github.com/hcatlin/libsass/. Has anyone got it working with Rails? Faster SASS compilation would make life much better (especially when Bootstrap or Compass are involved).
11
Jgrubb 5 days ago 1 reply      
On the one hand it's never been a better time to be a front end dev, and on the other it's absolutely crazy how fast front end technology is progressing the last couple years. I just caught on to Foundation 4 in the last 4 months or so, and now here's a new release that's way more evolved. Amazing.

Thanks to the Zurb team! I'll definitely be ripping off lots of ideas for my company's tortoise-speed Drupal sites.

12
hanifvirani 5 days ago 2 replies      
I would just like to say that I love Foundation! Kudos to the team and congrats on the new release. I look forward to exploring the new version. That being said, I am not really digging the new documentation page. The sample code containers should have a non-white background or at least some kind of a border.
13
fideloper 5 days ago 0 replies      
I think it's worth noting that Bootstrap and Foundation may not be comparing apples to apples.

Bootstrap has more styles so you can...bootstrap. Foundation is meant to be a foundation to build on.

That being said, of course their functionality is very close, but be aware of the core differences in outlook between the two.

14
applecore 5 days ago 0 replies      
How does this compare to Bootstrap 3?
15
antihero 4 days ago 0 replies      
One thing that seems utterly absurd is that you now need two ecosystems to build one project: both Node and Ruby. Seeing as it uses libsass to build now, why not ditch the Ruby CLI and port it to Node?
16
andyl 5 days ago 1 reply      
I use Bootstrap, but it looks to me like Foundation 5 is a better framework and is making faster progress. I especially like Foundation's use of SASS.

Problem is - some widgets I depend on - like date-pickers and X-Editable - only support Bootstrap.

17
sergiotapia 5 days ago 2 replies      
I've tried to use Foundation before and its responsive grid was ghastly. This was the first time I tried anything responsive, mind you. So I jumped to Bootstrap 3, and its grid was phenomenal to use.

Predictable, simple and quick to iterate - everything I wanted.

I'm going to give this release a try. The interchangeable items based on device widths look fantastic! I'm really excited to give Zurb a try. :)

18
minimaxir 5 days ago 0 replies      
When I was choosing a framework for a redesign for my blog (switching from Bootstrap since it was getting cumbersome), I decided to try out UIkit, since I ended up not needing any of the JavaScript plugins or super-fancy CSS effects.

However, after taking a look at Foundation 5's plugins, I will definitely try using the framework if I need to undertake a website with more ambitious functionality.

19
mixmastamyk 5 days ago 1 reply      
What's the recommended way to use this with a python dev environment?

I'd rather not have to install Ruby too just to rebuild the CSS. When one of these is announced I usually find myself navigating the various poorly-maintained Python modules that process the source files, get lost, give up, and go back to plain CSS.

Perhaps I could just add a bit of css to a static build instead? Are there any shortcomings to that?

20
kclay 5 days ago 0 replies      
The Interchange plugin saved my butt on a recent project. The client wanted 5 different images for different sizes; it was a breeze to set up, even when integrating it with the Supersized slider.
21
mtarnovan 4 days ago 0 replies      
Congrats ! Looks like a big release.

Some pain points from using Foundation (4) in our latest project:

* topbar sucks

* custom forms are horrible (they seem to have been removed from 5, or maybe just the docs are missing)

Also, using under_score instead of camelCase in JavaScript is a questionable choice with no real benefit.

22
xwowsersx 5 days ago 3 replies      
I don't know why (maybe I'm just not good with CSS and HTML in general), but I'm always confused when looking at these grid systems. I'm trying to use Zurb in a project now and I'm kinda lost. Any good resources other than their docs that sort of show how to use Zurb in a full project?
23
vcherubini 5 days ago 1 reply      
Foundation is without a doubt the nicest CSS framework I have ever used. It really, really helps me, as a programmer, create amazing interfaces without much effort at all. Combine it with SASS and it's a winning combination.

Can't wait to start playing with version 5.

24
silviogutierrez 5 days ago 2 replies      
Is Foundation better than Bootstrap? I'm genuinely curious, as they seem functionally equivalent.

Of course, one uses SASS and one uses LESS. I knew LESS, so I picked Bootstrap.

But I'm more than willing to switch.

25
jeffpersonified 4 days ago 0 replies      
How is their inclusion of a medium breakpoint not at the top of this thread? This is the most significant and noticeable addition to the framework, IMO.
26
ultrasandwich 5 days ago 5 replies      
Looks like it dropped support for IE8, which unfortunately eliminates this as an option for a lot of client work. Seems like a solid go-to for more forward-thinking projects though.
27
ds_ 5 days ago 1 reply      
Can anyone tell me how to make a split button / dropdown that goes upwards (dropup)? This is one thing bootstrap has which I've been missing in foundation.
28
afriend4lyfe 4 days ago 1 reply      
I'm new to web design and have been experimenting with a few other development environments. Reading through these comments, I became excited to learn more about Foundation. I went to their site and looked at their site examples. Many of them were broken and didn't seem to work as intended, nor were they very beautiful to the uninitiated. This was on a desktop using Chrome. I didn't bother to check with my mobile.

My main platform so far has been extremely buggy too, and it is not even primarily made to create websites. I've been using Google Web Designer. Take my advice with a grain of salt, as I represent hobbyist developers who thought "hey, I'd like to build a site. What tool should I start with?" I would not invest $200 to enroll in your intro course for something that gave me an initial first impression of being flimsy. However, it is equally likely that I am unable to realize the full potential of your product with my limited understanding of web development at first glance.

I'm excited about seeing how Macaw works and am going to begin a new project with Bootstrap soon. While my main focus has been purely static web design I plan to incorporate dynamic applications within my approach very soon.

29
tszming 4 days ago 0 replies      
The sliding panel's animation is sluggish on Firefox Mobile/Samsung S4
30
qhoc 5 days ago 0 replies      
I switched from Foundation 4 to Flatstrap (a version of Bootstrap) instead because I like the flat UI. v4 had many problems with JS, and especially the topbar was never good enough. I ended up creating my own topbar, which is not ideal. Also, the lack of fixed columns is a big issue for desktop design.
31
princeverma 5 days ago 0 replies      
Is there a good alternative to Abide for Bootstrap ?
32
nej 5 days ago 1 reply      
Is clicking on Learn supposed to drop the page like this: http://imgur.com/yH9OKB3? It happens on both Chrome and Firefox.
33
caiob 5 days ago 1 reply      
When will this be available on Rails?
34
nathanwdavis 5 days ago 2 replies      
The linked web page crashes Safari on iOS 7. Mobile first, eh?
35
nettletea 5 days ago 1 reply      
Chrome zoomed in at 125% is enough to break the layout on the getting started page, which is a little worrying.
36
michaelbuddy 5 days ago 0 replies      
Foundation 5 already? I've only just had two dates with Foundation 4. Between Foundation and Jeet, I'm so stoked to have these to work and collaborate on.
37
thomasfl 4 days ago 0 replies      
I am switching to it today.
38
dabernathy89 5 days ago 1 reply      
Seems kind of odd to integrate their own CLI instead of just writing a yo generator - and where is the documentation for it?
39
travelorg 5 days ago 1 reply      
Is Foundation a "Standard" now?
40
dylandrop 5 days ago 0 replies      
Just asking - Bootstrap and Foundation essentially help you accomplish the same task, so why wouldn't you compare it to Bootstrap?
41
Segmentation 5 days ago 2 replies      
My only problem with Zurb is that ridiculous mascot that represents it. What is that sky-blue creature? Hipster yeti?
19
DOJ lied to Supreme Court to avoid judicial review of warrantless surveillance documentcloud.org
337 points by revelation  4 days ago   90 comments top 11
1
rayiner 4 days ago 5 replies      
Read the questions at the end. The letter isn't fucking around:

"We believe that a formal notification to the Supreme Court of the government'smisrepresentations in the case--both relating to its notice policy and relating to its practice of'about' collection under Section 702 of the FISA Amendments Act--woulcl be an important stepin correcting the public record and would be in the interests of the public as well as of theAdministration and the Supreme Court."

2
Zikes 4 days ago 3 replies      
This sounds important, but I'm at a loss as to its significance.

What was Clapper v Amnesty?

It sounds like Solicitor General Verrilli made a lie of omission in the court. Is that considered a lie under oath?

What obligation does Solicitor General Verrilli have to the three Senators to answer their questions? What consequences might he face if he chooses to ignore the letter?

Realistically, what could this mean for the original Clapper v Amnesty case, and how might it affect the public in general?

3
w1ntermute 4 days ago 1 reply      
Unless some high-level DoJ and NSA officials are thrown in jail for a couple decades for all this, it's not going to stop.
4
jstalin 4 days ago 0 replies      
I know the words "ethics" and "lawyers" don't generally enter people's minds at the same time, but the model rules of ethics for lawyers take this sort of thing seriously. If the Senators show that some lawyers did indeed lie to the Supreme Court, the Court itself could take action on those attorneys' licenses. It's also a lawyer's duty to report if they are aware of another attorney's violations of ethical rules.

I know it's doubtful, but one can hope.

5
fleitz 4 days ago 4 replies      
I wish someone could make a treason case out of perjury in relation to a matter of national security during a time of war for these actions.

This crap would stop pretty fast.

6
zcarter 4 days ago 0 replies      
Lawyers: What precedent is there for Supreme Court decisions citing a specific piece of evidence as the basis for their ruling, where that piece of evidence is later shown to be erroneous?
7
tsaoutourpants 4 days ago 3 replies      
These three Senators have realized that they can capitalize on public sentiment against the NSA.

Good on them... that's what representing the people is about.

8
mvanga 4 days ago 0 replies      
Would someone be willing to provide the background, interpretation and ramifications of this document for someone not very familiar with this case?
9
salient 4 days ago 0 replies      
Don't expect Holder the Untouchable to ever be punished for this. At this point I think he's more untouchable than even "Emperor Alexander", the current (and soon former) chief of NSA.
10
VladRussian2 4 days ago 0 replies      
Just to put things in perspective: is it any surprise that people who are OK with torture would lie? Why would anyone expect anything different?
11
adultSwim 4 days ago 0 replies      
Warm Regards
20
Docker 0.7 runs on all Linux distributions docker.io
336 points by zrail  12 hours ago   106 comments top 23
1
shykes 12 hours ago 5 replies      
A few details on the "standard linux support" part.

To remove the hard dependency on the AUFS patches, we moved it to an optional storage driver, and shipped a second driver which uses thin LVM snapshots (via libdevmapper) for copy-on-write. The big advantage of devicemapper/lvm, of course, is that it's part of the mainline kernel.

If your system supports AUFS, Docker will continue to use the AUFS driver. Otherwise it will pick lvm. Either way, the image format is preserved and all images on the docker index (http://index.docker.io) or any instance of the open-source registry will continue to work on all drivers.

It's pretty easy to develop new drivers, and there is a btrfs one on the way: https://github.com/shykes/docker/pull/65

If you want to hack your own driver, there are basically 4 methods you need to implement: Create, Get, Remove and Cleanup. Take a look at the graphdriver/ package: https://github.com/dotcloud/docker/tree/master/graphdriver

As usual don't hesitate to come ask questions on IRC! #docker/freenode for users, #docker-dev/freenode for aspiring contributors.
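
For the curious, here are the four methods sketched as a Go interface. The signatures below are illustrative guesses rather than copies of the real graphdriver package, so check the linked source for the actual definitions:

  package graphdriver

  // Hypothetical shape of a storage driver: each image layer is a
  // copy-on-write snapshot identified by id, stacked on a parent layer.
  type Driver interface {
    Create(id, parent string) error        // new layer on top of parent
    Get(id string) (dir string, err error) // mount a layer, return its root path
    Remove(id string) error                // destroy a layer
    Cleanup() error                        // release driver-wide resources
  }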

2
Legion 7 hours ago 2 replies      
Could someone explain the logistics of Docker in a distributed app development scenario? I feel like I am on the outskirts of understanding.

My goal is having a team of developers use Docker to have their local development environments match the production environment. The production environment should use the same Docker magic to define its environment.

Is the idea that developers define their Docker environment in the Dockerfile, and then on app deployment, the production environment builds its world from the same Dockerfile? How does docker push/pull of images factor into that, if at all?

Or is the idea that developers push a container, which contains the app code, up to production?

What happens when a developer makes changes to his/her environment from the shell rather than scripted in the Dockerfile?

What about dealing with differences in configuration between production and dev? (E.g. developers need a PostgreSQL server to develop, but in production, the Postgres host is separate from the app server - ideally running PG in a Docker container, but the point being multiple apps share a PG server rather than each running their own individual PG instance). Is the idea that in local dev, the app server and PG are in two separate Docker containers, and then in deployment, that separation allows for the segmentation of app server and PG instance?

I see the puzzle pieces but I am not quite fitting them together into a cohesive understanding. Or possibly I am misunderstanding entirely.

3
Sprint 12 hours ago 4 replies      
I looked at it several times but never really got it. Can I use Docker to isolate different servers (think HTTP, XMPP, another HTTP on another port) on a server so that if one of them were exploited, the attacker would be constrained to inside the container? Or is it "just" a convenient way to put applications into self-contained packages?
4
neals 12 hours ago 2 replies      
I see Docker come around every now and then here. I'm a small-time developer shop: small team, small webapps. What can Docker do for me?

Can this reduce the time it takes me to put up an Ubuntu installation on Digital Ocean?

Is this more for larger companies?

5
Nux 10 hours ago 2 replies      
EL6 users (RHEL, CentOS, SL), I've just learned Docker is now in EPEL (testing for now, but will hit release soon):

yum --enablerepo=epel-testing install docker-io

PS: make sure you have the "cgconfig" service running

6
speeq 12 hours ago 2 replies      
Does anyone know if it is possible to set a disk quota on a container?
7
chr15 2 hours ago 1 reply      
For local development, I use Vagrant + Chef cookbooks to set up my environment. The same Chef cookbooks are used to provision the production servers.

It's not clear to me how I can benefit from Docker given my setup above. Any comments?

8
kro0ub 43 minutes ago 0 replies      
Can someone please explain what Docker does and brings to the table, and what all the fuss seems to be about? I've looked into it several times and really can't tell from anything I've found.
9
sown 11 hours ago 2 replies      
Hey,

Docker newb here. Can I easily put my own software in it? I've got this C++ program that has a few dependencies in Ubuntu.

10
T-zex 12 hours ago 2 replies      
Is it possible to have multiple instances of the same app running in Docker containers with read-only access to a "global" memory-mapped file? What I'm trying to achieve is having sandboxed consumers with access to some shared resource.
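
One hedged approach, assuming the stock CLI (the image and file names here are placeholders): host-directory volumes can be mounted read-only with a :ro suffix, which would give each sandboxed consumer a read-only view of the shared file. Whether memory-mapping it from several containers at once behaves exactly as you want is worth testing.

  # illustrative only: share /srv/shared from the host, read-only
  docker run -v /srv/shared:/shared:ro consumer-image /bin/consume /shared/data.bin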
11
apphrase 12 hours ago 1 reply      
Can anyone please tell about the overhead of Docker, compared to no-container scenario (not against a fat vm scenario)? I am a "dev" not "ops", but we might make use of Docker in our rapidly growing service oriented backend... Thanks
12
jeffheard 7 hours ago 1 reply      
This is crazy talk of course, but I wonder if there'd be some way to use rsync or git to support distributed development of images the way git does with code?

I mean, it'd be neat to be able to do a "pull" of diffs from one image into another related image. Merge branches and so on. I don't know, possibly this would be just too unreliable, but I would have previously thought that what docker is doing right now would be too unreliable for production use, and lo and behold we have it and it's awesome.

13
shimon_e 12 hours ago 0 replies      
The links feature will make deploying sites a million times easier.
14
Xelom 11 hours ago 1 reply      
Will it be possible to run Docker containers on Android? I may be asking this incorrectly, so correct me if I'm mistaken. My question might be "Will it be possible to run Docker containers on the Dalvik VM?" or "Can I run Android in a Docker container?"
15
gexla 12 hours ago 1 reply      
So, I assume that if you aren't using AUFS then you don't have to deal with potentially bumping up against the 42-layer limit? Or does this update also address the issue with AUFS?
16
unwind 12 hours ago 1 reply      
Annoying typo in the submission's title, it would be awesome if someone could fix that.

It's just s/distrubtions/distributions/, obviously.

17
dmunoz 9 hours ago 2 replies      
Nice to see Docker 0.7 hit with some very useful changes.

I see lots of people are getting some generic Docker questions answered in here, and want to ask one I have been wondering about.

What is the easiest way to use Docker containers like I would virtual machines? I want to boot an instance, make some changes (e.g. apt-get install or edit config files), shut down the instance, and have the changes available the next time I boot that instance. Unless I misunderstand something, Docker requires me to take snapshots of the running instance before I shut it down, which takes an additional terminal window if I started the instance with something like docker run -i -t ubuntu /bin/bash. I know there are volumes that I can attach/detach to instances, but this doesn't help for editing something like /etc/ssh/sshd_config.
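
The snapshot step doesn't actually require a second terminal, since a container can be committed after it exits. A sketch, assuming the stock CLI (myname/myimage is a placeholder):

  docker run -i -t ubuntu /bin/bash            # make your changes, then exit
  docker ps -l                                 # shows the just-exited container's ID
  docker commit <container-id> myname/myimage  # persist its filesystem as an image
  docker run -i -t myname/myimage /bin/bash    # next "boot" starts from the saved state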

18
neumino 9 hours ago 0 replies      
You guys are awesome, just awesome!

I was pretty sure that the requirement for AUFS would stick around for a long time -- I was resigned to using a special kernel. But again, you folks surprise me!

You guys just rock!

19
oskarhane 10 hours ago 1 reply      
Hmm, not sure I'm understanding #1 correctly. Can I install it on, let's say, Debian without Vagrant/VirtualBox now?

I can't find the info in the docs.

20
jfchevrette 12 hours ago 2 replies      
Unfortunately it looks like the documentation has not been updated yet...

So much for feature #7. Documentation should be part of the development/release process

21
vpsserver 5 hours ago 1 reply      
It doesn't run on a typical OpenVZ VPS.

Is there any alternative for separating apps on a single VPS?

22
Edmond 12 hours ago 0 replies      
Getting excited about Docker and LXC in general...
23
igl 12 hours ago 0 replies      
i like docker \o/
21
What is the enlightenment I'm supposed to attain after studying finite automata? stackexchange.com
325 points by netvarun  2 days ago   74 comments top 26
1
azov 2 days ago 2 replies      
The answer on SE is still very theoretical. Yes, technically we can model any computation [1] as an FSM, but when does it actually make sense to do so?

FSMs are often used when modelling physical devices - think microwaves, elevators, CD players, something with buttons or sensors, something event-driven, embedded controllers, also network protocols, text parsers, etc. Basically, whenever we must ensure correct behavior with few constraints on the sequence or timing of inputs.

How can you actually apply it in your everyday programming? Look at your class. Every method call is an event that can cause a transition. Member variables are the state. Theoretically changing any bit of any member variable gives you a different state. In practice you only care about the bits that affect your control flow. Figuring out how variables map to states is the tricky part. Sometimes one variable corresponds to several states (say, when airspeed goes over V1 a plane transitions from can abort to must takeoff state), sometimes several variables define one state (say, when ignition=on and door=closed car is ready to drive), sometimes you'll need to add an extra variable to distinguish between states. Then you draw a table with events on one side and states on another and look at each cell. This is when you have all those precious "oh, crap, what if the user hits play when the song is already playing" moments. And voila - now you know that your class behaves correctly because you've systematically considered every possible scenario. It takes enough effort that you probably don't want to do it for each and every class, but when you do it - it's magic :-)

[1] Any computation with finite memory, to be pedantic
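
A minimal sketch of the event/state table idea in Go; the player example and all names are invented for illustration:

  package main

  import "fmt"

  type State int
  type Event int

  const (
    Stopped State = iota
    Playing
    Paused
  )

  const (
    PressPlay Event = iota
    PressStop
    PressPause
  )

  // transitions[state][event] -> next state. Writing out the full table
  // forces you to consider every cell, including "play pressed while
  // already playing".
  var transitions = map[State]map[Event]State{
    Stopped: {PressPlay: Playing, PressStop: Stopped, PressPause: Stopped},
    Playing: {PressPlay: Playing, PressStop: Stopped, PressPause: Paused},
    Paused:  {PressPlay: Playing, PressStop: Stopped, PressPause: Paused},
  }

  func main() {
    s := Stopped
    for _, e := range []Event{PressPlay, PressPause, PressPlay, PressStop} {
      s = transitions[s][e]          // every (state, event) pair has a defined answer
      fmt.Println("now in state", s) // prints the numeric state value
    }
  }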

2
Confusion 2 days ago 1 reply      
The link does not answer the question. The answer is: there is no enlightenment you're supposed to attain immediately after studying finite automata. You've created the possibility for enlightenment further down the road.

Studying finite automata equips you with a bunch of knowledge about their behavior, which equips you with a bunch of knowledge about the behavior of a variety of other subjects, because they can be projected onto automata (can be reduced to automata, can be morphed into automata, ...).

The link lists a bunch of properties of, and relations between, various kinds of FAs and explains how these are useful to other areas of CS and even to regular programming. But you cannot be expected to have these thoughts as sudden epiphanies. At most you can at a later time realize a certain property exists, realize the same property exists for FAs and realize that makes sense, because of the relation between your subject and FAs. You can then further realize that you immediately know a lot more about the subject as well. Some of these realizations may strike you with a physically tingling sensation of aesthetic pleasantness which some call 'enlightenment', but YMMV.

3
raverbashing 2 days ago 4 replies      
I have to say I am very happy with this discussion and the "data structures in the Linux Kernel"

Such theoretical basis (and even their practical considerations) seems much more important than "What are the advantages of latest fad.js"

4
Adrock 2 days ago 2 replies      
The list in the 1st answer is amazing, but the 2nd highest answer has one of my favorite things:

> Turing machines (we think) capture everything that's computable. However, we can ask: What parts of a Turing machine are "essential"? What happens when you limit a Turing machine in various ways? DFAs are a very severe and natural limitation (taking away memory). PDAs are a less severe limitation, etc. It's theoretically interesting to see what memory gives you and what happens when you go without it. It seems a very natural and basic question to me.

I like to think of it the other way, though. A DFA with a stack is a PDA. A DFA with two stacks is a Turing machine. Looking at the 3 different classes as DFAs with 0, 1, or 2 stacks is just so beautiful...

5
diminish 2 days ago 2 replies      
If you're interested in learning Automata from Prof. Jeff Ullman of Stanford you may join the current session on Coursera. What's best about this course is that Prof. Ullman is always present in discussion forums, and helps everyone learn and enjoy great discussions. Join here https://www.coursera.org/course/automata

Edit: I'm currently learning there and enjoying it a lot. In video 6 especially, you can listen to the story of the Junglee startup: "Started in the mid-90s by three of his students, Ashish Gupta, Anand Rajaraman, and Venky Harinarayan. Goal was to integrate information from Web pages. Bought by Amazon when Yahoo! hired them to build a comparison shopper for books. They made extensive use of regular expressions to scrap..."

6
alexkus 2 days ago 0 replies      
For my Comp Sci degree I wrote a regexp/automata library. It allowed you to create/modify/load/save/execute NFAs, and offered the same kinds of functions for a basic regexp language (character classes, ., Kleene star *, ?, etc.) by first converting the regexp into an NFA, as each regexp operation maps to a small NFA fragment that can be bolted onto others with epsilon transitions.

You could then (attempt to) convert NFAs into a DFA using the subset (powerset) construction to determinise them; I say attempt as it was possible to make it consume huge resources if you gave it the right/wrong kind of regexp. It also did minimisation and a few other bits and bobs.

To say this gave me a good understanding of regular expressions is an understatement, and once you understand the basics then all of the parts of more complex regular expression languages become a lot easier! That was my enlightenment.
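
A minimal sketch of the "bolted together with epsilon transitions" idea in Go; the fragment constructors and the direct NFA simulation below are invented for illustration and cover only literals, concatenation, and the Kleene star:

  package main

  import "fmt"

  // Thompson-style construction: each operator yields a fragment with
  // one start and one accept state, glued to others via epsilon edges.
  type frag struct{ start, accept int }

  var (
    next int                      // fresh-state counter
    eps  = map[int][]int{}        // epsilon transitions
    chr  = map[int]map[rune]int{} // character transitions
  )

  func state() int { next++; return next - 1 }

  func lit(c rune) frag { // a --c--> b
    a, b := state(), state()
    if chr[a] == nil {
      chr[a] = map[rune]int{}
    }
    chr[a][c] = b
    return frag{a, b}
  }

  func concat(x, y frag) frag { // x.accept --eps--> y.start
    eps[x.accept] = append(eps[x.accept], y.start)
    return frag{x.start, y.accept}
  }

  func star(x frag) frag { // loop back and skip, all via epsilon
    a, b := state(), state()
    eps[a] = append(eps[a], x.start, b)
    eps[x.accept] = append(eps[x.accept], x.start, b)
    return frag{a, b}
  }

  func closure(set map[int]bool) { // expand set across epsilon edges
    stack := make([]int, 0, len(set))
    for s := range set {
      stack = append(stack, s)
    }
    for len(stack) > 0 {
      s := stack[len(stack)-1]
      stack = stack[:len(stack)-1]
      for _, t := range eps[s] {
        if !set[t] {
          set[t] = true
          stack = append(stack, t)
        }
      }
    }
  }

  func match(f frag, input string) bool { // simulate the NFA directly
    cur := map[int]bool{f.start: true}
    closure(cur)
    for _, c := range input {
      nxt := map[int]bool{}
      for s := range cur {
        if t, ok := chr[s][c]; ok {
          nxt[t] = true
        }
      }
      closure(nxt)
      cur = nxt
    }
    return cur[f.accept]
  }

  func main() {
    ab := concat(lit('a'), star(lit('b')))          // the regexp ab*
    fmt.Println(match(ab, "abbb"), match(ab, "ba")) // true false
  }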

7
austinz 2 days ago 0 replies      
Coming from an EE background I remember lots of emphasis over the techniques for converting finite state machines into HDL code. For me, it's really cool to see the topic approached from the opposite direction, so to speak.
8
bonemachine 6 hours ago 0 replies      
That nature computes (many, if not all natural processes can be thought of as computational in some respect), and that many computational systems can be thought of as at least analogs of physical or biological processes.

See for example: http://www.uc09.uac.pt/JamesCrutchfield.php

9
netvarun 2 days ago 0 replies      
I had come across this answer some time back, but got to revisit it again while reading another one of the author's (Vijay D. - http://www.eecs.berkeley.edu/~vijayd/#about ) answers trending on HN (https://news.ycombinator.com/item?id=6787836)

Additional discussion: https://news.ycombinator.com/item?id=6788969

10
chubot 2 days ago 3 replies      
> A modern processor has a lot in it, but you can understand it as a finite automaton. Just this realisation made computer architecture and processor design far less intimidating to me. It also shows that, in practice, if you structure and manipulate your states carefully, you can get very far with finite automata.

I don't understand this one. Surely a modern processor is like a Turing machine and not a finite state automaton. For example, a DFA/NFA can't express simple algorithms like matching parentheses, while obviously a modern processor can.

11
PaulHoule 2 days ago 0 replies      
I'd say that finite automata give practical answers to many parsing problems which would be difficult otherwise.
12
en4bz 2 days ago 0 replies      
I originally went into my Theory of Computation class this year grunting and moaning about how it was a waste of time and how I would rather be writing actual software. Now I actually kind of enjoy the class and can see its use. I find the discussion of decidability to be probably one of the most important parts. Although it's a little disappointing to find out that there are some problems we will never be able to solve.
13
blibble 2 days ago 2 replies      
On examinations of code it's normally pretty easy to see who has a formal understanding of state machines, and who doesn't.
14
MichaelMoser123 2 days ago 0 replies      
I think the insight is that each formalism is defined by/equivalent to the input language that it is able to parse;

State machines can do regular expressions, but they can't parse matching brackets,

Stack machines can do BNF grammars, but they don't have counters. Also, stack machines are more general than state machines; each regular expression can be expressed by an equivalent BNF grammar.

Turing machines can parse Perl, that is, any language that depends on context; this is because Turing machines have counters/memory. Also, Turing machines are more general than stack machines; each BNF grammar can also be parsed by a Turing machine.

All this business of deterministic/non deterministic machines is about efficiency of the parser; nice to know.
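
A minimal sketch of the brackets point in Go: with only one bracket type, the pushdown stack collapses to a depth counter, and no fixed number of DFA states can track arbitrarily deep nesting.

  package main

  import "fmt"

  // One stack (here a counter) is exactly the power a DFA is missing.
  func balanced(s string) bool {
    depth := 0
    for _, c := range s {
      switch c {
      case '(':
        depth++ // push
      case ')':
        depth-- // pop
        if depth < 0 {
          return false // closing bracket with an empty stack
        }
      }
    }
    return depth == 0
  }

  func main() {
    fmt.Println(balanced("(()())"), balanced("(()")) // true false
  }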

15
t1m 2 days ago 0 replies      
New appreciation for the goto statement.
16
segmondy 1 day ago 0 replies      
Some problems are best solved using FAs. Pac-Man? A vending machine? An ATM? Once you have the rules, you can be confident of no bugs.
17
crb002 2 days ago 1 reply      
This. Micron's in-memory automata are going to eat your lunch on many algorithms. Bye-bye, von Neumann bottleneck. http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd...
18
knicholes 2 days ago 0 replies      
Oh yeah-- Without studying finite automata, writing my compiler would have sucked.

[edited] would have sucked even more

19
GhotiFish 2 days ago 2 replies      
>Turing machines (we think) capture everything that's computable.

??

eh?! we think?

Isn't the definition of computable, "something that can be reached by a Turing machine?"

20
ape4 2 days ago 3 replies      
What's the best book on this? (I'd prefer something non-academic)
21
robdd 2 days ago 0 replies      
ROBDD: Reduced Ordered Binary Decision Diagram

A flow chart with a truth table, which is used to map all of the possible ways to arrive at one or zero by carrying out a set of steps, which may follow recursive paths, such that the similarities of different paths to identical outcomes can be better understood.

Where (

  reduced = all roads lead to one or zero

) and (

  ordered = applying good organization to the steps of the decisions in the flow chart, even when steps in the decision tree can be carried out in a variety of ways

)

22
lindig 2 days ago 3 replies      
You might realise that parsing HTML or C with regular expressions is a bad idea.
23
undoware 2 days ago 0 replies      
...cannot be played on record player B.
24
mrottenkolber 2 days ago 0 replies      
I'd say after learning about different classes of automata, you should understand what a compiler does in general. At least, that's what I took from it.
25
jmount 2 days ago 1 reply      
Suffix trees. That kind of hyper-efficient recognition is even harder to think about without the finite automata theory behind regular expressions.
26
itistoday2 2 days ago 1 reply      
I didn't study finite automata at school, but I studied cellular automata on my own. If anyone would like to describe the difference (if there is one), it'd be appreciated.

W.r.t cellular automata, I made this video, and if this isn't enlightenment, I don't know what is: https://vimeo.com/80147310

22
FDA Warning Letter to 23andMe fda.gov
323 points by jefffoster  1 day ago   404 comments top 38
1
jfasi 1 day ago 5 replies      
This seems absolutely reasonable. The letter indicates that the FDA has notified 23andMe that their products are not satisfactorily cleared, they're reached out to them several times, and they've offered assistance through a group they specifically set up to help companies in this situation.

Meanwhile, 23andMe went ahead and began marketing and selling their product, despite the FDA's concerns.

Relevant quotes:

> Most of these uses have not been classified and thus require premarket approval or de novo classification, as FDA has explained to you on numerous occasions.

> However, to date, your company has failed to address the issues described during previous interactions with the Agency or provide the additional information identified in our September 13, 2012 letter for (b)(4) and in our November 20, 2012 letter

> To date, 23andMe has failed to provide adequate information to support a determination that the PGS is substantially equivalent to a legally marketed predicate for any of the uses for which you are marketing it; ...

> ...we have proposed modifications to the device's labeling that could mitigate risks and render certain intended uses appropriate for de novo classification.

> As part of our interactions with you, including more than 14 face-to-face and teleconference meetings, hundreds of email exchanges, and dozens of written communications, we provided you with specific feedback on study protocols and clinical and analytical validation requirements, discussed potential classifications and regulatory pathways (including reasonable submission timelines), provided statistical advice, and discussed potential risk mitigation strategies...

> Thus, months after you submitted your 510(k)s and more than 5 years after you began marketing, you still had not completed some of the studies and had not even started other studies necessary to support a marketing submission for the PGS.

2
k-mcgrady 1 day ago 16 replies      
Am I the only one this seems completely reasonable to? There are probably people who take action over the results they get from the service, and if the results are incorrect, the actions could have a negative impact on their health. Therefore the service should have to prove the results are accurate before advertising it as a first step in prevention.
3
guylhem 1 day ago 12 replies      
As usual, the government is trying to meddle with companies. Read the letter, but make no mistake - the "kind" tone, especially when reminding how they did their best to get in touch, have meetings, help ensure compliance, etc., is just a decoy. The truth is in "must immediately discontinue marketing the PGS".

The gov wants to decide what's best for the people. Should the people decide differently, using their wallets for example, this anomaly will be quashed.

This trend is especially strong in the medical field - gov approval required everywhere, and then people wonder why medical things are so damn expensive.

I use 23andme and I'm happy with the information provided. I know it's not reliable - it's not a lab test anyone will use to base important decisions on, since it is not a full sequence of genes.

Yet, by being commercially available and easy to use, it is paving the way for commercial offers of full genome sequencing, which I damn well intend to use when they reach the $500 threshold.

However, gov actions like this one may very well make that impossible, making sure the only full genome sequencing offers there will be will be "FDA cleared" at a huge markup.

Suggestions to "medical"-like companies: get out of the gov's eye. Move your business to Asia, the Caribbean, or wherever the gov will not get in your way like this. I want to keep using (and recommending) your products!

4
saalweachter 1 day ago 4 replies      
I'd like to point out that -- as much as I'm sure most people love 23andMe -- it's not really a scrappy start-up. It's a nearly 8-year-old company which has received something like a hundred million in funding.

So this isn't really a case of the gov't stomping on the little guy before he has a chance to grow; 23andMe has had its chance to grow into a fairly big deal, and now it's time to start playing by the rules.

5
pilom 1 day ago 1 reply      
Summary: 5 years ago, 23andMe started marketing a test for, among other things, a BRCA indicator.

July and Sept 2012 - 23andMe submits a form to the FDA saying "our test isn't really useful for diagnosis and thus shouldn't fall under these rules"

Nov 2012 - FDA says "we don't agree with you; you need to either prove your effectiveness or change your marketing"

Jan 2013 - 23andMe says "it will take us a couple months to do the tests, we'll get back to you"

Nov 2013 - FDA says, "it's been 11 months and you never got back to us. Stop selling and let us know within 15 days what you're going to do", explicitly because you ignored us for close to a year.

6
sqrt2 1 day ago 2 replies      
This reminds me of a blog post in German [1] by a person who, due to a software bug, had been falsely diagnosed by 23andMe with limb-girdle muscular dystrophy. (Fortunately, he was able to identify that it was a misdiagnosis.) It appears that in this case potential misdiagnoses aren't just a theoretical problem.

[1] http://www.ctrl-verlust.net/23andme-wie-ich-fur-todkrank-erk...

7
rmrfrmrf 1 day ago 4 replies      
Anyone who thinks the FDA is overreaching here has little awareness of how stupid the majority of the world's population is. The people that this protects don't know how to even read this statement from the FDA.

Perhaps the startup echo chamber has more respect for unchecked opportunism; if that's the case, someone should make a startup called 23andMeFree (monetized by ads, duh) that has you spit in a tube and send it to some PO Box, then randomly generates positives and negatives across the board. If you wanted the scam to last longer, you could even generate random values based on statistics of certain characteristics. A true libertarian must support such a business.

8
sethbannon 1 day ago 3 replies      
The tone of the letter was surprising to me. I wouldn't expect the FDA to go quite so far in explaining the why behind the desist letter. I suppose that's because it was written just as much for public consumption as it was for 23andMe.
9
geetee 1 day ago 3 replies      
I understand the FDA getting pissy if I'm consuming/injecting a substance that may or may not harm me, but why this? I spit in a tube and get some results which may or may not be accurate. Go away, FDA.
10
labaraka 1 day ago 1 reply      
> For instance, if the BRCA-related risk assessment for breast or ovarian cancer reports a false positive, it could lead a patient to undergo prophylactic surgery, chemoprevention, intensive screening, or other morbidity-inducing actions, while a false negative could result in a failure to recognize an actual risk that may exist.

I cannot imagine someone getting surgery or chemo solely based on a 23andme heads-up warning and without consultation with a specialist physician.

As someone working in medical devices, this dramatic language is extremely frustrating.

11
wheaties 1 day ago 0 replies      
When the FDA sends you a letter like this, you either pay a fine and change tactics or you embark on one of the most frustrating approval processes known to the world (all for people's protection.) Good luck with that.
12
btilly 1 day ago 1 reply      
Silicon Valley "It is better to ask for forgiveness" culture meets the worst of government "We'll need that in triplicate 5 years in advance of starting to look at the paperwork."

This should be interesting, if you have the patience to watch the fallout in slow motion.

13
Sephr 1 day ago 1 reply      
Obviously 23andMe results are not a diagnosis from your physician. You should use 23andMe in conjunction with a real physician. For example, I used 23andMe back in 2010 and it told me I had a high risk for a certain genetic condition which I recalled one of my family members having, so I went to see my physician for a real diagnosis, armed with this newly found information.

23andMe helped me catch this early enough with the assistance of my physician that I was able to get treatment long before I would have developed symptoms. If I never used 23andMe I probably would not have had this diagnosed until years later.

14
seehafer 1 day ago 1 reply      
I'm most curious to see how 23andMe is going to respond to this, because the more technical-regulatory language in this letter says essentially that in the FDA's opinion the device is Class III (the highest risk of all medical devices/diagnostics) and would require a PMA, unless 23andMe provides the evidence that allows the FDA to de novo classify the device as Class II.

A Class III ruling would destroy the personal genomics market, because it would mean extensive clinical testing and documentation about the development of the device. I hope it doesn't stand.

15
carbocation 1 day ago 0 replies      
> Therefore, 23andMe must immediately discontinue marketing the PGS until such time as it receives FDA marketing authorization for the device.

No joking around.

16
logfromblammo 1 day ago 0 replies      
It is only reasonable in the context of the standard operating procedures of the FDA. As this is essentially prior restraint upon speech and/or trade, it is unreasonable, but no more unreasonable than anything else the FDA does.

Ideally, the FDA would have independently-generated evidence indicating that the product in question is unreasonably dangerous or ineffective for some intended purpose before issuing a cease and desist order. Instead, they simply assume guilt and place the burden of proof upon the vendor.

Given that the FDA has vastly more resources than 23andMe, and companies like it, this makes the FDA seem like bullies against microbusinesses, and like the captured servants of agricultural and pharmaceutical megabusinesses.

17
SCAQTony 1 day ago 3 replies      
Never once did they attack the technology but rather the potential for error or the consequences if the public "can't handle the truth."

The closest they come to calling her a "quack" is when they state "...We still do not have any assurance that the firm has analytically or clinically validated the PGS for its intended uses..."

The bigger question is why is the FDA having a seizure over this? Could it be the potential for added treatment and preventative healthcare measures that insurers, vis-a-vis the Affordable Care Act, are not looking forward to paying for? (Not a rhetorical question, just asking)

18
ChikkaChiChi 1 day ago 0 replies      
If you believe 23andMe is a sound source for the clinical diagnosis of medical conditions, you're probably not going to read the fine print telling you otherwise.
19
mbreese 1 day ago 0 replies      
Is there anyone else that sees this as a positive thing for the company? They've lived under the cloud of potential FDA regulation for a while, and I'm a bit surprised that it took this long for the FDA to step in.

Obviously, it would have been preferable to have the company and FDA work together to announce how FDA regulations apply before an enforcement action. But, now that it has happened, the process has started. If the company can come out of this with some kind of FDA approval, then that cloud will be lifted and they can keep on working. And then the company will know exactly what rules they'll have to play by. So, depending on how things work out, it could end up being a positive for the company.

Now that the FDA has played their hand, I'm very curious to know how the company will respond.

20
jheriko 1 day ago 0 replies      
I have to agree that this seems completely reasonable... glad to see so few commenters jumping to the expected conclusion that this is some kind of government oppression.
21
patrickg_zill 1 day ago 0 replies      
Crap... I better buy the test for an older relative (84) like I have been saying I would, pronto.

Before the FDA gets its hooks into it and the price goes up...

22
crb002 1 day ago 0 replies      
CMS/HHS will look like idiots as mRNA sequencing dips below the $100 per test price point and doctors for the first time will have a histogram of genes turned on for use in their diagnosis.

Instead of whinging, FDA needs to partner with NIST to come up with quality control protocols so doctors know the error distribution in data that they receive.

23
mankypro 1 day ago 0 replies      
This is simply the result of the medical lobby. This pressure is being put on them simply because it takes power from the gods of medicine. If I order my own bloodwork (from the same labs that my medical foundation uses), somehow the same bloodwork costs me 20% of what it would otherwise. This is about taking away the ability to monitor your own health, in order to enrich the medical community.

This will simply result in this type of testing moving beyond FDA borders. Great job FDA, you're helping kill a successful and profitable US company. Un-F-ing-believable.

24
mac1175 1 day ago 2 replies      
I JUST bought the kit yesterday. The FDA is right though. Imagine making extreme decisions (e.g., double mastectomy to avoid breast cancer) based upon the information. This is making me consider cancelling my order.
25
cliftonk 1 day ago 0 replies      
I'm not sure if 23andme's management team is totally out of touch or if they have genuinely misled the public about their product's effectiveness. I'd assume the latter. They could have easily done periodic check-ins with the FDA to throw them a bone while gathering longitudinal data to support their claims.
26
kefka 1 day ago 0 replies      
Why not just put a disclaimer like one you'd see on a late night psychic ad service?

"This service is intended for entertainment purposes only."

No disclaimer about medical anything. And people pay more than $99 for psychic services.

27
DennisP 1 day ago 1 reply      
I wonder whether a company could bypass the FDA by simply giving people their genome data, without any interpretation or diagnosis, which could be left up to people's doctors, opensource software, etc.
28
javert 1 day ago 3 replies      
More evidence that we live in a very mixed economy, not under free market capitalism.

Government agencies empowered to wield regulatory force against citizens are a threat to everyone, and this is a case in point.

29
mkramlich 1 day ago 0 replies      
When evaluating startup risks/events there needs to be a standard term or acronym for "doesn't matter; spouse is multi-billionaire."
30
bhartzer 1 day ago 0 replies      
You would think that before putting so much money, time, and effort into 23andMe they would have had discussions with the FDA and actually responded with more information. I'm amazed that it didn't happen.
31
brosco45 1 day ago 0 replies      
Yeah, that is why we need a Health Freedom Constitutional Amendment.
32
FrankenPC 1 day ago 1 reply      
Magic sentence: "For entertainment purposes only"
33
aabalkan 1 day ago 2 replies      
It's been years since 23andMe came out and the FDA just noticed? Or am I getting it wrong?
34
matponta 1 day ago 0 replies      
That could get interesting... Lots of startups are popping up everywhere with DNA-related products...
35
slashdotaccount 1 day ago 3 replies      
Is there a list of government agencies' IP addresses? I would like to block them from accessing my sites, and write in the ToS that they are not allowed to browse and read them.
36
drakaal 1 day ago 3 replies      
The back story is 23andMe declined a federal request for a customer's DNA. This was the government backlash as a result of protecting user privacy.
37
boonez123 1 day ago 2 replies      
Owner is married to the founder of Google. I think they'll be okay. :)
38
tomelders 1 day ago 0 replies      
Well I suppose that now that all the food is safe to eat, they've got time on their hands to take down the sinister corporate bad guys behind 23&Me.
23
Half an operating system: The triumph and tragedy of OS/2 arstechnica.com
310 points by jorgecastillo  2 days ago   186 comments top 33
1
Stratoscope 1 day ago 5 replies      
> Long before operating systems got exciting names based on giant cats and towns in California named after dogs, most of their names were pretty boring.

Ah, yes. Mavericks, California. It's a great little offshore town, just off Pillar Point. I love that town.

Kidding aside, this is a great article.

Related to this story, the Windows 3.0 visual shell was originally not supposed to be Program Manager and File Manager. It was going to be a program called Ruby that I worked on with Alan Cooper and our team.

Ruby was a shell construction kit with a visual editor to lay out forms and components, which we called gizmos. You would drag arrows between gizmos to connect events fired by one gizmo to actions taken on another.

The shell was extensible, with an API for creating gizmos. A really weak area was the command language for the actions to be taken on an event. It was about on the level of batch files if that. But we hoped the API would allow for better command languages to be added along with more gizmos.

BTW, this project was where the phrase "fire an event" came from. I was looking for a name for process of one gizmo sending a message to another. I knew that SQL had triggers, but for some reason I didn't like that name. I got frustrated one night and started firing rubber bands at my screen to help me think. It was a habit I had back then, probably more practical on a tough glass CRT than it is today.

After firing a few rubber bands, I knew what to call it.

(As one might guess, I've always been curious to know if the phrase "fire an event" was used before that. I wasn't aware of it, but who knows.)

Anyway, Ruby didn't become the Windows 3.0 shell after all. They went with ProgMan and FileMan instead. To give Ruby a better command language, they adapted Basic and the result was Visual Basic. Gizmos were renamed "controls" (sigh), and my Gizmo API became the notorious VBX interface (sorry about that).

And we still don't have a programmable visual shell in Windows.

2
protomyth 1 day ago 1 reply      
I bought OS/2 for work to run on some DEC PC (not the damn Rainbow, the decent 486 DEC sold). The graphics card (S3) wasn't supported out of the box, so I called IBM and got nowhere other than an acknowledgement it existed.

I called DEC and they too believed it existed, so they (while I was still on the line) called their contact at IBM. After being transferred twice, we arrived at the person who could mail me the driver, but I would have to sign an NDA. Myself and the DEC rep explained we didn't want source or a beta driver, just the release one. He insisted every customer had to sign. I said I'd think about it. After hanging up, the DEC rep couldn't stop laughing. He asked if I wanted a free copy of NT compliments of DEC. I took it and it had the correct driver.

I tried, but they had no chance.

3
jboggan 1 day ago 1 reply      
Up until March of last year a lot of ATMs in the US were still running OS/2 . . . I "upgraded" a lot of them to Windows XP. Yuck.

When I would take the OS/2 system offline and replace it with a Windows cage the payment network would sometimes tell me the uptime on the deprecated machines . . . one network operator claimed 8 years of uptime at one particular machine. I have no way of confirming that, but I definitely felt the OS/2 machines were rock solid, especially compared to the vulnerable Windows machines. Most small banks with NCR machines are running two software packages (APTRA Edge or Advance) with default admin passwords and are really behind on the monthly bug patches. Eek.

The OS/2 machines required you to input config info in hex though, so I was glad I didn't have to work on them in the field too much.

4
melling 2 days ago 3 replies      
If anyone wants an insider's view, here's a Usenet post from one of the early Microsoft employees, Gordon Letwin:

http://gunkies.org/wiki/Gordon_Letwin_OS/2_usenet_post

Somewhere in the Usenet archive is Gordon trolling the OS/2 users for weeks (or months?) on end. I can't remember the exact details, but he had a bet with several people that Windows would have multitasking, or that OS/2 wouldn't have some sort of multitasking before Windows. The bet was to fly the winner to any city of their choice and buy them dinner.

The discussions were quite heated and it was particulary memorable because he was one of the first 12 employees at Microsoft.

http://en.wikipedia.org/wiki/Gordon_Letwin

5
mdip 1 day ago 2 replies      
I worked at a large computer chain (R.I.P. Softwarehouse / CompUSA) from 1993 - 1996 and had been building clone computers for businesses from 1990-1993. I remember how this played out very well.

At the time, IBM had sent in scores of company reps to train up our floor staff on the advantages of OS/2 over the always-soon-to-be-released Chicago. They did a good job getting all of us to "drink the Kool Aid". I received a free (not pirated, promotional) copy of blue OS/2 Warp 3.0. It was a fantastic operating system for running a DOS based multi-node Telegard BBS and it did well with Win16 applications.

The impact of Windows 95 coming on the scene, though, is difficult to fully appreciate unless you were there. We had been selling pre-orders for months and there were a myriad of promos. I remember some of those preorders were sold under the threat that there wouldn't be enough copies to go around on release day. I had been playing with pirated copies of the betas of Windows 95 for the prior two months. Even in its beta form, it ran circles around Windows 3.0/3.1 in terms of reliability. I even remember reloading my PC with the most recent beta after release because a DOS application I used ran more reliably in it than in the RTM code.

Then launch day came. It was unlike anything I had ever seen in terms of a software release. We closed up at 9:00 PM and re-opened at 12:00 midnight to a line of customers that went around the building --- A line of customers ... for an operating system. We joked at the time that "Windows really was that bad". There were tons of additional promotions to ensure people came and lined up--Some RAM / hard disks selling under "cost" and others. And the atmosphere of the store felt like a party. We had theme music playing (start me up?) and some Microsoft video playing on our higher-end multi-media PCs. It was obvious to us, on the floor, trained by IBM's marketing machine, that Warp died that day.

As an anecdote to the stories about IBM's marketing being a little off: I remember around the release of Warp 4.0 seeing an advertisement at a subway station that said something along the lines of "Warp Obliterated my PC!" -- that tagline evidently meant to be some hip new use of the word obliterated.

6
steven2012 1 day ago 3 replies      
I actually bought a copy of OS/2 Warp when it came out because I was interested in its preemptive multi-tasking, and it was a decent operating system for what it's worth. It was definitely more stable than Windows 3.11, but its real problem was compatibility. Back in the early 90s, everything was about compatibility, and while OS/2 had good compatibility, it didn't have perfect compatibility.

As well, I worked at a bank, and as the article correctly stated, the entire bank was run on OS/2, most notably the ATMs, except the ATMs I worked with were using OS/2 2.0.

However, when Windows NT 3.51 came out, that was the game changer. I was the only person I knew who even knew what it was (I read about it in a magazine at the time), and I was able to get a student-priced copy at my college bookstore. I started using it, and it was awesome, everything just worked, except for some games. You couldn't even compare NT 3.51 to OS/2, it wasn't even in the same level. The look and feel of NT was exactly the same as Windows 3.11, and all the programs worked.

7
sehugg 1 day ago 11 replies      
OS/2 had many flaws but its multitasking was unseen on a PC at the time. I remember formatting a floppy while running two 16-bit Windows sessions (which were communicating with each other) and multiple DOS windows, thinking I was in the future.

Even Windows 95 was limited by many system calls being funneled through single threaded BIOS or DOS 16-bit land.

8
jjguy 2 days ago 3 replies      
If you liked this article, you should read Show Stopper. It's out of print, but there are ample used copies via Amazon.

http://www.amazon.com/gp/aw/d/0029356717/

9
michaelhoffman 2 days ago 3 replies      
> IBM licensed Commodore's AREXX scripting language and included it with OS/2 2.0.

I find this hard to believe, given that Rexx was developed by IBM.

http://en.wikipedia.org/wiki/Rexx

10
lbarrow 2 days ago 2 replies      
Great read. In the end, it seems like a classic failure to resolve the innovator's dilemma. IBM decided that the future of computing would revolve around mainframes because they liked mainframes, not because that's where the facts led them. And ultimately they paid the price for it.
11
damian2000 2 days ago 4 replies      
Incredible that companies like Apple (Mac), Atari (ST) and Commodore (Amiga) weren't able to fully capitalise on their leading position in GUI based OSes of the time, which were miles ahead of both MS and IBM.
12
dangoldin 1 day ago 2 replies      
I remember my dad getting a promotional shirt for OS/2 with the caption "Flight 4.0 to Chicago has been delayed, I'm taking off with OS/2"

The idea being that Windows 95 was internally called Windows 4.0 with the codename Chicago.

I keep on searching for it but can't find it anywhere.

And Bill Gates on OS/2 in 1987: "I believe OS/2 is destined to be the most important operating system, and possibly program, of all time."

13
nnq 1 day ago 1 reply      
Why aren't the unix-family OSs of the era part of the story? Why didn't IBM even consider porting a unix-family OS to the PCs instead of paying an unproven company like Microsoft to write an OS?

(...all the events from this stretch of computing history seem so weird to me, like from a steampunk-like alternate reality movie. There's surely lots of context missing and stories that nobody will ever tell, since most of the decisions taken by all the key players seem so anti-business. Computers may have changed a lot from back then, but business is still business and all the decisions made seem either "irrational" or based on "hidden information" that is not part of the story.)

14
outworlder 1 day ago 0 replies      
And not a single mention of Babylon 5.

The special effects were created on Amigas: http://www.midwinter.com/lurk/making/effects.html

Also, while looking at Video Toaster's entry on Wikipedia, I found this gem:

"An updated version called Video Toaster 4000 was later released, using the Amiga 4000's video slot. The 4000 was co-developed by actor Wil Wheaton, who worked on product testing and quality control.[6][7] He later used his public profile to serve as a technology evangelist for the product.[8] The Amiga Video Toaster 4000 source code was released in 2004 by NewTek & DiscreetFX."

http://en.wikipedia.org/wiki/Video_Toaster

15
kabdib 1 day ago 1 reply      
When OS/2 Warp came out, I remember it being insanely cheap ($20?), so in a what-the-hell mood I bought it. Took it home and tried to install it. It was a hopeless mess of disks, both optical and floppy, and I never got it to run.

One of my cow-orkers at Apple had worked on the OS/2 Presentation Manager at IBM. I tried talking with her about it, but she said the experience had been "absolutely awful" and she didn't want to say much else.

IBM never had a chance.

16
kickingvegas 1 day ago 1 reply      
17
yuhong 1 day ago 1 reply      
The sad thing is that I have never seen an article on the entire MS OS/2 2.0 fiasco that is what I would call complete and detailed; many omit, for example, the unethical attacks MS made against OS/2, such as the "Microsoft Munchkins". I tried with my own blog article, but I admit it is not very good either.
18
mschaef 1 day ago 0 replies      
I have fond memories of OS/2 from the summer of 1995. At the time, I was an undergraduate at the University of Texas at Austin, and IBM needed summer intern testers for a product they were calling "OS/2 Lan Server Enterprise". OS/2 LSE was IBM's effort to re-platform OS/2 LAN Server on top of OS/2 DCE (in development in the lab next door to LSE). The general idea was to provide a way to scale up OS/2 so that it would interoperate with other DCE-based systems (mainly RS/6000 AIX, IIRC).

Anyway, the machine IBM gave me to use was a PS/2 Model 80. This was a 1988-era machine that had been brought into the semi-modern era with 20MB of RAM installed via several MCA expansion cards. Against my best expectations, the machine ran well, despite the fact that its CPU was at least 10% the speed of the then state of the art.

From what I remember, the OS/2 LSE product itself was fairly solid. However, the biggest memory I have from that summer was the afternoon we spent playing around with the Microsoft Windows 95 beta disk we received for compatibility testing. Towards the end of the afternoon, we tried to DriveSpace (compress) the disk. We got bored during the wait for the compress, so we pulled the power on the machine thinking that would be the end of it. However, once we powered the machine back up to install OS/2, Windows 95 just resumed compressing away like nothing happened. A few weeks later, a friend and I went to CompUSA for the Windows 95 launch. Even at midnight, there was a line out the door, winding past the Windows 95 boxes, then the Plus Pack, then Office 95, and then memory upgrades... Didn't hear much about OS/2 after that...

19
zenbowman 1 day ago 0 replies      
Thank you for sharing, a great article indeed.

I'm glad it ended up the way it did. Microsoft at the time was betting on openness being a feature, and I think they helped move the computer and software industries in the direction they have gone since, towards greater openness (and thereby professionalism).

People associate Microsoft with closed source, but it is of course relative; they were in their day the vendor that was banking on openness and courting developers harder than the others.

20
picomancer 1 day ago 0 replies      
Here's a perspective from the founder of a successful bootstrapped software startup that began by developing native OS/2 applications:

http://www.stardock.com/stardock/articles/article_sdos2.html

I own a copy of Galactic Civilizations 2 for OS/2.

21
forgottenpaswrd 1 day ago 0 replies      
"Version 3.0 was going to up the graphical ante with an exciting new 3D beveled design (which had first appeared with OS/2 1.2)"

I think it was NeXT who got there first.

22
jgeorge 1 day ago 0 replies      
One of my fondest memories of OS/2 (there weren't many, sorry) was finding a media file on one of the diskettes called IBMRALLY.MID which was a little piano rendition of "Ever Onward, IBM" from way back when in the Way Back When Days of IBM.
23
justincormack 1 day ago 1 reply      
As this article admits, it is just a rewrite of "Triumph of the Nerds".
24
zura 1 day ago 1 reply      
Hah, I was thinking of asking HN these days about the availability of exotic jobs, including OS/2 (or eComStation) programming jobs. I also wouldn't mind taking Motif jobs. Feel free to contact me if you have any ;)
25
mathattack 2 days ago 0 replies      
I remember an internal training class at a large consulting firm in the mid 90s that was using OS/2. I thought, "This is an awful sign. Are we doing this just to get some business with IBM?" They were a big user of Lotus Notes 2. You never know...
26
nikbackm 1 day ago 0 replies      
Interesting read. This part really brought the current Windows 8 push by Microsoft to mind.

"These machines were meant to wrestle control of the PC industry away from the clone makers, but they were also meant to subtly push people back toward a world where PCs were the servants and mainframes were the masters. They were never allowed to be too fast or run a proper operating system that would take advantage of the 32-bit computing power available with the 386 chip. In trying to do two contradictory things at once, they failed at both."

Not quite the same situation, but they have many similarities.

27
mnw21cam 1 day ago 0 replies      
The article says that the Mac OS was the only OS to ship that ran on PowerPC CPUs. This is not true - later versions of the Amiga OS ran on PowerPC.
28
swampboy 1 day ago 0 replies      
I enjoyed the article. It was a nice trip down memory lane. Regarding development tools, there were 2 commercial IDEs based on REXX: Watcom VX-REXX and VisPro REXX. I used Watcom's VX-REXX and it was a joy to use and allowed for incredibly fast and powerful application development. I heard the same about VisPro REXX. IBM's early set of tools C/2 and C Set++ were a bit painful to use. VisualAge C++ 3.0 was a decent toolset once you got over the weirdness of it. For a while, if you wanted to write C or C++ code using sockets you had to purchase IBM's TCP/IP SDK for $100.

The SIQ was a "synchronous" input queue and the problem has been understated in the article and comments. It was really bad. The base OS was incredibly stable, but the GUI shell, not so much due to the SIQ problem.

There were a number of Unix and Unix-like systems in addition to the ones already listed: Coherent, Interactive, and SCO are some that come to mind. They were pretty expensive IIRC, around $1000 to license.

29
erichocean 1 day ago 0 replies      
I seem to recall UPS widely deploying OS/2 internally back in the 90s, with custom (internal) apps written for it.

Probably all Windows by now...

30
jlebrech 1 day ago 0 replies      
Don't outsource for the sake of a few months; if things take time, then they take time.
31
derleth 1 day ago 0 replies      
> So the new System/360 mainframe line would run the also brand-new OS/360.

Bad example. Really bad example: Not even IBM could standardize on a single OS for the System/360.

The System/360 went through a few OS iterations before OS/360 came along: OS/360 was late, as recounted in The Mythical Man-Month, so DOS/360 came along, then BOS/360, then TOS/360, and even PCP, which didn't support multiprogramming. Other OSes were CP-67, which became VM, MFT, MVT, and still more OSes on top of that.

To this day, there are multiple OSes for the architecture descended from the System/360, including Linux.

32
mp99e99 2 days ago 0 replies      
This was a great article, thanks for posting!
33
mbennett 1 day ago 3 replies      
>Finally, and most importantly for the future of the company, Bill Gates hired the architect of the industrial-strength minicomputer operating system VMS and put him in charge of the OS/2 3.0 NT group. Dave Cutler's first directive was to throw away all the old OS/2 code and start from scratch. The company wanted to build a high-performance, fault-tolerant, platform-independent, and fully networkable operating system. It would be known as Windows NT.

A couple of decades later, Dave Cutler is still around at Microsoft and worked on the hypervisor for the Xbox One at the ripe young age of 71, allowing games to run seamlessly beside apps.

From http://www.theverge.com/2013/11/8/5075216/xbox-one-tv-micros...

>Underneath it all lies the magic: a system layer called the hypervisor that manages resources and keeps both platforms running optimally even as users bounce back and forth between games, apps, and TV.

>To build the hypervisor, Multerer recruited the heaviest hitter he could find: David Cutler, a legendary 71-year-old Microsoft senior technical fellow who wrote the VMS mainframe operating system in 1975 and then came to Microsoft and served as the chief architect of Windows NT.

>It appears his work bridging the two sides of the One has gone swimmingly: jumping between massively complex games like Forza Motorsport 5, TV, and apps like Skype and Internet Explorer was seamless when I got to play with a system in Redmond. Switching in and out of Forza was particularly impressive: the game instantly resumed, with no loading times at all. "It all just works for people," says Henshaw as he walks me through the demo. "They don't have to think about what operating system is there."

24
How to Get Good at Chess, Fast gautamnarula.com
310 points by gautamnarula  2 days ago   148 comments top 29
1
birken 2 days ago 6 replies      
And for those that are interested in building their interest in chess in a way that is more entertainment than education, I highly recommend watching "live commentaries", i.e. chess players playing against other people online while commentating live.

My personal favorite is IM Greg Shahade (aka curtains), who has hundreds of these videos online [1]. There are also a bunch on youtube from other sources, but in my opinion curtains is the most entertaining (he is also quite good, the 49th ranked player in the US).

[1]: http://www.chessvideos.tv/chess-video-search.php?q=curtains+...

2
rmrfrmrf 1 day ago 5 replies      
> "One big mistake is to rely heavily on computers for chess analysis. Computer analysis should be done only after you analyze the game on your own."

This isn't correct information anymore, in my opinion. Chess engines have improved tenfold over the last four years (when the author says he stopped playing). The increase in the strength of chess engines has subsequently caused an increase in the "humanity" of chess engines, meaning that, instead of playing bizarre moves that are strong yet incomprehensible to humans, they play principled, sound moves that are strong tactically and strategically.

The main thing that you will miss as a sub-2000 player (or ever, really) is tactics, which is exactly where computers excel. A computer will be able to tell you tactics you missed and will allow you to experiment to see how different moves would have improved your game.

I agree that you should analyze games with your opponent after the game (and also with stronger players), but keep in mind that, if you're both sub-2000, you'll both miss obvious tactics even as you review the game, which doesn't really improve your chess thought.

One last thing: the 400 points in 400 days training comes from the book Rapid Chess Improvement, which I do not recommend for the beginning player (the knight exercise is good, though). In it, Michael De La Maza wastes time blasting Jeremy Silman and the strategic approach to chess games. Some people like the book for the mild drama it started, but the tl;dr is "tactics, tactics, tactics," which pretty much everyone will tell you.

3
karpathy 1 day ago 1 reply      
I'm part of this "renewed interest in chess", as I played competitively when I was in high school but then stopped for several years until I came across this year's championship a few weeks ago.

I've tried several things to get back into chess but so far I've gotten the most "bang for the buck" on http://chesstempo.com/ , go to Training > Chess Tactics. It's fabulous, it feels very useful and it's even highly addictive. And for chess videos I warmly recommend ChessNetwork on YouTube. Lastly, I found a really nice app on iPad just called "Chess", which lets me squeeze in quick games in-between events and it's optionally computer assisted, which can help find interesting moves.

4
nsxwolf 1 day ago 7 replies      
Will anyone here besides me admit they've never won a game of chess, ever, not once? Even against some random little kid who only knows how the pieces move, at Thanksgiving?

I have this suspicion that some part of my brain is damaged and I'll never be able to play chess. I've made many attempts at learning, but have never improved over randomly moving pieces around the board.

Any advice on a resource that will help me at least not embarrass myself, even if I still can't win? "Play more chess" doesn't seem to be the answer. I don't think practice helps if you are practicing poor chess.

5
tzs 1 day ago 3 replies      
Speaking of getting better at chess, someone on Reddit pointed out that Shredder for iPhone/iPod and iPad went on sale at half price ($3.99 [1]) for the world chess championship.

There are stronger chess engines for iOS available for free (Stockfish and Smallfish), and Shredder has some interface annoyances (the move list only shows the last couple of moves, making it annoying if you want to jump around while analyzing a game), but its saving grace is that it seems (both from what I've read and what I've experienced after a few games with it) to be better at playing at a lower level.

Many engines, when asked to dumb it down to give the human a chance, play like a grandmaster and then suddenly make a dumb sacrifice or ignore an attack on a piece--and then they go back to playing like a grandmaster.

That doesn't give the human a good game. It gives the human an ass kicking, then a brief moment of hope, and then teaches the human that even if the engine gives him rook odds or more it will still destroy him.

Shredder's lower levels seem to me to actually play pretty much like humans of that level. It keeps track of how you do against it at various levels, and by default automatically adjusts its level based on your performance.

[1] the sale is still on. I have no idea how long until the price goes back to $7.99. Also note that unfortunately Shredder for iPhone and Shredder for iPad are separate apps.
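
(For the curious: capping an engine's strength is usually just a UCI option. Here is a minimal Python sketch, assuming a local Stockfish binary on your PATH; "Skill Level" is Stockfish's built-in 0-20 strength knob, though as described above, how human the dumbed-down play feels varies a lot between engines.)

    import subprocess

    # Start a local Stockfish process and speak plain UCI to it.
    engine = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE, universal_newlines=True)

    def send(cmd):
        engine.stdin.write(cmd + "\n")
        engine.stdin.flush()

    send("uci")
    send("setoption name Skill Level value 5")  # well below full strength
    send("position startpos moves e2e4")
    send("go movetime 1000")                    # think for one second

    for line in engine.stdout:
        if line.startswith("bestmove"):         # skip id/option chatter
            print(line.strip())
            break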

6
sethbannon 2 days ago 1 reply      
For tactics training, I highly recommend "1001 Winning Chess Sacrifices and Combinations" (http://www.amazon.com/1001-Winning-Chess-Sacrifices-Combinat...) and "1001 Brilliant Ways to Checkmate (http://www.amazon.com/Brilliant-Checkmate-Chess-lovers-libra...). Both books are things you can toss in your bag and pick up when you have a spare minute.
7
peterwwillis 2 days ago 2 replies      
> Error establishing a database connection

For the love of god people. Please stop requiring a database connection to serve static content.

8
dfan 1 day ago 0 replies      
This is a good set of suggestions for improving in chess that obviously worked well for the author. When I saw how great his results were, though, I suspected that he must have been pretty young during his improvement phase, and sure enough, he was a teenager. Rapid chess improvement, like language acquisition, is a lot easier when you're young. So if you're an adult, don't be too disappointed if this regimen doesn't shoot you up to 1800 in a year. I know many adults who have been playing the game seriously for decades who never got there.

(Context: I have been an 1800-rated adult myself who recently got up to 2000 with a lot of hard work.)

9
Tloewald 2 days ago 8 replies      
Anyone care to provide similarly practical tips for Go?
10
ibagrak 1 day ago 1 reply      
Good chess player - sign of a great mind. Great chess player - sign of a wasted mind.
11
wyclif 2 days ago 0 replies      
Those are good book recommendations, but I'd add "Pawn Power in Chess" by Hans Kmoch. Reading this book will improve your pawn manipulation and overall game. Excellent illustrations from real games, too. http://www.amazon.com/Pawn-Power-Chess-Dover/dp/0486264866/
12
tmallen 2 days ago 1 reply      
Read Nimzowitsch's "My System": https://en.wikipedia.org/wiki/My_System

Everything you need is in that book. It's not too long, and very readable. It has a very common-sense approach. Look for the 21st century edition at used book stores or your chess club: http://www.amazon.com/My-System-21st-Century-Edition/dp/1880...

chesstempo.com is good for practicing tactics between games.

13
scott_s 1 day ago 0 replies      
His rules for chess psychology actually apply to any one-on-one competition, including sports:

1. Don't ever be afraid of your opponent

2. Fight as hard as you can until the game is over

14
halfcat 19 hours ago 0 replies      
Beyond getting good at chess, fast, if you want to get great at chess, slowly, then GM Rashid Ziatdinov has the instructions you seek.

GM Ziatdinov is unique in that he gives the blueprint that he claims will get anyone to master level (2200+) [1], and it's dead simple. It's much of the same:

1. Study tactics a ton [2]

2. Memorize 300 key positions and games

3. Now you are a master

His definition of "memorize" is that you understand the key position/game instantly and without thinking, the same way you walk or read your native language. 300 doesn't sound like a lot, but to understand each key position to the depth he advises, we're looking at the 10,000 hour rule for all 300 positions.

For comparison, either he or another GM claimed that super-GMs know 1000+ key positions/games, and Magnus Carlsen has said he has memorized 10,000+ games.

[1] http://www.amazon.com/GM-RAM-Essential-Grandmaster-Chess-Kno...

[2] He used to have a few thousand tactics problems on his website. He said to do 1-10 quickly until you could get through them without a mistake. He emphasizes quickly, it's about getting new patterns in your brain, not figuring it out on your own. After 1-10 are perfect, do 11-20 until perfect, then 1-20 until perfect, and repeat until you can do all 1-4000 (or however many). At that point he said you will have the tactical ability of a GM.
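
(The drill schedule in [2] is mechanical enough to write down. A minimal sketch, assuming blocks of 10 and a hypothetical perfect_run(start, end) helper you would supply, which runs problems start..end once and reports whether you made no mistakes:)

    def drill(perfect_run, total=4000, block=10):
        # Expanding-window schedule: master each new block of problems,
        # then re-master everything from problem 1 up to that point.
        for end in range(block, total + 1, block):
            while not perfect_run(end - block + 1, end):
                pass  # repeat the newest block until it is perfect
            while not perfect_run(1, end):
                pass  # then repeat the whole prefix until it is perfect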

15
spot 1 day ago 2 replies      
why study chess when you can play go?

"The rules of go are so elegant, organic and rigorously logical that if intelligent life forms exist elsewhere in the universe they almost certainly play go." -Lasker

16
Matetricks 1 day ago 0 replies      
If anyone's interested in how I studied chess, I wrote a few posts on my blog on chess.com about it: http://www.chess.com/blog/Matetricks

I became a National Master when I was 13 and I played a lot as a kid.

17
navan 1 day ago 0 replies      
I agree with almost everything in that article. I have been playing chess for 20+ years. During that time I have spent several months at a time seriously spending all my free time trying to improve my chess. I have read numerous chess books, many of them multiple times. I have an expert-level rating (USCF) now. I wish someone had told me to concentrate on tactics before going after openings or strategies. Nowadays for any new beginner, once they learn the rules I tell them to practice tactics.

Learning positional strategies and all the fancy openings from the books was great. But it was useless for improving my results when I was beginning. When I analyzed my games with the help of a computer, I found 90% of the games were decided because I or my opponent missed a simple tactic just 1 or 2 moves deep. If this is the case in your games, you should study tactics until you can find all 1-2 move tactics. It sounds easy. But I have seen a number of class A players miss these simple tactics numerous times.

Finally you will understand opening and positional strategies only if you can spot tactics in them. Once you do not find any tactical mistakes in your game you start to play positional chess. You will appreciate making good positional moves when you do not make silly mistakes.

18
mathattack 1 day ago 0 replies      
My impression is that Chess is making a comeback in schools. At least in NYC, many schools have competitive programs.

Here are two programs: http://www.nychesskids.com and http://www.chessintheschools.org.

19
V-2 1 day ago 0 replies      
http://youtu.be/46CwTDLkHA8 - very interesting insights into the aesthetics of chess, by a Scottish grandmaster Jonathan Rowson.

I think it's inspiring for every chess player, or even those who don't play (yet).

Part II: http://youtu.be/f8ErcUCQoUs

20
thewarrior 1 day ago 1 reply      
Is there something like this for programming ?
21
netvarun 1 day ago 1 reply      
If you are looking for a systematic learning program, I would like to suggest Artur Yusupov's training program (9 books that gradually increase in difficulty): http://www.qualitychess.co.uk/docs/14/artur_yusupovs_awardwi...
22
GraffitiTim 2 days ago 0 replies      
This is similar to how I learned and how I teach beginning chess players. Good set of recommendations.
23
mrcactu5 1 day ago 1 reply      

  I'm going to define "good" as the 90th percentile among the player pool you're competing against.
This is a very interesting definition of "good". Appropriate for Chess, but has interesting ramifications elsewhere.
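
(A quick numeric illustration of the definition; the rating pool below is made up, and real rating pools aren't this clean:)

    import numpy as np

    # A hypothetical pool of 10,000 rated players, mean 1500, sd 300.
    ratings = np.random.normal(1500, 300, 10000)

    # "Good" under this definition is the pool's 90th percentile.
    print(np.percentile(ratings, 90))  # roughly 1880 for this pool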

24
vijayr 1 day ago 0 replies      
Are there similar ways to improve in other games, like Scrabble for instance? I don't enjoy Chess much, but I enjoy Scrabble - plus it's a good way to learn a new language
25
k_os 1 day ago 3 replies      
Is there any evidence that becoming good at chess improves any mental aspects or is it just a hard game that you can brag about being able to play?
26
jacobkg 1 day ago 2 replies      
Any recommendations for a good place to play chess online?
27
imahboob 1 day ago 0 replies      
never take shortcuts... I especially don't like the computer analysis part for beginners...
28
kimonos 2 days ago 0 replies      
Wow! Cool! Thanks for sharing!
29
EdwigePelagia 2 days ago 3 replies      
Who is this guy and why should I take him seriously? He doesn't explain why his advice is worth a damn.
25
Mathematica on Raspberry Pi for free raspberrypi.org
298 points by 2pi  5 days ago   180 comments top 18
1
jordigh 5 days ago 8 replies      
Timeo Danaos et dona ferentes... I fear the Greeks even when they bring gifts...

I don't get it... Wolfram is a money-hungry egomaniac. For example, unlike the other big Ma competitors (Maple, Matlab, Magma), not a single source line of Mathematica code is exposed. He's litigious, he labels everything "mine", he endlessly praises himself. He wrote this insulting "don't worry your pretty little head with our source code, it's too complicated for you" piece:

http://reference.wolfram.com/mathematica/tutorial/WhyYouDoNo...

So... gratis Mathematica on Raspbian... what's the catch? Is it to lure us to the cloud?

http://www.wolframcloud.com/

Edit: To clarify, my guess here is that they want to give people a taste of Mathematica on weak hardware in order to lure them to a subscription model on "the cloud" where much more processing power will be available, just like widespread university site-wide licenses and turning a blind eye to student piracy are great marketing strategies.

Is there any evidence to support my wild theories?

2
shared4you 5 days ago 3 replies      
This is why Debian does not recommend Raspberry Pi [0]

> Despite the hype it is a more closed platform than many other things you could buy

Claiming to be open, but still encouraging and endorsing non-open-source software. I was startled to read why R.Pi is unsuitable for education [1].

[0]: https://wiki.debian.org/RaspberryPi

[1]: http://whitequark.org/blog/2012/09/25/why-raspberry-pi-is-un...

3
fidotron 5 days ago 5 replies      
Kudos to Wolfram for this.

I do wonder how much pressure they're feeling from the likes of IPython these days and if that was a motivating factor.

The other part is I don't think I've met a regular Mathematica user that actually likes it, so this may turn out to be a bad idea!

4
nswanberg 5 days ago 0 replies      
You know who else bundled Mathematica for free?

http://en.wikipedia.org/wiki/NeXT#Software_applications

5
tomrod 5 days ago 3 replies      
So all I need for a free mathematica install is to spin a virtual machine with Raspbian?
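
(Roughly, yes, modulo whatever the license says. Below is a sketch of the classic QEMU recipe for booting a Raspbian image, wrapped in Python for consistency with the other snippets here; the kernel and image file names are assumptions, and you need a QEMU-specific kernel build because QEMU emulates a versatilepb board rather than the Pi's actual hardware:)

    import subprocess

    # Boot a Raspbian disk image on an emulated ARM1176/versatilepb board.
    subprocess.check_call([
        "qemu-system-arm", "-M", "versatilepb", "-cpu", "arm1176",
        "-m", "256",               # versatilepb tops out at 256MB of RAM
        "-kernel", "kernel-qemu",  # a QEMU-compatible kernel build
        "-hda", "raspbian.img",
        "-append", "root=/dev/sda2 panic=1",
    ])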
6
phonon 5 days ago 0 replies      
You can read Wolfram's post about it here: http://blog.wolfram.com/2013/11/21/putting-the-wolfram-langu...

Note that it includes a beta version of the new Wolfram Language!

7
ISL 5 days ago 0 replies      
It has been my hope for many years that Wolfram would open-source Mathematica one day. I can think of no better way to ensure his legacy.
8
Create 5 days ago 0 replies      
http://www.reddit.com/r/lisp/comments/1mmm02/screenshot_of_f...

FriCAS/Axiom running on ARM board (ie. Raspberry Pi) on top of Clozure CL (on Ubuntu/GNU/Linux)

9
ics 5 days ago 1 reply      
So how many Pis can you cluster together to equal the performance of a standard i3/5/7 laptop (for non-GPU bound calculations)?
10
thearn4 5 days ago 1 reply      
Cool, but I've been using python+scipy+sympy on a pi for almost a year now. I think Wolfram is a bit behind the curve.
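
(For anyone who hasn't seen it, a tiny taste of what that stack does; the integral here is arbitrary:)

    from sympy import symbols, integrate, sin

    x = symbols('x')
    # Symbolic integration, the bread and butter of a CAS:
    print(integrate(sin(x)**2, x))  # -> x/2 - sin(x)*cos(x)/2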
11
Someone 5 days ago 1 reply      
Does anybody know what license this is released under? For example, can one run it under an emulator? On something more powerful than a Pi? Using an ARM-to-my-CPU jitter (does that work at all, or does Mathematica have its own JIT on board?)

If one managed to hack the binaries and include them in an iOS app, would Wolfram permit that?

12
maxvitek 7 hours ago 0 replies      
Try out an emulation and see if you like it. Here is one on OS X.

http://maxvitek.wordpress.com/2013/11/24/get-mathematica-on-...

I find that it's fine for most things but slow for visualizations (and you need an x server running, not making the most of the emulated hardware).

(Can anyone succeed in using this method and then adding it as a remote kernel to a normal desktop Mathematica session?)

13
SifJar 5 days ago 3 replies      
seems like buying a Raspberry Pi just became an extremely cheap way to get Mathematica, then.
14
misframer 5 days ago 3 replies      
ARMv6 doesn't support hard float, right? Why would you want to do calculations on a Pi?
15
flatfilefan 4 days ago 1 reply      
Mathematica being what it is, what are the science-intensive problems worth solving on a Raspberry Pi without uploading to a server? I'm thinking along the lines of robotics. Something like calculating a ballistic trajectory for an autonomous gun turret, with statistical analysis of precision via a feedback loop. Feedback being the delay of the sound of projectile impact for impact distance, or the visual position of a hit. You get the idea. What can you think of for civil use?
16
mcguire 5 days ago 0 replies      
Interesting marketing strategy: free is good advertising, but (FWIU) the raspi doesn't have the horsepower to compete with Wolfram's actual products.
17
runn1ng 5 days ago 1 reply      
Wasn't Raspberry Pi supposed to be free software?
18
chj 4 days ago 0 replies      
Does it mean a 600MB package on the Raspbian image?
26
25 years ago I hoped we would extend Emacs to do WYSIWG word processing gnu.org
276 points by ics  5 days ago   178 comments top 29
1
tikhonj 5 days ago 9 replies      
With inline LaTeX previews, we're already surprisingly close. In fact, I'd say that going all the way would be almost a step back. WYSIWYG is ultimately not an ideal editing paradigm: it wins in the short term, being easy to learn, but drags you down in the long term.

I've recently started using Quora a bit more. Unlike StackOverflow, they use a WYSIWYG editor. I've found this significantly less convenient than StackOverflow's markdown. Similarly, switching from Word to LaTeX was an improvement for most tasks once I got used to it.

Unfortunately, LaTeX has a bunch of its own shortcomings not related to its non-WYSIWYG nature. For common tasks, I think going from markdown to LaTeX is ideal. Markdown itself is far from perfect, but it's the best compromise I've found, especially with Pandoc's extensions.

So here's my idea for a great emacs-based document editor: markdown with inline math previews coupled with a full live preview to the side. All the necessary modes for this already exist (like whizzy TeX and AucTeX's previews), so it should be much easier to put together than a full WYSIWYG editor. More productive, too.
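
(The markdown-to-LaTeX leg already works today with a single Pandoc call; a sketch, with the file names as assumptions. Pandoc routes the markdown, math included, through a LaTeX engine to produce the PDF:)

    import subprocess

    # Markdown (with math) in, LaTeX-typeset PDF out.
    subprocess.check_call(["pandoc", "notes.md", "-o", "notes.pdf"])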

2
gexla 5 days ago 4 replies      
> I don't know how to use Org mode, and don't know what it does (it seems to do so many things), but if it displays through Emacs then there are many formatting features that it can't display in a WYSIWYG fashion like Libre Office.

I can't believe Stallman doesn't know how to use Org mode. If he is interested in selling people on Emacs, then Org mode is one of the killer features for the presentation. I don't expect him to know something he has no use for, but he should know the most popular components in the Emacs ecosystem. Org mode is one of the only reasons I started using Emacs.

3
Derbasti 5 days ago 2 replies      
Quite simply, explicit markup makes it very easy to see what formatting will be applied to what text.

WYSIWYG only shows you the end result, with no clean way to see how you got there. Was this font introduced because of some theme? Was it applied because of some toolbar button? Is it the result of some template? Was it copied from somewhere else, thereby baking someone else's theming into the copied text?

These are the questions that make WYSIWYG so confusing. These are the things that make explicit markup so straight forward. I don't think you can have WYSIWYG without the confusion or while maintaining the power of explicit markup.

If anything, Markdown or RST or Org provide a compelling middle ground: Markup is still explicit but minimal, and styling tries to come as close as possible to WYSIWYG without sacrificing control.

This, I think, is a far more compelling route to take than WYSIWYG or LaTeX-style explicit markup.

4
ChuckMcM 5 days ago 2 replies      
This is the epitome of the challenge of open source.

RMS whines: "25 years ago I hoped we would extend Emacs to do WYSIWG word processing. That is why we added text properties and variable width fonts. However, more features are still needed to achieve this.

Could people please start working on the features that are needed?"

And he's 100% accurate, it has been 25 years, and there is an open source WYSIWYG word processor, called Libre Office these days, but that isn't what RMS wants. He wants someone to do the work to make his tool of choice into something which can do what he wants to do in it.

A lot of people go this way, and we see several tools that all do variations on the same thing in their own peculiar way (Vive du choix!), but that means it is really, really hard to figure out how to get some things done when each set of tools relies on its own set of other tools.

The nice thing about Cathedrals is that you know what is expected of you :-)

5
Zigurd 5 days ago 2 replies      
Is this as staggeringly naive as it looks?

Some of the people responding are steering the discussion to a layout language with a preview window. I don't know if they are doing it because they prefer to work in such a user-hostile mode (I did this for a book, in Eclipse. Ugh.), or if they think this is a more sane goal.

WYSIWYG has its own issues. Most users of word processors have no idea that paragraphs are objects in an object model, but the command structure only becomes clear when you realize that. Most users just hack at a document until it looks right enough. At the really diabolical end of the spectrum I could show you an Ericsson documentation template that manages to manifest dozens of bugs in Word, lying in wait to eat your previous hour's work. I'm sure you have inherited documents like that.

It's all more or less a kludge, and WYSIWYG never is quite, nor is it real direct manipulation. At best it is something like "moderately friendly visual document CAD, if you get the trick behind the slick appearance."

6
dspillett 5 days ago 1 reply      
I'm hoping this is a joke, otherwise

> Could people please start working on the features that are needed?

sounds far too like the completely detail-less requirements we get through from our clients like "please provide robust MI".

7
melling 5 days ago 4 replies      
Stallman lacks a coherent vision. He has an end goal but he really doesn't have a great plan to get there. It's really frustrating. Emacs could be a lot better. For instance, it has taken forever to get a high-performance Lisp working inside of Emacs. I think Guile is partly there?

Anyway, since we'll all be long dead before his plan starts to work, I think the better solution is to support inexpensive software. For example, I pay for Sublime Text. Recently I bought PixelMator and Sketch, and I'm planning on learning how to use them soon. :-)

Sure it would be great if Free Software ruled but faster change comes with a paid ecosystem. The real problem was that software was expensive. If it's simply inexpensive, we'll get most of what we need.

8
adamnemecek 5 days ago 2 replies      
Out of curiosity, what is Stallman a doctor of? Wikipedia says that he did not finish his Ph.D. Or is that one of the honorary doctorates he received?
9
motters 5 days ago 0 replies      
Personally I think that orgmode is more useful than WYSIWYG
10
Toenex 5 days ago 4 replies      
I've always felt it was this kind of thinking that puts Emacs at odds with the UNIX philosophy of do one thing well. Unless your one thing is everything-you-can-do-with-text.
11
pkaler 5 days ago 1 reply      
The thing about "What You See Is What You Get" is that you have to define 'where you get it'. You have to define 'it'.

The implementation will look vastly different if you define 'where' as a printer, desktop, tablet, mobile phone, wearable device, etc. It sounds like RMS means his desktop/laptop computer. On the upside, you have a user archetype: Richard Stallman. A good product manager would start building up a list of user stories:

  - As Richard Stallman, I want *goal* so that *benefit*.

12
mathattack 5 days ago 0 replies      
"Could people please start working on the features that are needed?"

As someone aware of GNU, but not an active participant, how effective are requests like this? Does "Can people start working on this?" actually get results? I'm curious, as this gets to the heart of why they may have trouble finishing things. (You can't toss money at someone to do the dirty work.)

I'm coming with an open mind, and would like to hear either side of this.

13
daleharvey 5 days ago 6 replies      
I love emacs, and although I have long tried switching to other editors (I am fairly determined to use web-based applications only; Brackets is getting close), I haven't been able to replace it yet.

However, it's the only application I didn't know how to copy and paste in when I started using it, and it's still the only application I use where I don't know how to resize the text.

It would be kinda nice to see people work on those type of things.

14
joosters 5 days ago 1 reply      
Quick! Someone add emacs bindings to MS Word, change the styling a little bit, and we can get Stallman to unknowingly be using the devil's software :)
15
zvrba 5 days ago 3 replies      
Is this Stallman-humor?
17
blisterpeanuts 5 days ago 0 replies      
I like enriched-mode for simple markup purposes, like bolding headers. It's nothing spectacular, but makes an on-screen document that much more readable, and it doesn't add that much bulk to a text file, just a few extra markup directives.

It's easy to use, too. Select the text, ALT-o b = bold, and so forth.

Still, a true WYSIWYG editing mode would be cool once in a while. Although, it's not that much trouble to select text and paste into a nearby LibreOffice window for true formatting.

18
jmount 5 days ago 0 replies      
We've already run the experiment and seen what happens when people try to collaborate with the Gnu Emacs team (jwz, lemacs, xemacs).
19
gavinlynch 5 days ago 1 reply      
I wonder what is preventing Stallman from doing it himself? It is, after all, open source.
20
catmanjan 5 days ago 1 reply      
Neat. Really. I really want to use Emacs, but I find it hard to justify learning all its intricacies when I'm only going to use it at certain times while programming.

This kills two birds with one stone: it gets me off MS Office and into keybind heaven.

21
danielweber 5 days ago 4 replies      
How about making emacs close when I hit the "close window" button on Windows 7?
22
petdance 5 days ago 0 replies      
Dear Mr. Stallman,

I don't think that's how open source works, but hey, can't hurt to ask.

23
jheriko 5 days ago 0 replies      
But we have many great WYSIWYG editors... isn't Emacs specifically for all the people who want a highly configurable, weird and wonderful dev tool, and care little about WYSIWYG because they are writing code etc.?

I don't agree with what those kinds want... but they should be allowed to have it. :)

24
gnuvince 5 days ago 0 replies      
No thank you; I like Emacs as a text editor, and if I need a word processor, there are alternatives out there. No need to make Emacs even more complex.
25
callesgg 5 days ago 0 replies      
How is one supposed to get useful WYSIWYG in the console? You are basically restricted to bold and a few colors.

I guess that could work, but when I want anything useful, like a larger font size, then what?

26
whydo 5 days ago 1 reply      
Why don't we have both: a WYSIWYG Designer, and a Source code editor?

That way you get productive immediately with the designer, yet still have the power to fine-tune every detail using the text editor.

27
minor_nitwit 5 days ago 0 replies      
I'm surprised there's no shortcut for this.
28
dmead 4 days ago 0 replies      
Ironic that he recommends regular phone calls, which use software that is not free, over just using Skype.
29
thatmiddleway 5 days ago 0 replies      
Take a look at AsciiDoc; it's really a great solution.
27
LG TV logging filenames from network folders renney.me
275 points by Amadou  5 days ago   101 comments top 19
1
noonespecial 5 days ago 4 replies      
The implications are troubling. Your TV collects and broadcasts for the permanent record of anyone who can snoop the cleartext (your neighbors, your ISP, whatever the NSA looks like in your country, etc) all the media it can find on your network.

We used to need firewalls at the edges of our home networks to keep bad actors out. Now we need firewalls that point the other direction to keep the bad actors on our networks in.

2
Amadou 5 days ago 8 replies      
Can anyone recommend a consumer-grade router that has a good GUI for tracking outgoing connections in real time and setting up rules to control them?

I am imagining some kind of add-on to DD-WRT or derivatives that will put up a real-time graph of devices on my home network and draw lines representing outgoing TCP and UDP connections while also logging them in a tabular format. Both forms would be clickable to drill down for more details (including session packet captures if enabled) as well as set policies like a per device white-list of acceptable IP addresses to connect with.

I know all of this is possible with individual tools like tcpdump or wireshark and iptables configs, but that is too painful. I'm looking for a robust GUI on top of all that.
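Until such a GUI exists, the underlying data collection can be sketched in a few lines of Python with scapy. This is a sketch only: it assumes scapy is installed, that you run it with capture privileges, and that your LAN is 192.168.1.0/24.

    # Minimal sketch: tally outgoing TCP connections per LAN device.
    from collections import defaultdict
    from scapy.all import IP, TCP, sniff

    connections = defaultdict(set)

    def track(pkt):
        # Only look at TCP SYNs (new outgoing connections).
        if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:
            src, dst = pkt[IP].src, pkt[IP].dst
            if src.startswith("192.168.1."):  # traffic leaving a LAN host
                connections[src].add((dst, pkt[TCP].dport))
                print(f"{src} -> {dst}:{pkt[TCP].dport}")

    sniff(filter="tcp", prn=track, store=False)

A real product would add per-device policies and packet capture on top of a table like `connections`; this only shows how cheap the raw data is to gather.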

3
sdfjkl 5 days ago 3 replies      
Seems it's time to put your closed-source consumer devices into a DMZ, with carefully limited access to both the internet and your home network.
4
cientifico 4 days ago 2 replies      
The only way to fix this, at least partially, is to have open alternatives.

I would love it if, when you buy a TV, you bought just the monitor, without the tuning hardware or the crappy OS, like you do with projectors.

Then you buy a Chromecast, a Raspberry Pi, or something else that you can hack.

I can see 2014/15 bringing a lot of startups creating small devices that connect to bare monitors and tune the internet the same way they tune digital TV.

Once you have competition in that market, you can start thinking about security.

5
munger 5 days ago 2 replies      
Here is the list of domains, from the original DoctorBeet post linked in this story, to block on your router to stop this:

ad.lgappstv.com

yumenetworks.com

smartclip.net

smartclip.com

smartshare.lgtvsdp.com

ibis.lgappstv.com

6
birger 4 days ago 0 replies      
The Dutch website tweakers.net contacted LG and confronted them with this behaviour. They replied that it was a leftover from some functionality that was never fully implemented and that it will be removed in an update.

Most of the commenters there don't buy that story, just like here. Full story (Dutch): http://tweakers.net/nieuws/92747/lg-erkent-versturen-privacy...

7
Wingman4l7 5 days ago 0 replies      
The dangerous precedent set here is inclusion of Terms & Conditions on multipurpose electronic hardware.
8
vijucat 4 days ago 0 replies      
Also, my LG TV's WiFi password text box doesn't accept anything other than letters and numbers, and nothing longer than 8 characters. What is this? A 10th grade programming assignment?!

Having to change my router's password to something insecure just to accommodate LG's retarded software sealed the deal: I will never buy anything LG again.

9
kenrose 5 days ago 0 replies      
Previous discussion of the original DoctorBeet finding:

https://news.ycombinator.com/item?id=6759426

10
nemik 4 days ago 2 replies      
This was only found because LG was stupid enough to use plain HTTP instead of HTTPS. I wonder how many devices use SSL/TLS for this same thing that just haven't been caught yet.
11
nathan_long 4 days ago 1 reply      
"Dear LG,

I've really enjoyed using my LG TV/network informant. I'm wondering whether LG has any other exciting products I could use.

Do you happen to sell a camera that monitors my location? What about a vacuum that phones home with my fingerprints? Or perhaps a washing machine that steals my dreams?

Thanks for developing the products of The Future!"

12
rak 5 days ago 4 replies      
Genuine question: how does one actually go about sniffing traffic from a device like this? This is really interesting stuff.
13
fat0wl 4 days ago 0 replies      
Reminds me of the old Sony rootkit CD stuff.

But I think a lot of these companies know that it would be legally hairy to get into vigilante DRM justice, so instead they just surreptitiously collect data that will let them plot their next move. Maybe that's paranoid, but come on, in this day & age everything is logged. Even if they are serving 404s, it's trivial to log that data anyway (as was pointed out), or maybe it goes straight to server logs and someone in LG analytics says in the future, "Well, that data is there somewhere... we may as well use it."

It's hard for me to imagine someone at a corporation standing up and going "NO! That's violating our users' privacy." They pretty much consider any info they can get to hit their servers to be their property, no questions asked.

14
ris 3 days ago 0 replies      
So who's going to be the first to start sending bogus data to LG's endpoints?

Could do some very fun things to their statistics.

15
nathan_long 4 days ago 0 replies      
What is this even supposed to be doing? Monitoring the user's watching habits is evil but unsurprising. But why do they even want your filenames?
16
salient 4 days ago 0 replies      
Isn't Windows 8.1 logging local filenames, too, thanks to the integrated Bing search and advertising platform, so then it can serve you ads based on your local files?
17
dredwerker 5 days ago 0 replies      
Coming soon to an episode of CSI
18
philthesong 5 days ago 0 replies      
Great work by Samsung!
19
shmerl 5 days ago 0 replies      
Did anyone think DRMed systems can ever be trusted? If you are using one, expect stuff like this by default.
28
33 Questions github.com
272 points by splike  2 days ago   158 comments top 52
1
stbullard 2 days ago 8 replies      
Fun to think about, but in the real world, no question neatly divides people, even the gender one. To quote Reddit's u/tailcalled[1], the exo-software/meatspace world is even less standardized than the software world:

Falsehoods programmers believe about gender: http://www.cscyphers.com/blog/2012/06/28/falsehoods-programm...

Falsehoods programmers believe about names: http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-b...

Falsehoods programmers believe about addresses: http://www.mjt.me.uk/posts/falsehoods-programmers-believe-ab...

Falsehoods programmers believe about time: http://infiniteundo.com/post/25326999628/falsehoods-programm...

More falsehoods programmers believe about time: http://infiniteundo.com/post/25509354022/more-falsehoods-pro...

Falsehoods programmers believe about geography: http://wiesmann.codiferes.net/wordpress/?p=15187&lang=en

[1] http://www.reddit.com/r/programming/comments/1fc147/falsehoo...

2
powrtoch 2 days ago 4 replies      
I don't think this problem is solvable in any elegant form, but it is solvable. You'll just end up with massively conjunctive questions that you can't even hold in your head at once, like "27: Are you a non-practicing Catholic with exactly three children, or an asian owner of a minivan produced between 1998 and 2004 that isn't green, or a licensed boat mechanic with astigmatism, or..." and so on for the next 6 pages.

In short, you can draw categories to include or exclude as precise a number as you like, you just have to be willing to draw really, really complicated boundaries.

3
sz4kerto 2 days ago 0 replies      
> To contribute to the project, open up a pull request and add your question to the list below. All questions are open to debate and discussion.

This is a completely wrong way to approach the problem. Because the questions should all divide the population into two parts the questions should be 'matched' to each other. This approach is a bit like doing a PCA by figuring out one component, then the other, then the rest...

One way to solve this problem is to have a lot of yes/no questions (like a big Karnaugh-table), then everybody would have a long bitstring as his unique ID. Now you need to compress that bitstring -- like the minimization of the Karnaugh-table.

http://en.wikipedia.org/wiki/Karnaugh_map

-- you need to generalize this for N questions (which can be done); then you'd have 33 complex questions like 'is it true that (you live in NA AND you are male) OR (you live in Canada AND you are white AND ...)', and so on and on.

4
gkoberger 2 days ago 4 replies      
Very interesting thought experiment. A few random thoughts:

Reminds me of Panopticlick by the EFF: https://panopticlick.eff.org/

Everyone's ID would change as time passed (if they move, if they age, if they get a sex change, etc).

The best questions for this are inherently "irrelevant", since "relevant" questions tend to be statistically linked. So, a question like "Was the second letter of your first girlfriend's middle name between A and M?" is better than "Were you younger than 20 when you had your first girlfriend?", since we can likely guess the latter from the other statistics.

It's very unlikely every ID will be unique if only asking 33 yes/no questions. I mean, look at two twins living together -- very few questions will be able to differentiate between them.

I think it's possible to do based on a random snapshot in time, however less possible if it's meant to last a lifetime.

I also think the questions exist, but not in a manner that we'd be able to come up with on our own. As in, I believe that a program that knew every detail about every human could create 33 yes/no questions that differentiated people, however I don't believe we could do it ourselves.

I also wonder how many questions would be required to ask non-yes/no questions and get a completely unique ID for everyone. For example, questions like "weight? languages spoken? birth place?".

5
Laremere 2 days ago 1 reply      
Assuming that the person doesn't necessarily need to know their answer (which is important for babies anyways) the answer is trivial. The first question would be "Given that we ordered all humans in order of the time of their birth, would the 1st bit of your position in the ordering be 1?", continue the other 32 questions with the remaining 32 bits.
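In code, the trick is just reading off the bits of each person's birth-order index. A sketch, assuming an oracle that hands each person their position in the birth ordering:

    # Laremere's scheme: question q asks for bit q of your birth-order index.
    def answer(birth_index: int, q: int) -> bool:
        """'Is bit q of your position in the birth ordering 1?'"""
        return bool((birth_index >> q) & 1)

    def unique_id(birth_index: int) -> str:
        return "".join("1" if answer(birth_index, q) else "0" for q in range(33))

    print(unique_id(6_999_999_999))  # any index below 2^33 maps to a distinct ID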
6
tinco 2 days ago 0 replies      
33 questions is essentially a Shannon-optimal encoding of identifying information about human beings.

That means to come up with them is identical to finding an optimal compression of identifying data.

Necessarily, as the second question already implies, for this question to correctly divide the population in half, you would have to group large amounts of small populations together, resulting in very long questions.

For example, if you'd like to make another geographical question that's independent of the second one, it would have to divide in half every population of the 6 countries you mentioned. The next question would necessarily have to divide those 12 again.

By the way, the first question you ask is already suboptimal when combined with the second question, as those countries together probably do not have a clean 50% male/female split. (if they do, you should really explain that as it's not obvious)

7
ZirconCode 2 days ago 4 replies      
Just use:

- Birthday (19~ bits)

- Rough Location (remaining bits)

And base the questions around those two: for example, were you born on the 1st-15th? Does the city you were born in start with the letters A-K? This part would be an exercise in statistics, I would think.

edit: And one bit for whether you were the first-born of two identical twins =p

8
psuter 2 days ago 1 reply      
Interesting exercise, which I'd call impossible in the given form. Imagine someone magically came up with 32 statistically independent binary indicators. Now you need to come up with a 33rd question Q such that if you pick any two people who agree on the first 32 bits, that single question must distinguish them. Sounds hard.
9
knowtheory 2 days ago 0 replies      
No one here seems to have mentioned Hunch (http://en.wikipedia.org/wiki/Hunch_(website) ).

Picking discrete questions like this is equivalent to building a decision tree for humanity. This is actually something that could be approached as an engineering problem (and there are mechanisms for optimizing decision trees).

The problem still remains in the face of both the technological capabilities of decision trees, and practical implementations like Hunch.com, that decision trees are reductive and discrete. Reality is neither discrete nor reductive.

It may very well be the case that there is a set of questions that could uniquely identify humans, but the insight that could be drawn from those questions might be essentially pointless.

For example:

* Were you born in the northern hemisphere?

* Were you born on an even numbered year in the Gregorian calendar?

* Is the country of your birth governed through a representative system?
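A toy version of that "engineering problem" framing, greedily picking whichever yes/no attribute best bisects the remaining group (the people and attributes below are entirely hypothetical; a real decision-tree library would optimize this properly):

    # Toy greedy splitter: at each step pick the binary attribute whose
    # yes/no split of the remaining group is closest to 50/50.
    people = [
        {"north_hemisphere": True,  "even_birth_year": False, "representative_gov": True},
        {"north_hemisphere": True,  "even_birth_year": True,  "representative_gov": False},
        {"north_hemisphere": False, "even_birth_year": True,  "representative_gov": True},
        {"north_hemisphere": False, "even_birth_year": False, "representative_gov": False},
    ]

    def best_question(group, attrs):
        # The attribute whose 'yes' count is closest to half the group.
        return min(attrs, key=lambda a: abs(sum(p[a] for p in group) - len(group) / 2))

    def fingerprint(person, group, attrs):
        bits, attrs = [], list(attrs)
        while attrs and len(group) > 1:
            q = best_question(group, attrs)
            attrs.remove(q)
            bits.append("1" if person[q] else "0")
            group = [p for p in group if p[q] == person[q]]
        return "".join(bits)

    for p in people:
        print(fingerprint(p, people, p.keys()))  # four distinct bit strings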

10
brownbat 2 days ago 1 reply      
This project assumes we can know things that are not really knowable for everyone. It starts with gender and birthplace, both tricky questions in some situations.

So maybe we get to assume we have some oracle that helps us simplify the hard questions.

At that stage, it's easy. Begin with, "Assume we build a list of people sorted by time of birth (with some arbitrary tiebreakers, like proximity of birthplace to Barbados, or darkness of hair color...)."

Question 1: Are you on the top half or bottom half of this list?

Question 2: Are you on the top quarter or bottom quarter of the half?

Question 3: ...

11
abentspoon 2 days ago 2 replies      
It's not enough to find 33 independent questions that evenly split the world's population.

An optimal, though inelegant solution to that goal might look something like this:

"Is the {1..33}th bit of sha1(name : location : date of birth) 1?".

Clearly you'll have tons of collisions with that solution, as you would have with any solution using 33 independent questions.

To uniquely identify people, we'd either need to use more bits, or look very closely at the population and derive very specific questions.
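A sketch of that inelegant optimum in Python (hashlib is in the standard library; the field values here are made up):

    # Sketch: answer question n with bit n of sha1("name:location:dob").
    import hashlib

    def thirty_three_bits(name: str, location: str, dob: str) -> str:
        digest = hashlib.sha1(f"{name}:{location}:{dob}".encode()).digest()
        bits = "".join(f"{b:08b}" for b in digest)
        return bits[:33]  # question n asks: "is bit n a 1?"

    print(thirty_three_bits("Alice Example", "Reykjavik", "1984-02-29"))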

12
mcphilip 2 days ago 1 reply      
I think first you have to show a question exists that effectively separates identical twins before you spend much time working on broad questions like gender and geography.
13
Zarathust 2 days ago 2 replies      
I don't think it is possible with exactly 33 questions; it will probably require more than that. Binary numbers have the property of doubling the number of representable values with every new bit: if you already have 7 bits and add an 8th, you can represent 128 numbers with that bit off and 128 more with it on.

To properly mimic this property with yes/no questions, you will have to come up with questions that divide the whole Earth's population equally AT EVERY NEW QUESTION. Even the most obvious one, "are you (fe)male?", is slightly biased toward men (according to Wikipedia). Every question that skews your 50/50 means you'll need another question beyond 33 to catch up.
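The arithmetic behind the 33-question figure, and what a skewed split costs, is quick to check (the population figure is a rough 2013 estimate):

    import math

    population = 7.1e9                 # rough 2013 world population
    print(math.log2(population))       # ~32.7, so 33 perfectly balanced bits suffice

    # A biased question yields less than one full bit of information:
    p = 0.51                           # e.g. a 51/49 split
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    print(entropy)                     # ~0.9997 bits per answer instead of 1.0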

14
tehwebguy 2 days ago 2 replies      
This reminds me of Akinator: http://en.akinator.com

It's a little spammy nowadays, but it's had enough input that it seems pretty amazingly accurate at "guessing" what / who you are thinking of in ~ 25 questions.

15
fmax30 2 days ago 0 replies      
So you want to create a data set with entropy = 1. Think of this in terms of a hash function: you want a hash which only has an address space of 33 bits, something like H(Alice) = 0x12321 (H is a function which generates 0x12321 to store the data of Alice).

Doesn't this sound like perfect hashing with limited memory? I don't really think this can be done with such memory constraints. Even now we cannot produce a perfect hash function that uses 1 bit/key; the theoretical best is 1.44 bits/key, and the practical best so far is 2.5 bits per key. [1]

This may just be possible without the memory constraint, that is, answering N questions which uniquely identify you (where N > 48).

[1] http://en.wikipedia.org/wiki/Perfect_hash_function#Minimal_p...

16
ruswick 1 day ago 0 replies      
This is a really cool concept, but one that is totally impossible. In the actual world, few things are truly independent. Even if you could find 33 binary questions that did not correlate with each other at all, you still run the risk of having multiple people yield the same 33 answers.

Just because two things aren't statistically linked does not mean that they will never overlap.

17
rattray 2 days ago 2 replies      
This is a fun exercise, but as others have pointed out likely impossible in its current form.

We don't have true constraints on space though; why limit to 33 bits? How could we still provide a meaningful UUID to each person?

A UUID based on time and location of birth might be more feasible than any other approach, since neither will change and it's the least likely to be ambiguous. Capturing UTC at the time of cutting or otherwise removing the umbilical cord could be one way of choosing as precise, non-debatable a timestamp as any. Adding lat/long and, say, the first byte of the UTF-8 character of the mother's name (or an aspect of the mother's UUID?) could get you the rest of the way there.

Of course, this falls over in places without access to precise timing and geolocation.

18
aria 2 days ago 1 reply      
Question 1: What is the first bit in your unique 33-bit string?Question 2: What is the second bit in your unique 33-bit string?...
19
jloughry 2 days ago 2 replies      
These need to be questions that are invariant over a lifetime:

- Were you born in the northern hemisphere or southern?

2^33 is sufficient for those alive now, but the human population is a dynamic function. Set a bit when the person dies?

20
ramanujam 2 days ago 0 replies      
On a related note, this has a very interesting significance in the world of privacy and anonymous tracking.

http://33bits.org/about/

21
gradys 2 days ago 0 replies      
I thought it would be more plausible and probably more interesting to do this in maybe 40 questions. To do this in 33, as several others have pointed out, would require 33 questions that each almost perfectly bisect the population and are almost perfectly independent of each other.

With 40 or 45, we could relax that a bit and use questions that are actually meaningful. Two people who are within a few bits of each other would actually be similar in ways we care about, unlike two people who are similar because their transliterated last names both appear in the last half of the alphabet.

22
jloughry 2 days ago 0 replies      
Added pull requests to extend the address space from 33 bits to 36 bits to accommodate our revered ancestors, and a bit to indicate liveness.

TODO: don't implement zombies or ghosts at this time (YAGNI principle).

23
dkokelley 2 days ago 0 replies      
Wouldn't the best way to do this be to ask questions related to genetic markers? You require 33 yes/no questions, each of which divides the population in half and has near-uniform distribution otherwise (each half has no relationship to the other questions).

Are there 33 genetic markers that each has no correlation on the presence of the others?

24
ealexhudson 2 days ago 0 replies      
A useful question might revolve around language or concepts a person knows, but this becomes a lot more difficult if the questioner doesn't know which languages or concepts a person wouldn't understand (and therefore whether they could even answer the question); and if they do know, that is effectively a priori knowledge.
25
cbr 1 day ago 0 replies      
A boring solution:

Question n: Consider the number of your birth out of all people currently alive. When you divide it by 2^(n-1), is the quotient odd?

26
vacri 2 days ago 1 reply      
How many questions would you need to differentiate between identical twins, particularly if they live and work together? Take identical twin sons of a subsistence farmer - they live together, work together on the same things, know the same people, have the same genetic makeup, and whichever was the first twin born may not have been recorded. You could ask their names, but that's not a yes/no question.

Or even twins who are still babies, no work required? Some cultures wouldn't even have named them yet.

27
powertower 2 days ago 0 replies      
Another problem not mentioned: the questions must be about things whose answers do not change over time. Otherwise the ID is no good.
28
benstein 2 days ago 0 replies      
Do the answers have to be knowable? Time-independent?

For example, "Are you below the median age at this exact second?" The answer is not knowable and changes by the second, but it does give you an exact 50/50 split.

Repeat N times for each split and we're getting very, very close.

29
Strilanc 1 day ago 0 replies      
An easy way to construct the questions is to ask for increasingly precise time and location of birth.

There will be corner cases, but then so does asking if someone is male.

30
DigitalSea 2 days ago 0 replies      
This seems like it would be a lot of work; the questions would have to be intensely specific. It might be possible, but without excluding people because they get lumped into a group, 33 questions might not be enough to uniquely identify everyone in the world.
31
anilshanbhag 2 days ago 1 reply      
33 is not the constraint. If we increased the limit to 50, and those 50 questions could fingerprint an individual, that would be really interesting.

Some hard problems: 1. Distinguishing twins. 2. Using characters in names, since some people (e.g. Chinese speakers) have non-ASCII names.

32
stevewilhelm 2 days ago 0 replies      
Is the intent of this exercise to build a unique identifier that the individual could reproduce over the course of their life, or does it just uniquely identify them at the time they answered the questions?

I ask because questions like number of siblings, favorite movie, etc. would change over time.

33
k__ 1 day ago 0 replies      
Isn't this solvable to a degree by just asking a large number of yes/no questions of a large number of people, and then removing all the questions that don't identify people any further?
34
deletes 2 days ago 1 reply      
Seems impossible to me. For example, what question would separate two identical twins (identical in DNA and time of birth)?

And even if you find such a question, there is no way it would divide the population in half.

35
S4M 2 days ago 0 replies      
Do the questions have to be constant over time? If not, it can trivially be solved by asking "Were you born before or after time t?" 33 times, where t is the median date of birth of your population. You just need to recompute t 33 times (and know the date of birth of every single person in the world).
36
tlongren 2 days ago 2 replies      
"As an example, having the questions "Are you male?" and "Are you below the median age?" will not work "

First question is "Are you male?". Made me laugh.

37
fela 2 days ago 1 reply      
Even if the questions were perfect (each question splitting the population into two exact halves, and all questions totally independent of each other), so that the algorithm gave each person a perfectly random number, the birthday paradox [1] tells us that even for just sqrt(2^33) =~ 93k people we would have roughly 50% probability of a collision. To work we would need more bits. (Either that, or create questions that are _not_ independent, crafted to make sure each person gets a different number.)

[1] http://en.wikipedia.org/wiki/Birthday_problem
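The approximation behind those numbers, sketched with the standard 1 - exp(-n(n-1)/2N) estimate (the sqrt(N) rule of thumb lands within a small constant factor of the exact 50% point):

    import math

    N = 2 ** 33  # number of distinct 33-bit IDs

    def p_collision(n: int) -> float:
        """Birthday-paradox approximation for n people drawing from N IDs."""
        return 1 - math.exp(-n * (n - 1) / (2 * N))

    print(p_collision(92_682))         # sqrt(2^33): ~0.39 already
    print(p_collision(110_000))        # ~0.51, even odds just past 100k people
    print(p_collision(7_100_000_000))  # ~1.0, a collision is certain at world scale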

38
yiransheng 2 days ago 0 replies      
If anyone is interested in seeing such an application in a fictional setting, I suggest the anime Death Note, if nothing else for its entertainment value. For those who are familiar with the story, the questions L asked in order to narrow down Kira suspects to a limited demographic in a small region of Japan, among billions of candidates, were some good ones. A good article that analyzes the plot from an information theory perspective: http://www.gwern.net/Death%20Note%20Anonymity
39
fat0wl 2 days ago 1 reply      
The 33-question issue is a tough one for sure.

I'm instead left wondering how many extra questions (35 bits? 36 bits?) it would have to be expanded to in order to produce unique results but without having to be particularly clever in producing the questions. I bet it wouldn't take as many extra as one might be inclined to think.

40
tehwalrus 2 days ago 0 replies      
The set of questions that would do this is probably a list of genetic questions.

"do you have the mumble allele?" etc.

41
loganu 2 days ago 0 replies      
Could you not have way more than 33 questions created (maybe a couple hundred), but change which questions are asked based on previous answers? Use the previous answers to determine the strongest next question to ask.

If an early answer states the candidate lives in the northern hemisphere, there's no point in asking whether they live in a landlocked African country... or whatever much more complicated questions could arise.

42
obilgic 2 days ago 0 replies      
Possible. You need 33 answers but more than 33 questions.
43
aleprok 2 days ago 2 replies      
If the goal is to have questions which can be answered only with yes or no, I don't think asking for the person's location is a good idea, because there would be as many questions as there are locations.

"Do you live in China, India, The United States, Indonesia, Brazil or Pakistan?" is not a good question.

44
jv22222 2 days ago 0 replies      
Less than half the population will be able to "read" the questions due to not speaking English...
45
dinkumthinkum 2 days ago 0 replies      
It's an interesting idea... but no, I don't think it is possible in any way that doesn't turn the list into a set of questions about genetics or DNA.
46
unfamiliar 2 days ago 0 replies      
"Are you male?" will not split the population 50/50. One group will be slightly larger, and you then have only 32 questions to subdivide this larger group into further categories, which is impossible.

This is not possible unless the categories _precisely_ bisect the group each time.

47
ye 2 days ago 1 reply      
First 33 bits of SHA512(your DNA)

First 33 bits of SHA512(your 3D GPS location)

48
jayd16 2 days ago 0 replies      
I bet you could make a lot of progress by dividing GPS coordinates evenly by population. Simple binary search by primary residence and then leave some space for division within a household.
49
josscrowcroft 2 days ago 0 replies      
People have so much time on their hands.
50
jpalioto 2 days ago 0 replies      
Fun version of that ...

http://en.akinator.com/

51
IsNotMyIp 2 days ago 0 replies      
Do you speak English as your main language? Could that be a good question too? What do you think?
52
Houshalter 2 days ago 1 reply      
This is really interesting actually. Your entire "uniqueness" can be summed up in 33 yes or no questions, in theory.
29
Newegg trial: Crypto legend takes the stand, goes for knockout patent punch arstechnica.com
270 points by Suraj-Sun  1 day ago   86 comments top 11
1
cperciva 1 day ago 5 replies      
Lawyers can be a pain at times, but sometimes they set up punchlines perfectly:

"We've heard a good bit in this courtroom about public key encryption," said Albright. "Are you familiar with that?

"Yes, I am," said Diffie, in what surely qualified as the biggest understatement of the trial.

"And how is it that you're familiar with public key encryption?"

"I invented it."

2
SwellJoe 1 day ago 1 reply      
Finally, we're starting to see some people stand up to trolls, and judges and juries are beginning to understand what these people are.

I don't follow patent troll cases too closely because they make me so angry, especially given that so many of them end up with the troll winning, or at least going right back to doing what they were doing after losing in court in a specific instance, but not losing the patent (or they just continue attacking people with other patents in their portfolio in the rare event the patent is invalidated). So, when I see a positive story...and this one looks pretty positive to me.

I worry vaguely that the judge or jury might not recognize the vast difference between Diffie and some unknown asshole who makes his living testifying in court as an "expert witness" in patent cases.

3
ChuckMcM 1 day ago 1 reply      
Awesome stuff, I really hope they nail this one dead.

That said, I'm pleased that we've finally gotten a number of jurists who are better able to navigate the complexity of the Internet, programming, and 'process patents' when they involve Internet programming. I also find it remarkable that the trolls have trained up a specific jurisdiction by overusing it and are now at a disadvantage there. No doubt they will start looking for somewhere else to file soon, but the effect will be the same as opinions and case law flow out of the East Texas courts.

I am perhaps an optimist, but I believe we have turned the corner on stupid patents. And more and more of them will be brought down and fewer of them will be of use to trolls. With luck in another 10 years people will be able to talk about the 'bad old days' of patent trolls as being behind them.

4
rurounijones 1 day ago 5 replies      
Wow, talk about ad hominem attacks by the lawyer: "You do not have a PhD," "you are not an academic," etc.

I am surprised he was allowed to get away with it.

5
moocowduckquack 1 day ago 0 replies      
I thought I'd go and have a looksee who Dr. Rhyne, the opposing expert witness, is and I found this:

Dr. Rhyne, who will lead the three-day Boot Camp, is a broadly experienced expert witness who has provided in-courtroom testimony in over three dozen federal patent cases and ITC hearings over the past thirty years. He will be joined by legal and technical staff members from Patent Calls, as well as selected guest lecturers.

Carefully designed to be a program that will be Conducted by Experts for Experts, Boot Camp participants will learn that serving as an expert witness is a unique and productive way to use their technical knowledge. "By their nature, patent trials are highly charged competitive environments for companies and attorneys who have a great deal at stake," Dr. Rhyne has explained, "and an expert witness is often a key part of that process. I am pleased to have this opportunity to share my experience as a witness with others."

Patent Calls Patent Expert Witness Boot Camp will provide an opportunity for individuals who have some patent expert witness experience or who are aspiring to become an expert witness to benefit from both intensive instruction and interactive training that are intended to increase their effectiveness and appeal to potential clients. Participants will receive a highly integrated combination of classroom instruction, team exercises, and simulated examination and testimony that will serve to challenge them and thus maximize their learning experiences.

http://www.businesswire.com/news/home/20101115007716/en/Pate...

Which contrasts nicely with the questions put to Diffie about his status as an expert witness:

Fenster noted that while Diffie was testifying in court for the first time, he had other expert witness work lined up. His rate varies from $500 to $600 per hour, and it's $700 for testifying in court.

"Your agent helps you to get expert witness jobs, is that right?"

"Actually, no," said Diffie. "My agent handles the arrangements with my clients. All of the jobs have come in directly through me."

6
Ar-Curunir 1 day ago 4 replies      
Whitfield Diffie looks a lot like Keanu Reeves.

Also weren't most of the questions that the TQP lawyer asked ad hominem? I understand that he asked those to discredit Diffie, but they still seemed very disrespectful. Almost anyone in the crypto field would back up Diffie's reputation, so that was a rather stupid move on the part of the TQP lawyer IMO.

7
tzakrajs 1 day ago 0 replies      
Read the book "Crypto" if you haven't already and want more background on how public key encryption got started and how it got to where it is today. It was suggested previously on HN, and I am so glad I gave it a read.
8
pera 1 day ago 7 replies      
Does anyone know of a case where patents were used for the good of humankind?
9
walid 1 day ago 1 reply      
The funny thing is that the lawyer tried to discredit Diffie by presenting prior art to the prior art, shifting focus away from the case and onto attacking the witness. Kind of a moot move.
10
wnevets 1 day ago 0 replies      
TQP should be forced to pay back all the money they stole with this silly patent
11
BorisMelnik 1 day ago 1 reply      
Is Whitfield wearing a cape?
30
Vote Now: Who Should Be Time's Person of the Year? Edward Snowden time.com
265 points by ghosh  17 hours ago   127 comments top 32
1
lignuist 16 hours ago 4 replies      
2
baby 15 hours ago 1 reply      
I'm wondering why some people are on the list, like Miley Cyrus or the guy behind Netflix. If those are, why not others? I'd personally vote for the head of Naughty Dog and The Last of Us over Netflix.

PS: thinking about it, I'd vote for Satoshi. We should do a HN POY.

PS2 : done! https://news.ycombinator.com/item?id=6800515

3
chollida1 14 hours ago 4 replies      
Can't vote as I have to login with Twitter or Facebook to vote.

Why?

5
kolbe 13 hours ago 2 replies      
This is one of the more disheartening polls I've seen. It speaks to a new, disturbing dynamic in our social order where people do not defend their beliefs with any sort of force or passion.

I had no clue so many people shared in my belief that Snowden has done something great. I figured the media had corrupted most of the people into thinking he'd done something deplorable, because no one has been taking to the streets or to the polls or to the anything to demand changes based on his revelations.

Past generations would have, but that's not the case today. We think he's great. We just don't care to actually support him.

Historically, I feel like there was a notion that someone was "too popular to execute." But, even though the vast majority appear to support Snowden so much that they declare him POY, I don't think we'd do a thing if the US raided his home in Russia and put a bullet in him. We'd be mad, and we'd write blog posts about it, and maybe some people would DDoS attack a website or send a bunch of pizzas to John Kerry, but there would be no political turnover. There would be no justice on Snowden's behalf. At best, it would be like Guantanamo, where some new POTUS candidate promises change so we elect him, then does absolutely nothing. And we'd happily just not care.

6
user24 16 hours ago 0 replies      
Are these polls typically good indicators of TIME's official choice? Or is this going to be a way of making Snowden "the people's POY" without TIME officially making it so?
7
beaker52 16 hours ago 4 replies      
Government: Snowden is a terrorist.

People: Snowden is a hero.

What a great way to bring this difference of opinion to the fore.

8
brokenparser 17 hours ago 3 replies      
Poll requires Facebook

0/10 would not vote again

9
thesadman 16 hours ago 1 reply      
Ugh, do people still waste their time with TIME magazine and their "storied" polls? Sorry, but it is the equivalent of a tabloid, because for the last 10 years the quality of the articles has been continuously dropping. To see it linked here on HN is unfortunate. Let's spend our time on more interesting topics.
10
benmarks 13 hours ago 1 reply      
"Authorize Poptip to use your account?

"This application will be able to:* Read Tweets from your timeline.* See who you follow, and follow new people.* Update your profile.* Post Tweets for you."

The hell.

11
trekky1700 10 hours ago 0 replies      
I'd nominate Bill Gates; he's done more in the past year to support and make actual change and improve the world than anyone on the list. Then again, not exactly newsworthy.
12
beaker52 14 hours ago 0 replies      
2012: Anonymous got the people's vote, but Obama was chosen.
13
ChrisArchitect 13 hours ago 0 replies      
Putin is a good one, but I feel like he wasn't a top newsmaker this year; further, he will most likely be a newsmaker next year (look at current happenings in Ukraine, continued things like that, and then the mother of all craziness that's going to be the Sochi Olympics).

Snowden seems like the best bet for sheer impact as a newsmaker, though I'm not sure how 'international' TIME is these days, and perhaps Snowden is only really known in the West.

14
beshrkayali 12 hours ago 1 reply      
I may not be correct, but Edward said specifically at the beginning of all of this: Please don't make this about me! Correct?
15
ck2 15 hours ago 1 reply      
As much as I appreciate Edward Snowden and what he did, Malala Yousafzai deserves even more attention for her deeds.
16
knodi 12 hours ago 1 reply      
I don't think Edward Snowden should be Person of the Year. Not because I don't think he did something important, because he did; I truly think what he did was the right thing. But he needs to stand trial in the US so that we the people can see how extreme our government has become.

He needs to be like Batman: sacrifice his mind, body and freedom for a cause that people will remember him for.

But he's in Russia and he's never coming back to the US, so this will be an ongoing thing for years, even decades, where no one but a few people will remember him and our blight.

17
Hovertruck 11 hours ago 0 replies      
Thanks for the helpful error message, Time.

   Error (api.go:209) forerunner/api.getPollById: exception: can't connect to new replica set master [ec2-54-225-59-0.compute-1.amazonaws.com:27017], err: couldn't connect to server ec2-54-225-59-0.compute-1.amazonaws.com:27017

18
Tosh108 4 hours ago 0 replies      
The numbers are completely different from this afternoon, seems like some people are hacking the votes for fun: http://www.dailydot.com/news/time-person-of-the-year-miley-c...
19
mkohlmyr 16 hours ago 0 replies      
http://i.imgur.com/jv9t4X0.png

I'd say those numbers sound about right...

20
mydpy 5 hours ago 2 replies      
Miley Cyrus has more votes than Edward Snowden. How can that be?
21
wehadfun 11 hours ago 0 replies      
I wonder how Zimmerman and Trayvon Martin did not get on this list.
22
rdl 15 hours ago 0 replies      
I don't see how this is even a question capable of admitting debate (although maybe Poitras and Greenwald could get in.)
23
esamek 14 hours ago 1 reply      
Chrome won't let me vote... throwing shitloads of errors in the console:

    GPT LOADED
    EVENT LISTENER EXECUTED
    load listener, textContent
    GPT LOADED
    Blocked a frame with origin "http://poy.time.com" from accessing a frame with origin "http://tags.bluekai.com". Protocols, domains, and ports must match.
    GPT LOADED
    EVENT LISTENER EXECUTED
    load listener, textContent
    Blocked a frame with origin "http://googleads.g.doubleclick.net" from accessing a frame with origin "http://poy.time.com". Protocols, domains, and ports must match.
    EVENT LISTENER EXECUTED
    old height: old width: scroll height: 55 scroll width: 275
    new height: new width:
    Blocked a frame with origin "http://poy.time.com" from accessing a frame with origin "http://tags.bluekai.com". Protocols, domains, and ports must match.
    Uncaught SecurityError: An attempt was made to break through the security policy of the user agent. 93c0d7430d30e77dc6a5f0275dfcb679.js:48
    Uncaught TypeError: Object #<Page> has no method 'init' 528c2242c903451bee0013d3:812
    Blocked a frame with origin "http://poy.time.com" from accessing a frame with origin "http://tags.bluekai.com". Protocols, domains, and ports must match.
    Invalid App Id: Must be a number or numeric string representing the application id. all.js:56
    The "fb-root" div has not been created, auto-creating all.js:56
    FB.getLoginStatus() called before calling FB.init(). all.js:56
    (2) Blocked a frame with origin "http://poy.time.com" from accessing a frame with origin "http://tags.bluekai.com". Protocols, domains, and ports must match.
    Posted 2 errors to errorception.com 50eb3228903069e001000036.js:12
    Blocked a frame with origin "http://poy.time.com" from accessing a frame with origin "http://tags.bluekai.com". Protocols, domains, and ports must match.

24
bcRIPster 10 hours ago 0 replies      
...and provide Time marketing info by giving them access to your Facebook data. Not happening, thank you very much.
25
Glyptodon 11 hours ago 0 replies      
Their voting system provider keeps failing.
26
Kiro 16 hours ago 0 replies      
I voted for LeBron James.
27
avisk 14 hours ago 0 replies      
The irony is that to vote for Snowden, I need to log in using Facebook or Twitter :), so that time.com can track me.
28
jotm 10 hours ago 0 replies      
Voted (with my "fake" FB account cause I don't have a real one).
29
deadly 49 minutes ago 0 replies      
Narendra Modi
30
nitesh887 9 hours ago 0 replies      
Narendra Modi.
31
Nilmay 13 hours ago 1 reply      
The future PM candidate of India, Shree Narendra Modi ji, deserves to be the Time Person of the Year! Cos he thoroughly deserves it!
32
PixelPusher 16 hours ago 4 replies      
No way, I don't vote for traitors. All his US supporters are a bunch of Benedict Arnolds.

BTW, just voted for Miley. She definitely made my year.
