Hacker News with inline top comments - 18 Apr 2014
1
Ubuntu 14.04 LTS ubuntu.com
144 points by jacklight  4 hours ago   64 comments top 17
1
Zardoz84 37 minutes ago 1 reply      
I really recommend trying out the Ubuntu KDE flavour (Kubuntu). I really like it as being more usable and configurable than Unity and Gnome.
2
sandGorgon 2 hours ago 4 replies      
I really recommend trying out the Ubuntu Gnome flavor [1] - I really like it as being more usable than Unity.

Plus https://extensions.gnome.org/ is incredible.

P.S. - [2] this is my personal optimization script for a lean and developer friendly Gnome Ubuntu 14.04 install. YMMV.

[1] https://wiki.ubuntu.com/UbuntuGNOME/GetUbuntuGNOME

[2] https://gist.github.com/sandys/6030823#file-lean_install_ubu...

3
josteink 34 minutes ago 0 replies      
As someone who just installed the beta 2 on my laptop a few days ago, I have to say I'm impressed.

This thing cold boots on my non-UEFI laptop in 4-5 seconds. That's at the same level as Windows 8.1, which also impressed me greatly.

Now if they can only get systemd and the "online in 50ms" updates implemented, this thing will be super-sweet.

4
pan69 2 hours ago 3 replies      
I just took XUbuntu for a spin. It's just as great as the 13.10 release. If you're a GNOME refugee and looking for an excellent desktop then I can't recommend XUbuntu enough.
5
crb 2 hours ago 0 replies      
6
NathanOsullivan 2 hours ago 5 replies      
I really don't get the intention with the default visual style Ubuntu has settled on. I'm sure a lot of work has gone into it but it's just not attractive.

I previously thought it was growing pains and they would eventually land on a great style that was still "theirs", but at this point it feels like a lost cause. Personally I've stopped recommending Ubuntu on the desktop because I already know what the initial reaction to a fresh install is going to be.

7
cies 1 hour ago 1 reply      
It seems I may use this opportunity to recommend an Ubuntu derivative. I'm very happy with Netrunner-OS[1]. It comes with KDE and gets "sane defaults" right. AdBlocker, YT-downloaders, codecs, etc. -- all pre-installed.

And they also gave it some thought to make sure it looks good out of the box.

All Unity-refuge-seeking, but otherwise Ubuntu (Debian++) lovers should have a look at it. :)

1: http://www.netrunner-os.com/

8
chmike 27 minutes ago 0 replies      
Failed to install here on my 13.10 (French) version because of an error related to an invalid ASCII code. I'll wait before trying again.
9
fotcorn 2 hours ago 0 replies      
Droplets on DigitalOcean are already available with 14.04 LTS. Now I only need time to upgrade our servers...
10
gkya 2 hours ago 0 replies      
Wouldn't submitting a link to the release notes rather than the desktop download page be better?

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

11
spindritf 2 hours ago 0 replies      
I just upgraded my personal server from 13.10 and it was pretty painless. Although many third party repos are not yet ready for Trusty.
12
tatqx 2 hours ago 0 replies      
I love how the rounded window corners are now (finally!) properly anti-aliased [1].

http://www.webupd8.org/2014/01/unity-7-to-get-new-window-dec...

13
walshemj 1 hour ago 0 replies      
So have they put back the key bits of X Windows they removed in a previous LTS? I was not happy after setting up my small Hadoop home lab to find that some idiot PFY had removed the functionality that made remote login possible!
14
ing33k 2 hours ago 2 replies      
Just upgraded from 12.04 LTS to 14.04. Already liking the locally integrated menus.
15
troels 2 hours ago 1 reply      
Anyone have a guess as to when there'll be an official image on AWS?
16
shacharz 1 hour ago 3 replies      
Why is the download so slow? Why not add a torrent link?
17
samolang 29 minutes ago 2 replies      
The first comment I saw said it was impressive. The second said I should use Gnome instead of Unity. The third said I should try something called localepurge for a leaner install. The fourth said KDE was better than Gnome and Unity. The fifth recommended Xubuntu over Gnome.

This is the sort of fragmentation that makes popular adoption difficult, but is also what makes Linux awesome.

2
Eccentric axe uses physics to make splitting wood a lot easier. boingboing.net
116 points by sinned  5 hours ago   39 comments top 18
1
linhat 1 hour ago 1 reply      
This axe looks really awesome, physics for the win.

Also, instead of using an old rubber tire, I highly recommend building a variable length, tensioning chain, much like this one: http://www.youtube.com/watch?v=wrLiSMQGHvY - Makes chopping wood so much more fun.

And then, there is also the stikkan: http://www.stikkan.com/ - Perfect to hang up next to your fireplace to do some more fine grained wood chopping, cutting larger pieces into smaller ones.

2
ghshephard 3 hours ago 4 replies      
That has to be some of the most seasoned, knot-free wood ever split. I wonder how many logs he had to go through to find stuff that split that well - 20 logs for every one that did that?

Sledge Hammer, Splitting Maul gets the job done 95% of the time. "Eccentric Axe" the other 5%.

Well - maybe, 85% Splitting Maul 12% Eccentric Axe, and 3% splitting wedge (which typically has a torsion in it to create a turning effect to split the wood.)

I would love to hear of an independent comparison of the Eccentric Axe versus a Splitting maul.

The tire is a really great idea though.

3
barkingcat 7 minutes ago 0 replies      
4
binarymax 1 hour ago 0 replies      
I, as well as others, have a history of strained wrists when splitting wood with a traditional axe. Clicking through to their website they recommend a loose grip when the head is about to strike, allowing for the rotation to take place. This makes sense because before you'd need a strong grip to hold onto your axe to make sure it doesn't slip out of your hands when giving a swing strong enough to split. Since you are swinging much more gently, this may actually work! At the very least I am glad that I don't need to split wood these days otherwise I'd give this a try in a second.
5
tzury 55 minutes ago 0 replies      
This is the page you need to read!

http://www.vipukirves.fi/english/description.htm

6
easy_rider 1 hour ago 1 reply      
This is really cool. I've never split more than a couple dozen logs in my life, but Ray Mears taught me to wedge the blade to the side on impact. This seems to emulate those physics. I'm not a big guy, and it's a pretty hard motion to get into when you're swinging down as hard as possible.

This seems like a very capable survival/bushcrafting tool for less accomplished wood cutters.

tl;dr I wood buy.

7
UweSchmidt 1 hour ago 2 replies      
Alternative title:

"New axe design uses lever action to make splitting wood a lot easier"

"Uses physics" sounds like "Stand back: I'm going to try science" to me.

8
fit2rule 3 hours ago 0 replies      
I thought it was already common knowledge among the axe-wielding cognoscenti that the way you chop wood without tiring yourself out completely is to add a little 'twist' to your down-swing, just as the blade makes contact, which has the same leverage effect - albeit with a 'normal' blade.

I dunno, I guess I just learned that little twist from my uncle and grandfather, and never really thought it was so magical. Not sure how I feel about the safety of a mighty wood-cutting sharp blade being off-balance on the downswing - sure, the guy in the video has a fairly safe setup, but if you don't have the luxury (i.e. are a consumer who just bought one of these Wonder Axes) of having a safety rig, the potential for mis-direction and glancing blows from the axe being redirected towards the user seems pretty high ..

9
lafar6502 2 hours ago 0 replies      
For me the greatest part of this innovation was putting the log into a car tire.
10
dirktheman 2 hours ago 0 replies      
Accurate axeman, but with that kind of neat log I guess you can have the same results with a regular axe and a good technique.

It's very hard to improve something that has been around for the past 35,000 years, I guess. Maybe they're better off working on that car tire so that it can accommodate logs of different diameters.

11
hrkristian 3 hours ago 2 replies      
I've spent nearly every winter growing up swinging axes, and I cringed a bit when I saw him strike branches.

For the most part the axe does a wonderful job against anything, and that guy has ridiculously good aim, but anyone who has at some point been bad at chopping wood probably knows those twists to the side can do a real number on your wrists and hands. It seems to happen quite a bit.

It's still an amazing innovation, and I hope to be able to pick one up as a gift. The article is sadly not very informative.

12
bprater 3 hours ago 3 replies      
Knotted wood is much more challenging to split.

Even with a hydraulic splitter, chunks of wood with lots of branches can stall a machine pressing with tons of force. Great demo, but unrealistic unless you only chop beautiful limbless tall oaks.

13
jotm 2 hours ago 0 replies      
This would hurt sooo many newbies... You're much better off with a normal axe that won't try to twist and jump out of your hand every single time.

For wood cutting, the size of the axe head matters a lot - too small and it doesn't have enough force, too large and it gets stuck very easily.

The length of the handle is also important - you'll fare much better with longer ones, but the longer it is, the harder it is to aim and control.

As with anything, practice is key, but I'm pretty sure you don't want to start with this axe.

14
baq 3 hours ago 2 replies      
If you've ever tried to use an axe, the video in the article will look like sorcery.
15
emiliobumachar 1 hour ago 0 replies      
This is one of those inventions, like the hot-air balloon, for which we had all the prerequisite technology for ages.
16
KhalilK 2 hours ago 0 replies      
I am most impressed by the use of the tire to keep the wood upright and positioned.
17
marktangotango 1 hour ago 0 replies      
I'm amazed how many of you have used an axe. Nothing to add, you all have said it all :)
18
zomg 1 hour ago 0 replies      
There's nothing new or innovative here. People have been doing this for years... it really depends on the KIND of wood you're chopping. This guy has it down to an art: https://www.youtube.com/watch?v=2vThcK-idm0
3
Growing-ups aeon.co
24 points by timw6n  2 hours ago   2 comments top
1
tommo123 16 minutes ago 1 reply      
Surely that should be 'growings-up'? Motion to burn at the stake the author/poster until we reach a conclusion
4
How to exploit home routers for anonymity danmcinerney.org
14 points by DanMcInerney  1 hour ago   discuss
5
Whither print.css? A Rallying Cry for a Web That's Fit to Print (2013) modrenman.com
17 points by pessimism  2 hours ago   6 comments top 4
1
theandrewbailey 1 minute ago 0 replies      
I don't find this post that useful. It goes through some sites, criticizing what they do wrong, but offers no examples of sites that get it all (or all-1) right. I'm not a designer, but I still like to create nice looking things. No points are given on what makes a nice printout, even though some of the examples given look completely fine to me.
2
onli 27 minutes ago 0 replies      
The premise is questionable. A webpage lives in its own medium, or it is its own medium. It is normally meant to be read with an electronic device, to be interactive and linked, though those devices can be quite different of course.

But by printing a webarticle, you are transforming it into another medium, onto paper. Therefore it is actually a very strong assumption - and one I don't follow - to "expect them to support the habits of people who prefer to read longer articles in print".

And that's even before talking about protecting nature from wasteful habits like this.

But sure, if you enjoy it, build a small print.css, removing everything unnecessary and making it readable in black & white. Just be aware of the medium change, and that a good design in one medium won't necessarily work in the other.

3
Raphmedia 33 minutes ago 2 replies      
I've heard of people printing whole articles and reading them in paper format before.

Personally, I would rather have a print.css that hides everything but a message that says "save the trees, don't print an article only to discard it after".

We do have some clients that ask for their website to be printable, but it's mostly clients that are in the legal or medical business. It doesn't come in regular contracts, the client has to ask for it.

4
kalleboo 23 minutes ago 0 replies      
The only thing that I usually bother to ensure really works when printed is receipt pages. I haven't even owned a printer in years, but printing receipts to PDF is something I do daily.
6
Gabriel García Márquez, Literary Pioneer, Dies at 87 nytimes.com
419 points by antr  16 hours ago   129 comments top 29
1
simonsarris 15 hours ago 8 replies      
Oh my. A paragon of magical realism and my second favorite author. Rest in peace.

Liking storytelling alone is sometimes not enough to like Marquez, you have to love language too. He uses (some might say abuses) language to impact his storytelling, often using incredibly long, convoluted sentences to weave his narrative. It can be hard to follow, sometimes intentionally, but I find it enormously satisfying to read and follow along with his brain. Like slowly drinking a maple syrup of words.

One of the best examples is the first 15 or so[1] pages of Autumn of the Patriarch[2], where the narrator winds this thread of what has happened slowly, using sentences that span pages, until you realize a shift from what has happened to a sort of what is about to happen. Then a fist slams on the table and the realization strikes you that the first part of the description was a kind of set up, this beautiful ruse. I wish I could be more descriptive but it would give away the delight. It's a great book about terror and despotism.

Marquez is not the kind of thing you can read in a noisy environment. At least I can't. I adore him so much. I could write a eulogy for days.

If you've never read him, please take a moment to read one of my favorite short stories, A Very Old Man With Enormous Wings

http://simonsarris.com/lit/a-very-old-man-with-enormous-wing...

(I've hosted a copy of it (and many more short stories) for ages because most of the copies on the web are plagued with ads and miserable formatting)

If One Hundred Years of Solitude seems too long for you, I urge you to look into some of his very excellent shorter books, such as Autumn but also Of Love and Other Demons[3] and Love in the Time of Cholera.[4]

(Chronicle of a Death Foretold is even shorter, but I do not recommend it as the first Marquez book you read!)

[1] It could be the first 10 or 30 pages, it's been several years, but I am certain it's one of the better (and shorter) examples of his style.

[2] http://www.amazon.com/dp/0060882867

[3] http://www.amazon.com/dp/1400034922

[4] http://www.amazon.com/dp/0307389731

2
rjtavares 15 hours ago 9 replies      
"Many years later, in front of the firing squad, Colonel Aureliano Buendía would remember that distant afternoon his father took him to see ice."

Best opening line of a book ever. RIP.

3
jortiz81 9 minutes ago 0 replies      
Many people in the US, when asked about Colombia, think of negative things: drugs, etc. Also, down in Colombia, the Caribbean coast was often looked down upon by those from the capital -- seen mainly as uneducated people with strange customs and a different way of talking. Gabriel García Márquez uplifted not only the image of Colombia world-wide, but also the image and culture of the Caribbean coast. I am proud that my family is from that region and he made me proud to tell the world that I am Colombian. May he rest in peace.

Also, I think this quote (from an article in NPR) sums up why he's so admired in Latin America:

"Garcia Marquez is speaking about all the people who are marginal to history, who have not had a voice. He gives a voice to all those who died. He gives a voice to all those who are not born yet. He gives a voice to Latin America."

4
chimeracoder 15 hours ago 0 replies      
I read six of Garcia Mrquez's stories in school - my favorite was "The Handsomest Drowned Man in the World"[0] ("El Ahogado ms Hermoso del Mundo"). If you're looking to get a taste of his writing but don't have time to read an entire book, this short story captures his style very well.

In a similar vein is "A Very Old Man with Enormous Wings" [1] ("Un señor muy viejo con unas alas enormes"), which was referenced in R.E.M.'s music video for "Losing My Religion" [2]

[0] https://hutchinson-page.wikispaces.com/file/view/The_Most_Ha...

[1] http://www.ndsu.edu/pubweb/~cinichol/CreativeWriting/323/Mar...

[2] https://www.youtube.com/watch?v=if-UzXIQ5vw

5
tdees40 14 hours ago 1 reply      
My favorite Marquez story is that he never used adverbs ending in -mente, so he called his English language translator (Edith Grossman) and requested that she not use any adverbs ending in -ly.
6
nfc 3 hours ago 0 replies      
RIP. I felt a shock when I woke up and discovered this. He is probably the author that has most strongly influenced my literary tastes. Gabo wasn't just a great writer but loved by many in Spanish-speaking countries. His mastery of prose and outstanding ability at his craft are laudable, but Gabo was also a great person. A friend had the chance to meet him in the context of her PhD; I guess I'll forever envy her for that.

Being this my first HN comment (thanks, Gabo, for the strength), I'll go all out with a second, unrelated part of the comment, less emotionally charged but perhaps more HN-like:

There are many comments in the thread about the translation of the first sentence of "One Hundred Years of Solitude". Translating is such a hard task; there is no way part of the meaning and subtleties will not be lost, since languages are not one to one. And even if we manage to pass on most of the meaning, keeping the flow will be very hard except for very similar languages (Spanish/Portuguese). What is an impossible problem for computers is one for translators as well; we can only hope they give us a tasty human take on the task. I wonder if one day automatic translations of literary works will have a "style" option to simulate different translator sensibilities, or if we will settle for a winner-algorithm-takes-all-the-translations.

Something more I'd like to share; I'm very curious if other people feel the same, because part of my homemade theory of language depends on this :). I'm lucky enough to read high-level literature in different languages; however, even if I can appreciate it, the pleasure I experience while reading Spanish is on a different level. Somehow a similar experience happens while talking: I feel more strongly bound to Spanish in a very subtle way. It's not something I usually notice, just on some occasions. I only started learning languages after 8. A possible reason is how the brain gets bound to words and language when very young; another is that I have lived more emotional experiences in that language. The second hypothesis would have to explain why I do not feel that way in French even though I've lived in Paris for 10 years. Obviously one case study, apply a grain of salt ;)

7
jmadsen 1 hour ago 0 replies      
Much of what I read of his was in Spanish, a second language for me, but even so I could see his command of language was incredible.

Things like, in "Relato de un náufrago", the storyteller has emotional ups and downs each chapter - and García Márquez carefully chose words that sounded emotionally up or down, giving a sense of rising and falling on waves through the whole story.

That, detected by someone whose Spanish was "solid" at best - what a joy it must be for a native to read.

8
russell 15 hours ago 0 replies      
"One Hundred Years of Solitude" was the only book that everyone in my family ever read, me, my wife and my three kids.

"Mr. Garca Mrquez, who received the Nobel Prize for Literature in 1982, wrote fiction rooted in a mythical Latin American landscape of his own creation, but his appeal was universal. His books were translated into dozens of languages. He was among a select roster of canonical writers Dickens, Tolstoy and Hemingway among them who were embraced both by critics and by a mass audience." from the article.

But the article doesn't begin to do the book justice. The mythology is Colombian but it all is real to the reader. It is very worthwhile to read One Hundred Years along with a literary biography of Marquez. It was a wonderful experience for me. BTW my taste is purely science fiction.

9
r4pha 15 hours ago 0 replies      
I absolutely adore this man. I was lucky to be given a Portuguese-translated copy of "One Hundred Years of Solitude" at the age of 16. I read it back then and loved the story itself and especially the beautiful writing style. About four or five years later I read it again in the original (even though I don't speak Spanish very well) and was even more amazed by the beauty of it and by how _my_ interpretation of it changed. I loved everything I have ever read from him, but I loved "One Hundred Years" so much I even feel ashamed of trying to use my own words to describe it.
10
maceo 11 hours ago 1 reply      
Let's not forget that GGM was a life-long socialist and a supporter of the Cuban revolution.

He spent many years living in Cuba and he considered Castro to be one of his best friends. He was a firm supporter of Chavez, and looked forward to the day that Simon Bolivar's idea of a united Pan-America would be realized. Because of this, he was prohibited from entering the US during the Reagan administration.

As much as I love his works of fiction, my favorite book of his is the first volume of his autobiography, Living to Tell The Tale. I've been patiently waiting for news about volume 2 and 3 ever since the first one came out in 2002. I have never heard anything about these -- whether they were ever written remains a mystery. RIP to a magnificent man who brought so much pride to the people of our scarred continent.

11
paul_f 16 hours ago 5 replies      
Can someone provide a quick summary of what it was that made Marquez so prominent? I had not known much about him at all.

FYI, if, like me, you have trouble accessing the article and are using Chrome, right-click and open it in an incognito window.

12
anuraj 3 hours ago 1 reply      
I read Marquez's masterpiece '100 Years of Solitude' in my native language Malayalam 25 years ago and it got etched into my mind forever. In the next 2-3 years I read almost all the works of Marquez available in English. Later a lot more Latin American authors including Borges, Juan Rulfo, Carpentier, Manuel Puig, Fuentes, Cortazar, Paz etc. became popular among the reading public in my region, but GGM was the one who started it all and remains one of my favourite writers of the 20th century. I have a feeling that Marquez did his best work in the short story genre - despite his reputation as a novelist.

It is a coincidence that one of the first magical realist novels of my mother tongue - 'Khazakinte Ithihasam (Legends of Khazak)' also got published almost at the same time as '100 years of Solitude' was being published in Spanish and both works present highly resplendent and almost spiritual language and journeys (almost untranslatable).

Opening line of 'Legends of Khazak'- 'When the bus finally reached Kooman Kavu, the place did not seem unfamiliar to Ravi'.

13
3am 15 hours ago 2 replies      
Oh no... GGM was an underappreciated author in the non-Spanish-speaking world (in spite of wonderful, gifted translators... he was just an intrinsically difficult author to translate because of the poetic quality of his writing IMHO). Cien Años de Soledad was one of the first non-trivial, non-English books I read. RIP.

edit: okay, removed 'really'.. I think he was underappreciated on a popular level, even though he was very well appreciated on a critical level.

14
kartikkumar 6 hours ago 0 replies      
A deeply thoughtful literary great, to rank among the likes of Dickens, Cervantes and Dostoyevsky in my mind. Love in the Time of Cholera changed me, just like Crime and Punishment did. It affected me more than any other book has. At the time of my life when I read it, I felt that it spoke to my personal sensibilities. I followed that up with Memories of My Melancholy Whores, which I honestly think is his absolute masterpiece.

Gracias Señor García Márquez.

15
noname123 14 hours ago 2 replies      
Can someone tell me what is the theme of "One Hundred Years of Solitude" as applied to modern society? I read the book awhile ago and appreciated greatly the various character sketches.

Unfortunately, the literary criticism that I sought out back then at a liberal arts college focused mostly on the metaphor of European colonialism in Latin America (the industrialization of the town with the rubber plant, and the subsequent massacre of the residents after some kind of rubber-plant revolution; the consequences of military rule and violent overthrows as embodied by Colonel Buendía and the circular nature of history; the Spanish colonial past long felt after Latin America became independent).

Tbh, I'm not really interested in the whole multiculturalism and ethnic studies rehashing of the white guilt trope. However, I find the obsessions of the various characters fascinating: the scientific obsession of the original patriarch that eventually descended into madness, Colonel Buendía making little gold fishes, the incestuous nature of the whole family, some ethereal nympho character that doesn't speak a word and then one day transcends to heaven, much to the horror of the venerable matriarch. What is your interpretation of the book?

16
jpdlla 9 hours ago 0 replies      
My first favorite novel in Spanish was by GGM, "Relato de un náufrago" (The Story of a Shipwrecked Sailor). Many don't know it, but the full title is actually "Relato de un náufrago que estuvo diez días a la deriva en una balsa sin comer ni beber, que fue proclamado héroe de la patria, besado por las reinas de la belleza y hecho rico por la publicidad, y luego aborrecido por el gobierno y olvidado para siempre." (The Story of a Shipwrecked Sailor: Who Drifted on a Liferaft for Ten Days Without Food or Water, Was Proclaimed a National Hero, Kissed by Beauty Queens, Made Rich Through Publicity, and Then Spurned by the Government and Forgotten for All Time.)
17
Myrmornis 5 hours ago 0 replies      
The Spanish department at Princeton was kind enough to let me take a Spanish class while I was a visiting post-doc. It was great for learning Spanish. But it was so, so painful to see the sorts of pretentious bullshit that the undergraduates were inspired to produce by reading pieces by Marquez and other South American authors writing in the "magical realist" style. I remember enjoying "Love in the Time of Cholera", and I am sure Marquez himself was great, but in general that sort of crap is exactly what you don't want your children wasting their time studying at university, and potentially misdirecting their professional lives thereafter through an underappreciation of the fact that there is aesthetic beauty in actual real stuff and facts about how the world really works.
18
Narretz 4 hours ago 0 replies      
Coincidentally, a few weeks ago I put Marquez on my to-read list. Being a native German, my first choice was naturally the German translation. English wouldn't be a problem though, so I wonder which one I should choose. Granted, in the end the books shouldn't suffer a lot from translation.
19
interpares 7 hours ago 0 replies      
Here is a great interview with him in The Paris Review [1]. I love when he says,

"It always amuses me that the biggest praise for my work comes for the imagination, while the truth is that theres not a single line in all my work that does not have a basis in reality. The problem is that Caribbean reality resembles the wildest imagination."

I know the Caribbean very well and could not agree more.

[1] http://www.theparisreview.org/interviews/3196/the-art-of-fic...

20
KhalilK 15 hours ago 1 reply      
His books were part of my adolescence, but "One Hundred Years of Solitude" was the essence of my formal education. I am sad he died, but I am utterly glad he lived.
21
noname123 15 hours ago 7 replies      
OT but tangential: any magical realism authors to read? So far, I've got Marquez, Jorge Luis Borges and Murakami. And preferably the recommendation should be good at providing philosophical consolation to a code monkey worker-bee in capitalist society.
22
ch4s3 15 hours ago 1 reply      
And now we may never know why Mario Vargas Llosa punched him in the face.
23
dvidsilva 12 hours ago 1 reply      
I'm so 'proud' to see this here, hard to think of something to say so I'll put one of my fav quotes from him:

"She discovered with great delight that one does not love one's children just because they are one's children but because of the friendship formed while raising them."

24
maceo 11 hours ago 0 replies      
In his autobiography he tells a story I love.

While writing 100 Years of Solitude he listened to The Beatles' A Hard Day's Night album on repeat. After the book was published he received a letter from a group of Mexican college students who asked him if he had been listening to A Hard Day's Night while writing the book, because they felt the album in his words.

25
deckardt 13 hours ago 1 reply      
This is one of the reasons I keep reading Hacker News. It's a great source for cutting-edge tech news; more importantly, it's also a great source for important news.
26
camus2 15 hours ago 0 replies      
As a programmer and a poetry/book lover, this is sad news. Please take a 5-minute break from whatever code you are writing (in your free time, of course) to check this author out!
27
jseip 8 hours ago 0 replies      
100 years of solitude will stand as one of the world's greatest literary works for the next ~1000 years. RIP GGM
28
rafaelvega 11 hours ago 0 replies      
I once met this American guy who told me in fluent Spanish that he went and studied the Spanish language after reading one of GGM's books, because he wanted to read it in its original language.
29
iraikov 11 hours ago 0 replies      
His writing was like poetry and song, all in one. Second best opening after "One Hundred Years of Solitude":

"It was inevitable: the scent of bitter almonds always reminded him of the fate of unrequited love. Dr. Juvenal Urbino noticed it as soon as he entered the still darkened house where he had hurried on an urgent call to attend a case that for him had lost all urgency many years before. The Antillean refugee Jeremiah de Saint-Amour, disabled war veteran, photographer of children, and his most sympathetic opponent in chess, had escaped the torments of memory with the aromatic fumes of gold cyanide."

7
The Birth and Death of JavaScript [video] destroyallsoftware.com
413 points by gary_bernhardt  17 hours ago   159 comments top 30
1
jliechti1 16 hours ago 7 replies      
For those unfamiliar, Gary Bernhardt is the same guy who did the famous "Wat" talk on JavaScript:

https://www.destroyallsoftware.com/talks/wat

2
tinco 13 hours ago 1 reply      
The reason why METAL doesn't exist now is that you can't turn the memory protection stuff off in modern CPUs.

For some weird reason (I'm not an OS/CPU developer) switching to long mode on an x86 CPU also turns on the MMU stuff. You just can't have one without the other.

There's a whole bunch of research on VM software-managed operating systems, back when the VMs started becoming really good. Microsoft's Singularity OS was the hippest, I think.[0]

Perhaps ARM CPUs don't have this restriction, and we will benefit from ARM's upward march sometime?

[0] http://research.microsoft.com/en-us/projects/singularity/

3
lelandbatey 13 hours ago 1 reply      
First, I very much love the material of the talk, and the idea of Metal. It's fascinating, really makes me think about the future.

However, I also want to rave a bit about his presentation in general! That was very nicely delivered, for many reasons. His commitment to the story, of programming from the perspective of 2035, was excellent and in many cases subtle. His deadpan delivery really added to the humor; the fact that he didn't even smile during any of the moments when the audience was laughing just made it all the more engaging.

Fantastic talk, I totally loved it!

4
jerf 15 hours ago 0 replies      
It's not far off my predictions: https://news.ycombinator.com/item?id=6923758

Though I'm far less funny about it.

5
vanderZwan 13 hours ago 2 replies      
I guess this is in a way a response to Bret Victor's "The Future of Programming"?

https://vimeo.com/71278954

6
jongalloway2 7 hours ago 0 replies      
Coincidentally, I just released a podcast interview with Gary right after he gave this talk at NDC London in December 2013: http://herdingcode.com/herding-code-189-gary-bernhardt-on-th...

It's an 18 minute interview, and the show notes are detailed and timestamped. I especially liked the references to the Singularity project.

7
spyder 12 hours ago 0 replies      
Looks like Erlang is already getting one step closer to the metal:

http://erlangonxen.org/

http://kerlnel.org/

Also there is another project that can be related to that goal:

"Our aim is to remove the bloated layer that sits between hardware and the running application, such as CouchDB or Node.js"

http://www.returninfinity.com/

8
cjbprime 17 hours ago 2 replies      
For context, this was one of the most enjoyed talks at PyCon this year.
9
granttimmerman 16 hours ago 8 replies      
> xs = ['10', '10', '10']

> xs.map(parseInt)

[10, NaN, 2]

Javascript is beautiful.
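
(For context: the result above follows from Array.prototype.map passing (element, index, array) to its callback, so parseInt receives each index as its radix. A minimal sketch of the behaviour and two common workarounds, in plain JavaScript - nothing here is engine-specific:)

  // map calls parseInt('10', 0), parseInt('10', 1), parseInt('10', 2):
  //   radix 0 falls back to base 10  -> 10
  //   radix 1 is invalid             -> NaN
  //   radix 2 parses '10' as binary  -> 2
  var xs = ['10', '10', '10'];
  xs.map(function (x) { return parseInt(x, 10); }); // [10, 10, 10]
  xs.map(Number);                                   // [10, 10, 10]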

10
nkozyra 16 hours ago 0 replies      
Extraordinarily entertaining and well presented.
11
igravious 16 hours ago 0 replies      
Stellar stuff. Hugely enjoyable. Very interesting thought experiment. I won't spoil it for any of you, just go and watch! Mr. Bernhardt, you have outdone yourself sir :)
12
joelangeway 14 hours ago 4 replies      
He says several times that JavaScript succeeded in spite of being a bad language because it was the only choice. How come we're not all writing Java applets or Flash apps?
13
vorg 14 hours ago 0 replies      
I suspect Nashorn, the just-released edition of JavaScript for the JVM, will be heavily promoted by Oracle and become heavily used for quick-and-dirty scripts for manipulating and testing Java classes, putting a dent into the use of Groovy and Xtend in Java shops. After all, people who learn and work in Java will want to learn JavaScript for the same sorts of reasons.
14
mgr86 17 hours ago 7 replies      
I'm missing some obvious joke...but why is he pronouncing it yava-script.
15
Sivart13 6 hours ago 1 reply      
Where did you get the footage of Epic Citadel used in the talk?

http://unrealengine.com/html5 seems to have been purged from the internet (possibly due to this year's UE4 announcements?) and I can't find any mirrors anywhere.

Which is a shame, because that demo was how I used to prove to people that asm.js and the like were a Real Thing.

16
dsparry 16 hours ago 1 reply      
Very impressive to have been recorded "April 2014" and released "April 2013." Seriously, though, great presentation.
17
Kiro 3 hours ago 0 replies      
A bit OT but what is the problem with omitting function arguments?
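
(One likely answer, sketched in plain JavaScript - this is general language behaviour rather than a claim about what the talk specifically meant:)

  // JavaScript never checks arity: extra arguments are silently dropped
  // and missing ones become undefined, so mismatches fail late or not at all.
  function add(a, b) { return a + b; }
  add(1, 2, 99);  // 3   - the extra argument is ignored
  add(1);         // NaN - b is undefined, and 1 + undefined is NaN
  // Combined with callbacks that pass extra arguments (see the
  // map/parseInt example above), small mistakes become silent wrong
  // answers instead of errors.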
18
steveklabnik 15 hours ago 1 reply      
Consider the relationship between Chromebooks and METAL.

(I'm typing this from my Pixel...)

19
base698 16 hours ago 1 reply      
I wish some of those talks were available for purchase on their own and not in the season packets. Definitely a few I'd buy since I liked this talk and the demo on the site.

Guy has good vim skills for sure.

20
h1karu 3 hours ago 0 replies      
somebody tell this to the node.js crowd
21
camus2 16 hours ago 2 replies      
Nice, nice. Ultimately languages don't die, unless they are closed source and used for a single purpose (AS3). In 2035, people will still be writing JavaScript. I wonder what the language will look like though. Will it get type hinting like PHP? Or type coercion? Will it enforce strict encapsulation and message passing like Ruby? Will I be able to create ad hoc functions just by implementing call/apply on an object? Or subclass Array? Anyway, I guess we'll still be writing a lot of ES5 in the 5 years to come.
22
pookiepookie 9 hours ago 0 replies      
I'm not sure I understand the claims toward the end of the talk about there no longer being binaries and debuggers and linkers, etc. with METAL.

I mean, instead of machine code "binaries", don't we now have asm blobs instead? What happens when I need to debug some opaque asm blob that I don't have the source to? Wouldn't I use something not so unlike gdb?

Or what happens when one asm blob wants to reuse code from another asm blob -- won't there have to be something fairly analogous to a linker to match them up and put names from both into the VM's namespace?

23
ika 8 hours ago 0 replies      
I always enjoy Gary's talks
24
angersock 5 hours ago 0 replies      
It's been kind of fun watching JS developers reinventing good chunks of computer science and operating systems research while developing node.

This talk has convinced me that their next step will be attempting to reinvent computer engineering itself.

It's a pretty cool time to be alive.

25
jokoon 5 hours ago 1 reply      
I want a C interpreter
26
yoamro 17 hours ago 0 replies      
I absolutely loved this.
27
slashnull 7 hours ago 0 replies      
ha-zum yavascript
28
adamman 16 hours ago 1 reply      
"It's not pro- or anti-JavaScript;"

OK

29
Fasebook 14 hours ago 0 replies      
"I get back to the DOM"
30
inglor 16 hours ago 0 replies      
This is actually not a bad lecture. Very interesting, a nice idea and surprising.
8
An experiment in iAd metarain.com
9 points by faizanaziz  1 hour ago   discuss
9
Raindrop.io Smart Bookmarks raindrop.io
39 points by johngreen  5 hours ago   37 comments top 14
1
yardie 2 hours ago 2 replies      
This is great! I've got 40 tabs open in Firefox and my browser is straining under the pressure. A lot of them are research for work (servers, services, reviews). Some of it I don't care to bookmark. And some of it I'd like to read at home but have no way to get it there except emailing the link to myself.
2
ishansharma 4 hours ago 1 reply      
I already use Pinboard and it works well for my needs. What differentiates this from Pinboard?

And of course, how is it going to make money?

3
superasn 4 hours ago 2 replies      
Why does Chrome keep showing a "Translate this" button on pages (I'm on the signup page)? There is something which is making it think it is not in English. Just a heads-up!
4
nemasu 4 hours ago 0 replies      
It appears to be down for me.
5
antr 3 hours ago 0 replies      
When I upload my html bookmark file, Raindrop shows all of the folders within the file, but when I click on import all I get told that I have no bookmarks. Something seems to be broken.
6
arvinsim 1 hour ago 1 reply      
Shame that they don't seem to have an Android app yet. I am just going to wait until they do before I consider it as a replacement for Pocket.
7
iaskwhy 4 hours ago 0 replies      
How is this going to make money?
8
n8m 4 hours ago 2 replies      
Ahw crap :( I was working on something similar. But this looks really good. Now they just have to get over that "hug of death" and pay for a bigger server.

As for money generation- sooner or later you will probably see ads in certain areas unless you sign up for a premium plan.

9
tijs 4 hours ago 0 replies      
Is it Pinterest for men?
10
adityar 4 hours ago 0 replies      
I loved clipboard and then it died. How long before you go away? Signed up anyway.
11
unfunco 3 hours ago 0 replies      
It's entirely possible that two or more people share the same name.
12
fakenBisEsRult 5 hours ago 1 reply      
Now make this self-hosted and then I'm in.
13
chintan39 4 hours ago 0 replies      
Clean and Simple, I like it
14
wololo_ 4 hours ago 1 reply      
Exactly what I hate about pocket is fixed here
10
The New Linode Cloud: SSDs, Double RAM and much more linode.com
485 points by qmr  22 hours ago   250 comments top 53
1
madsushi 21 hours ago 5 replies      
Why do I pay Linode $20/month instead of paying DO $5/month(1)?

Because Linode treats their servers like kittens (upgrades, addons/options, support), and DO treats their servers like cattle. There's nothing wrong with the cattle model of managing servers. But I'm not using Chef or Puppet, I just have one server that I use to put stuff up on the internet and host a few services. And Linode treats that one solitary server better than any other VPS host in the world.

(1) I do have one DO box as a simple secondary DNS server, for provider redundancy

2
kyrra 21 hours ago 6 replies      
I forgot to benchmark the disk before I upgraded but here are some simple disk benchmarks on an upgraded linode (the $20 plan, now with SSD)

  $ dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
  1024+0 records in
  1024+0 records out
  1073741824 bytes (1.1 GB) copied, 1.31593 s, 816 MB/s
  $ hdparm -tT /dev/xvda
  /dev/xvda:
   Timing cached reads:   19872 MB in  1.98 seconds = 10020.63 MB/sec
   Timing buffered disk reads: 2558 MB in  3.00 seconds = 852.57 MB/sec
Upgraded cpuinfo model: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

Old cpuinfo model: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz

CPUs compared: http://ark.intel.com/compare/75277,40201

3
nivla 22 hours ago 3 replies      
Awesome news. Competition really pushes companies to please their customers. Ever since Digital Ocean became the new hip, Linode has been pushing harder. My experience with them has been mixed. Forgiving their previous mishaps and the feeling that the level of customer service has gone down, they have been decent all year. I wouldn't mind recommending them.

[Edit: Removed the bit about DigitalOcean Plans. If you have Ghostery running, it apparently takes out the html block listing different plans]

4
rjknight 22 hours ago 10 replies      
It looks like Linode are still leaving the "incredibly cheap tiny box" market to DO. Linode's cheapest option is $20/month, which makes it slightly less useful for the kind of "so cheap you don't even think about it" boxes that DO provide.
5
pavanky 20 hours ago 2 replies      
I wish Linode (or anyone else other than Amazon) provided a reasonable plan[1] with GPUs.

[1]: Amazon charges $2 an hour; that's about $1500 a month.

6
conorh 20 hours ago 2 replies      
Benchmarking using wrk: the smallest Linode (1024, now 2048) serving a page from an untuned Rails application using nginx/passenger, getting almost no other traffic. Hard to compare of course given the various other factors, but it produced slightly lower performance after the upgrade. Serving a page from nginx directly (no Rails) showed no appreciable difference in performance; I guess the Rails web serving is more vCPU bound?

Before Upgrade:

  Running 30s test @ http://...
    5 threads and 20 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency   308.91ms  135.01ms 985.82ms   80.00%
      Req/Sec    14.15      4.61    24.00     66.36%
    2206 requests in 30.00s, 28.51MB read
  Requests/sec:     73.53
  Transfer/sec:      0.95MB
After Upgrade:

  Running 30s test @ http://..
    5 threads and 20 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency   321.74ms  102.45ms 957.74ms   87.32%
      Req/Sec    12.02      2.18    17.00     80.75%
    1858 requests in 30.01s, 24.03MB read
  Requests/sec:     61.92
  Transfer/sec:    819.98KB

7
vbtechguy 1 hour ago 0 replies      
Updated benchmark results with 2GB vs 4GB vs 8GB vs 16gb plans from Linode vs DigitalOcean https://blog.centminmod.com/346. Definitely Linode has the faster cpus and disk i/o as you move up in plans >2GB. 16GB plans are pretty close though if you look at subtests in UnixBench and ignore the subtests affected by different base Linux Kernel versions used.
8
endijs 22 hours ago 3 replies      
The most interesting part of this great upgrade is that they went from an 8-CPU setup to a 2-CPU setup. But yeah - 2x more RAM and SSDs will guarantee that I'm not going to switch anytime soon. Sadly I need to wait a week until this is available in London.
9
__xtrimsky 18 hours ago 3 replies      
I still prefer OVH.com: http://www.ovh.com/us/vps/vps-classic.xml

For $7 you get: 2 cores, 2GB RAM

For $10 you get: 3 cores, 4GB RAM

They don't have SSD, but SSD doesn't do everything, I prefer more ram.

EDIT: If some of you don't know OVH, it's because it's new in America, but it's not some cheap company; it's a European company that is very successful there, and it just recently created a datacenter in North America. (I used to live in France, and have known them for some years.)

10
raverbashing 22 hours ago 0 replies      
Congratulations to Linode

I stopped being a customer since migrating to DO but my needs were really small

But I think their strategy of keeping the price and increasing capabilities is good. Between $5 and $20 is a "big" difference for one person (still, it's a day's lunch); for a company it's nothing.

However, I would definitely go to Linode for CPU/IO intensive tasks. Amazon sucks at these (more benchmarks between the providers are of course welcome)

11
giulianob 22 hours ago 0 replies      
Holy crap this is awesome. Good job guys at Linode. I said I would switch if the prices dropped about 25% because RAM was pricey.... So now I have to switch.
12
SCdF 16 hours ago 1 reply      
> Linodes are now SSD. This is not a hybrid solution its fully native SSD servers using battery-backed hardware RAID. No spinning rust! And, no consumer SSDs either were using only reliable, insanely fast, datacenter-grade SSDs that wont slow down over time. These suckers are not cheap.

http://techreport.com/review/26058/the-ssd-endurance-experim...

Not to slam what Linode is doing here, and I'm sure there are probably lots of great reasons to buy datacentre-grade SSDs, but just thought I'd point out that slowing down over time (or data integrity issues) are not really consumer-grade problems any more :-)

13
relaxatorium 22 hours ago 2 replies      
This seems pretty fantastic, I am excited to upgrade and think the SSD storage is going to be really helpful for improving the performance of my applications hosted there.

That said, I am not an expert on CPU virtualization but I did notice that the new plans are differently phrased than the old ones here. The old plans all talked about 8 CPU cores with various 1x, 2x priority levels (https://blog.linode.com/2013/04/09/linode-nextgen-ram-upgrad... for examples), while the new plans all talk about 1, 2, etc. core counts.

Could anyone with more expertise here tell me whether this is a sneaky reduction in CPU power for the lower tiered plans, or just a simpler way of saying the same thing as the old plans?

14
ksec 5 hours ago 0 replies      
Sometimes I just wish the pricing system would get better as you go larger.

What is the difference between the 16GB - 96GB plans and a dedicated server? And why would I pay 3x the price? The advantage for those who offer cloud/VPS and dedicated server hosting is that they can mix and match depending on usage. If you are actually building any sort of infrastructure with Linode, those large boxes are extremely expensive.

15
munger 21 hours ago 1 reply      
Rackspace Cloud customer here. These Linode upgrades are very tempting, enticing me to switch.

I get I might not be their target market (small business with about $1000/month on IaaS spending) but there are a couple things preventing me from doing so:

1) $10/month size suitable for a dev instance.

2) Some kind of scalable file storage solution with CDN integration like RS CloudFiles/Akamai or AWS S3/Cloudfront, or block storage to attach to an individual server.

I guess you get what you pay for - in infrastructure components and flexibility, AWS > RS > Linode > DO, which roughly matches the price points.

16
orthecreedence 21 hours ago 3 replies      
Bummer, they're taking away 8 cores for the cheap plans and replacing it with 2. Does anyone know if the new processors will offset this difference? I don't know the specs of the processors.

Linode's announcements usually come in triples... I'm excited for number three. Let's hope it's some kind of cheap storage service.

17
ihowlatthemoon 20 hours ago 1 reply      
VPSBench result:

Before

-------

  CPU model:  Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz
  Number of cores: 8
  CPU frequency:  2266.788 MHz
  Total amount of RAM: 988 MB
  Total amount of swap: 255 MB
  System uptime:   8 days, 12:03
  I/O speed:  69.9 MB/s
  Bzip 25MB: 8.96s
  Download 100MB file: 47.2MB/s
After

------

  CPU model:  Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
  Number of cores: 2
  CPU frequency:  2800.086 MHz
  Total amount of RAM: 1993 MB
  Total amount of swap: 255 MB
  System uptime:   2 min
  I/O speed:  638 MB/s
  Bzip 25MB: 5.10s
  Download 100MB file: 146MB/s
Test: https://github.com/mgutz/vpsbench

18
mark_lee 59 minutes ago 0 replies      
Awesome. Linode or DO - if you're a small or medium-sized company, no other options should be considered at all, not even AWS or Google Cloud.
19
harrystone 20 hours ago 0 replies      
I would love to see them still keep all those old disks and sell me some huge, cheap, and slow storage on them.
20
jrockway 19 hours ago 1 reply      
A nice reward for those of us who have been using Linode from before they even had x86_64 images.
21
vidyesh 21 hours ago 1 reply      
So this makes Linode practically on par with DO's $20 plan. Up till now the $20 plan at DO was better; now it's just the choice of brand.

But here is one thing that DO provides and I think Linode should too: you get the choice to spin up a $5 instance anytime in your account for any small project or a test instance, which you cannot do on Linode.

22
rdl 18 hours ago 0 replies      
Semi-related: does anyone know of any good (but still fairly cheap) providers doing Atom C2750/C2758 servers yet?
23
davexunit 22 hours ago 4 replies      
Cool news, but their website now has the same lame design as DigitalOcean. I liked the old site layout better.
24
mwexler 22 hours ago 1 reply      
There's similar and then there's alike. I guess it makes comparison easy, but imitation certainly must be the sincerest form of flattery:

Compare the look and feel of https://www.linode.com/pricing/ and https://www.digitalocean.com/pricing/

25
extesy 22 hours ago 2 replies      
So now they match DigitalOcean prices but offer slightly more SSD space for each plan. I wonder what DO's answer to this would be. They haven't changed their pricing for quite a while.
26
jebblue 9 hours ago 0 replies      
I was looking into alternatives but now I'll stick with them, I can't find another cloud provider whose stuff works so well.

edit: I just finished the migration, my disk speed test is through the roof, free ram is phenomenal!

27
corford 16 hours ago 1 reply      
Big shame the new $20 plan now only offers 2 cores versus 8 with the current plan. For my workloads, I don't need 2GB RAM or SSD disks, I just need the cores :(
28
jevinskie 22 hours ago 0 replies      
I resized a 1024 instance to 2048 last night and it looks like it is already running on the new processors (from /proc/cpuinfo): model name: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

Should I upgrade? Do I want 2 x RAM for 1/2 vCPUs? =)

29
ausjke 18 hours ago 0 replies      
This is great indeed. I'm happy Linode did this. I ran the commands below 10 times and used the averages:

dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync

Linode: 1073741824 bytes (1.1 GB) copied, 1.09063 s, 985 MB/s
D.O.: 1073741824 bytes (1.1 GB) copied, 3.23998 s, 331 MB/s

dd if=/dev/zero of=test bs=512 count=1500 oflag=dsync

Linode: 768000 bytes (768 kB) copied, 0.478633 s, 1.6 MB/s
D.O.: 768000 bytes (768 kB) copied, 1.01716 s, 755 kB/s

30
__xtrimsky 17 hours ago 0 replies      
Could someone please explain what improvements we can get from SSDs for web applications?

I know it would read files faster, but in most cases reading a couple of PHP files is not such a big improvement.

My guess would be maybe databases ? Read time improvement for MySQL ?

31
filmgirlcw 19 hours ago 1 reply      
Shall we call this the DigitalOcean effect?
32
icantthinkofone 1 hour ago 0 replies      
Without FreeBSD support, it means nothing to me.
33
bfrog 22 hours ago 3 replies      
I'm actually a little unhappy, it looks like they reduced the CPU count for my $20/mo instance. At this point there's basically no reason to stay with them now.
34
h4pless 22 hours ago 2 replies      
I notice that Linode talked a good bit about their bandwidth and included outbound bandwidth in their pricing model, which DO does not. I wonder if DO has a similar model or if transfer capacity is the only thing you have control over.
35
level09 21 hours ago 0 replies      
I would probably move back from Digital Ocean if they allowed a $10/mo plan.

I know that's not a big price difference, but some websites really don't need a lot of resources. They work well on DO's $5 server, and I have a lot of them.

36
funkyy 14 hours ago 0 replies      
I would love to see Linode offer a large HDD option for storage as well. I am dying to find a really inexpensive cloud provider with cheap data space (SATA is fine), reasonable bandwidth but low CPU and RAM, and Linode-style support/caring. Give me a server with a ~500 GB hard drive, 2 TB outgoing transfer, 1 core and 1 GB RAM for ~$20-30 and I am all yours.
37
jaequery 22 hours ago 0 replies      
I'm really impressed by their new CPU specs. From experience those aren't cheap, and it's possibly the fastest CPU out on the market. Combined with the SSDs, it may be that Linode is currently the fastest of any cloud hosting right now.
38
shiloa 19 hours ago 1 reply      
I have mixed feelings about this. We're in the process of moving from Linode to Rackspace but haven't flicked the switch just yet - was planning to this weekend.

Our Linode server (16 GB plan) has been performing terribly lately wrt I/O (compared to, say, a MacBook Pro running the same computations), and we decided we've had enough. I guess we'll have to compare the two after the upgrade and decide.

39
kijin 12 hours ago 0 replies      
About a week ago, I wrote a comment in another Linode-related thread asking how the new usage patterns that hourly billing encourages might affect CPU contention. At the time, I received 11 upvotes but no replies. Apparently, quite a few people were interested in my question but had no useful conjectures to share.

https://news.ycombinator.com/item?id=7564764

Now it's obvious what Linode's answer to that question is: Lower "burstable" CPU for lower plans.

The $20 plan used to be able to burst to 8 cores for short periods, but now it only has access to 2 vcores. The "guaranteed" processing power is probably higher with the newer CPUs, but at the expense of short-term burst performance.

Another minor detail that I find interesting is that the transfer cap for the $20 plan has been increased to 3TB, whereas the $40 plan still gets 4TB. Apart from the transfer cap plateau-ing at the extreme high end, this is the first time that Linode has broken its 11-year-old policy of "pay X times as much money, get X times as much RAM/disk/transfer".

40
Kudos 17 hours ago 0 replies      
Ubuntu 14.04 LTS is now available on Linode too.
41
jaequery 21 hours ago 1 reply      
DO's biggest problem is their lack of "zero-downtime snapshot backup and upgrading". I've not used Linode, but does anyone know if theirs is any different?
42
Justen 22 hours ago 1 reply      
Higher specs sound really nice, but on HN I see people commenting on the ease of DO's admin tools. How does Linode's compare?
43
beedogs 19 hours ago 0 replies      
This is nice to see. SSD has gotten ridiculously cheap lately.
44
nilved 17 hours ago 2 replies      
Linode's recent upgrades are awesome, but people are very quick to forget the period where they were being hacked left and right and didn't communicate with their customers until a defensive blog post weeks after the fact. No matter how good the servers may be, Linode should be a non-starter for anybody who cares about the security of their droplet; and, if you don't, why would you pay Linode's premium fee?
45
jdprgm 18 hours ago 0 replies      
This is really a fantastic upgrade. I've been hosting with Linode for a few months now and been very happy with them. I run a relatively transfer intense SaaS app and a 50% transfer increase makes quite an improvement.
46
ff_ 21 hours ago 0 replies      
Wow, that's beautiful. Currently I'm a DO customer ($10 plan), and if Linode had a $10 plan I'd make the switch instantly.
47
dharma1 21 hours ago 0 replies      
ohhh yesss. DO is good for some locations like Southeast Asia but loving this upgrade for my London and Tokyo Linodes
48
hyptos 19 hours ago 1 reply      
wow, EC2 free tier instance:

  $ dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
  1024+0 records in
  1024+0 records out
  1073741824 bytes (1.1 GB) copied, 35.8268 s, 30.0 MB/s

49
EGreg 7 hours ago 0 replies      
I love Linode. I switched from Slicehost for its 32-bitness back in the day, and stayed for the awesome culture and independence. Slicehost got sold to Rackspace.

However, I am seriously considering a move to Amazon Web Services for one main reason: I need to decouple the hard drive space from the RAM. The hard drive space is so expensive on Linodes!!

50
kolev 14 hours ago 1 reply      
Goodbye, Digital Ocean!
51
notastartup 21 hours ago 1 reply      
These upgrades are impressive but they are a bit too late to the game. DO still has these advantages besides the cheap monthly price:

- DO has an excellent and easy to understand API

- Step by step guides on setting up and running anything

- Minimal and simple

To entice me, it's no longer just a matter of price, DO has extra value added, largely due to their simplicity.

52
zak_mc_kracken 21 hours ago 1 reply      
Does either Linode or DigitalOcean offer plans without any SSD? I couldn't find any.

I just want to install some personal projects there for which even SSDs are overkill...

53
izietto 22 hours ago 0 replies      
Do you know cheaper alternatives? Like DigitalOcean, as @catinsocks suggests
11
How to crash any media player on Android abecassis.me
3 points by etix  11 minutes ago   discuss
12
Another Big Milestone for Servo: Acid2 mozilla.org
279 points by dherman  18 hours ago   88 comments top 10
1
brson 17 hours ago 2 replies      
Servo is the kind of project that launches a thousand research papers. Some of the early results are staggering and the project is still just getting going. It is a great example of doing serious research to practical ends.

Some examples:

- firstly, the entire foundation, Rust, is itself an ambitious research project that solves many long-standing problems in the domain.

- Servo has, or has plans for, parallelism (combinations of task-, data-parallelism, SIMD, GPU) at every level of the stack.

- The entirety of CSS layout (one of the most difficult and important parts of the stack) is already parallelized, and it's fast.

- It puts all DOM objects in the JS heap, eliminating the nightmarish cross-heap reference counting that historically plagues browser architectures (this is part of Blink's "oilpan" architecture).

2
ChuckMcM 16 hours ago 3 replies      
This is awesome. I wonder if there is a more constrained web rendering engine somewhere -- something where, rather than 'render everything we've ever seen', the goal is 'render the following html 'standards' correctly' (or at least predictably). I was looking for something like this for a modern-day sort of serial terminal thing.
3
modeless 14 hours ago 3 replies      
I want Rust scripting support in Servo.

  <script type="text/x-rust" src="foo.rs">
Since Rust is a safe language this should be possible without compromising security, though I don't think anyone's yet attempted to write a JIT compiler for Rust. Has the Servo team considered this as a possibility?

4
bithush 15 hours ago 1 reply      
With the bad press Mozilla has had the past few weeks it is easy for people to forget about some of the awesome things Mozilla are working on such as Rust and Servo. I really like the look of Rust and feel it might be the future native language for high performance applications. It is very exciting!
5
talklittle 17 hours ago 1 reply      
6
schmrz 17 hours ago 4 replies      
> Many kinds of browser security bugs, such as the recent Heartbleed vulnerability, are prevented automatically by the Rust compiler.

Does anyone care to explain how this would work? If you used OpenSSL from Rust you would still be vulnerable to Heartbleed. Or am I missing something?

7
macinjosh 15 hours ago 1 reply      
This is what I see when I run Acid2 in Servo. Perhaps they haven't merged the changes into the public repo yet.

http://cl.ly/image/1b123r220P3u

8
acqq 17 hours ago 2 replies      
Is Servo using GC?
9
camus2 17 hours ago 5 replies      
When can I expect Servo to be in Firefox instead of the current engine? 2015/2016? do you have a rough idea?
10
sgarlatm 16 hours ago 1 reply      
I'm curious what Chrome's plans are for the future, in particular related to parallelization. Has anyone seen any articles about that anywhere?
13
Ask HN: What encryption algorithms should we take as compromised?
27 points by Comkid  2 hours ago   11 comments top 8
1
pja 2 hours ago 1 reply      
No, ssh2-rsa is not known to be broken, although it's suspected that the NSA can factor some small (<=1024 bits) RSA keys if they really want to.

It's believed that any elliptic curve algorithm that doesn't have a transparent process for choosing the curve points may have been backdoored by the NSA choosing points that they already knew how to factor. If you use those curves, then you're revealing your secrets to the NSA but not to anyone else, because the discrete log problem is still (mostly) just as hard as it ever was.

Specifically, the elliptic curve random number generator in NIST SP 800-90A is believed to have been backdoored by the NSA. For obvious reasons no one has any hard proof, just very strong circumstantial evidence.

You can continue to use SSH2-RSA with decent size (2048 bit as a minimum) keys & AES. Those are not believed to be breakable at the current time, although as ever you can never have absolute certainty in these matters!
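
For anyone who wants to act on that advice, here is a minimal sketch (assuming the Python "cryptography" package, which isn't mentioned above; the PKCS8/PEM output choices are illustrative only) of generating a 4096-bit RSA key:

  # Sketch: generate a 4096-bit RSA private key and dump it as unencrypted PEM.
  from cryptography.hazmat.backends import default_backend
  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric import rsa

  key = rsa.generate_private_key(public_exponent=65537, key_size=4096,
                                 backend=default_backend())
  pem = key.private_bytes(
      encoding=serialization.Encoding.PEM,
      format=serialization.PrivateFormat.PKCS8,
      encryption_algorithm=serialization.NoEncryption(),
  )
  print(pem.decode())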

2
yk 36 minutes ago 0 replies      
Both Snowden and Schneier said something to the effect of "trust the math." [1,2] Additionally the leaked Tor presentation [3] seems to indicate, that the NSA can not break the primitives used in Tor. So the algorithms that were considered secure before the Snowden leaks seem to be secure. ( But this is purely a statement about algorithms, you still need to use a well studied and tested implementation of these.)

[1] Schneier: http://www.theguardian.com/world/2013/sep/05/nsa-how-to-rema...

[2] Snowden: http://www.theregister.co.uk/2014/03/10/snowden_a_few_good_d...

[3] http://www.theguardian.com/world/interactive/2013/oct/04/tor...

3
silenteh 46 minutes ago 0 replies      
In general you should prefer crypto constructions which are a result of global competitions. For example AES and SHA3.

You should avoid at all costs anything that has been standardized by NIST without going through years of reviews by international cryptographers. Dual_EC_DRBG is a clear example of crypto construction which falls into this category.

This is my general rule of thumb.

However knowing which ciphers one should use is not enough! You absolutely need to know HOW to use them.A basic and superficial example is AES in ECB mode, which is semantically secure as long as you use a key to encrypt one and only one single block.Another one is, for example, after how many encrypted blocks a key should be rotated, based on the underlying cipher used.
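
To make the ECB caveat concrete, here is a small sketch (assuming the Python "cryptography" package; the key and plaintext are made up) showing that two identical plaintext blocks produce two identical ciphertext blocks under ECB, which is the structure leak you get as soon as more than one block is encrypted under the same key:

  # Sketch: AES-ECB maps equal 16-byte plaintext blocks to equal ciphertext blocks.
  import os
  from cryptography.hazmat.backends import default_backend
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key = os.urandom(32)            # AES-256 key
  block = b"exactly 16 bytes"     # one full AES block
  enc = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend()).encryptor()
  ct = enc.update(block + block) + enc.finalize()
  print(ct[:16] == ct[16:])       # True: the repetition is visible in the ciphertext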

Once you have learnt how to use the basic building blocks of crypto you are then NOT supposed to write your own implementation and instead use existing ones... there is a small problem with this: they are either broken or they don't implement all the necessary crypto constructions you need. OpenSSL is an example of a broken crypto implementation, and NaCl does not have TLS implemented.

So this is a short summary and my personal opinion of why crypto is hard. On top of all this, there are not enough experts out there who have the time to review crypto implementations or new and old constructions, and we are living in a historical period where we desperately need crypto to protect our privacy.

So my final suggestions is to take some of your spare time and go through Dan Boneh Crypto 1 at Coursera: https://www.coursera.org/course/crypto

It is worth every single minute.

Once you have done that, I would also suggest you to take the Matasano Crypto challenges: http://www.matasano.com/articles/crypto-challenges/

Finally I want to thank everybody who have taken their time to create and maintain both Crypto 1 course and the Matasano challenges.

4
p4bl0 1 hour ago 0 replies      
This question only makes sense if you give the threat-model to consider.

Is it only classical cryptanalysis on the cryptographic algorithm? Or do you take into account the programming mistakes (not necessarily related to crypto) of specific implementations? Or do you allow side-channel or fault-injection attacks, which will be able to break most algorithms, if they are not implemented with specific countermeasures?

In any case, it is a very difficult question which doesn't have a single definite answer.

5
sillysaurus3 46 minutes ago 0 replies      
If you're wondering what isn't compromised, the information here has withstood the test of time and scrutiny from the crypto community: http://www.daemonology.net/blog/2009-06-11-cryptographic-rig...

Barring some major advance in breaking crypto (which is entirely possible) it will probably stand for a long time to come.

6
cliveowen 1 hour ago 1 reply      
It's not just about compromised encryption algorithms, it's also about picking the right algorithm for a given purpose.

For instance, a hashing algorithm can be used to securely store passwords, and must therefore be slow, or to find duplicate files, a task which greatly benefits from speed. If you use a fast hashing algorithm to "securely" store passwords you might as well use a compromised algorithm, since the security is nonexistent in both cases.
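
A standard-library sketch of that split (the inputs and iteration count here are just for illustration): a plain SHA-256 is the fast case you want for duplicate detection, while a deliberately slow, salted KDF such as PBKDF2 is what you want for password storage:

  # Sketch: fast hash for dedup vs. deliberately slow, salted hash for passwords.
  import hashlib
  import os

  file_digest = hashlib.sha256(b"contents of some file").hexdigest()   # fast on purpose

  salt = os.urandom(16)
  pw_digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200000)  # slow on purpose
  print(file_digest, pw_digest.hex())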

I think the same applies to crypto algorithms: it doesn't matter if the building blocks are individually secure if you don't know how to put them together in a secure fashion.

7
KhalilK 1 hour ago 0 replies      
For an n-bit RSA key "The absolute minimum size for n is 2048 bits or so if you want to protect your data for 20 years. [...] If you can afford it in your application, let n be 4096 bits long, or as close to this size as you can get it."

http://www.javamex.com/tutorials/cryptography/rsa_key_length...

8
joetech 20 minutes ago 0 replies      
I'm of the opinion that trusting any of them at this point could disappoint.
14
Rust for C++ programmers part 1: Hello world featherweightmusings.blogspot.co.nz
30 points by adamnemecek  5 hours ago   10 comments top 3
1
coldtea 1 hour ago 1 reply      
God, I hate the single quote character.

Perhaps I've been conditioned by decades of languages with balanced quotes (for strings etc.), plus some OCD, but I can't stand an open '.

2
acqq 2 hours ago 1 reply      
How do you format the arguments passed to println? Do you have to make new types just to do that?
3
dcsommer 2 hours ago 0 replies      
I like this article. It's short and easy for the target audience and has enough content to let C++ developers start to understand Rust. It also hints at and leaves unexplained other features in the language, piquing the curiosity of the reader.
15
Illumina Accelerator Program illumina.com
29 points by sakai  5 hours ago   5 comments top 2
1
pvnick 2 hours ago 1 reply      
Oh, that is really cool. From the FAQ:

>Candidate teams are limited to five members. They must be genomic researchers, entrepreneurs, startups, or early-stage companies from academia or industry that aim to take their promising NGS applications to market.

If I were starting a company in the genomics space (maybe someday), I would definitely apply with Illumina. They recently hit a milestone whereby a complete human genome can be sequenced for $1000 [1], which has been a goal for over a decade since it "neatly highlights the chasm between the actual cost of the Human Genome Project, estimated at $2.7 billion over a decade, and the benchmark for routine, affordable personal genome sequencing" [2].

This will be one of the more exciting, and yet at the same time terrifying, areas of research and innovation. Personalized medicine is the future of healthcare, but we'll need brilliant, well-intentioned people to lead us there in a way that benefits us while avoiding the numerous ethical challenges along the way.

[1] http://www.illumina.com/systems/hiseq-x-sequencing-system.il...

[2] http://en.wikipedia.org/wiki/$1,000_genome

2
bruceb 3 hours ago 1 reply      
Financial support, including $100,000 instrument access (MiSeq System and NextSeq 500 System), sequencing reagents, 20% research assistant time, $100,000 convertible notes, and an equity line of $20,000 or more

I have no knowledge in this area, any thoughts on this deal by someone who does?

16
The Linux Security Circus: On GUI isolation theinvisiblethings.blogspot.fr
123 points by simonbrown  13 hours ago   52 comments top 19
1
schoen 12 hours ago 1 reply      
I agree with the concern and appreciate the solution that Qubes offers.

It might be worth pointing out that some of the "hippies" also became concerned with this and implemented their own partial solution within the X framework decades ago -- the "secure keyboard" feature. You can see it in action: run an xterm, Ctrl-Left Click in the window, and choose "Secure Keyboard" from the menu. Now all X11 keyboard events are sent exclusively to that xterm, not to any other X application (even if the other application has focus). Caution: for these purposes the window manager is also an "application", so you won't even be able to Alt-Tab to switch applications while the secure keyboard mode is active!

The X11 developers assumed that you would use the Secure Keyboard feature while typing important passwords. There's a pretty elaborate discussion of this in the man page for xterm(1).

There, the threat that they're most focused on is remote display access from applications on other hosts, rather than malicious software potentially running on the local host. For example, the authors of xterm fear that you'll be using machines foo and bar in the same computer lab, and run "xhost foo" on bar, and then an attacker will rsh into foo and log in to their account there and then run "DISPLAY=bar:0 keyboard_sniffer" from their account on foo, whereupon the X server on bar will conclude that the remote client request from foo is perfectly legitimate because you told it to accept all X11 connections originating from foo.

Of course the secure keyboard feature only partially mitigates one aspect of this threat, while Qubes offers a much more thoroughgoing and useful mitigation to a larger range of things. (I might also mention ptrace, where recent Linux distributions have activated a kernel policy which forbids, by default, having a non-root process start to ptrace another running process that isn't its child. I believe this was in response to malware grabbing private keys from programs like ssh-agent via the ptrace interface. Oops. It does also mean that you can't strace -p or gdb attach something that's already running, unless you become root and change the policy.)

In general, the idea of protecting against software that's running locally as the same user is something of a novelty on most desktop OSes.

https://en.wikipedia.org/wiki/Discretionary_access_control

2
zobzu 12 hours ago 2 replies      
SELinux has xorg hooks [1] to isolate the gui, actually... yes that means with SELinux you can control which window can talk to which, what clipboard can be used where, by what window, etc.

not that it makes the xorg code better in any way, but, when you're going to say the world is ignorant and you know it all, at least get your stuff right.

That's why people "hate" you as you describe in this post. Having an aggressive style doesn't make you more accurate. It just makes you annoying.

[1] http://www.nsa.gov/research/_files/selinux/papers/xorg07-pap...

3
eliteraspberrie 11 hours ago 3 replies      
BTW, Windows is the only one mainstream OS I'm aware of, that actually attempts to implement some form of GUI-level isolation, starting from Windows Vista.

Is GetAsyncKeyState still available in Windows Vista? I'm guessing it is, and processes can still read each others' keystrokes without problem. The MSDN documentation doesn't mention it being deprecated. http://msdn.microsoft.com/en-us/library/windows/desktop/ms64...

(By the way this article is from 2011. It would be interesting to read an update.)
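
As far as I know the call is still documented and available. A minimal sketch of the concern (Windows-only, Python via ctypes; the key choice and polling interval are arbitrary) is that an ordinary same-user process can poll global key state even while another window has focus:

  # Sketch: poll global key state from an unfocused process via GetAsyncKeyState.
  import ctypes
  import time

  VK_SHIFT = 0x10                      # virtual-key code for the Shift key
  user32 = ctypes.windll.user32

  for _ in range(50):                  # poll for roughly 5 seconds
      if user32.GetAsyncKeyState(VK_SHIFT) & 0x8000:   # high bit set: key is down now
          print("Shift observed down, without having keyboard focus")
      time.sleep(0.1)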

4
bm98 11 hours ago 1 reply      
The very first comment below the article (correctly) contradicts the author's claims about SELINUX sandbox. The author acknowledges the comment, and criticizes the SELINUX implementation, but does not dispute the fact that SELINUX sandbox ("sandbox -X xterm" in RHEL/CentOS/Fedora/SL) does in fact defeat the keystroke logger attack described in the article.
5
sandGorgon 6 hours ago 1 reply      
Can someone (who is more familiar than me with this topic) comment on whether this is a by-product of Xorg running with root privileges?

The systemd-logind work on Fedora [1] is attempting to run Xorg without root rights (and Wayland/Weston[2] in the future) and I'm wondering if that is a much more effective fix than all this sandboxing ?

[1] https://fedoraproject.org/wiki/Changes/XorgWithoutRootRights

[2] https://plus.google.com/+DavidHerrmann/posts/ggK1tStCvJH

EDIT: longer discussion on Reddit - http://www.reddit.com/r/linux/comments/1viqpt/x_server_runni...

6
xenophonf 46 minutes ago 0 replies      
Invisible Things Lab published a similar analysis of Windows GUI isolation earlier this year:

http://theinvisiblethings.blogspot.fr/2014/01/shattering-myt...

I find the concept behind Qubes fascinating. I only wish that it was more stable or that it supported native isolation mechanisms like FreeBSD jails.

7
mdwrigh2 9 hours ago 0 replies      
Well, if you count Android as a mainstream OS then it too has GUI isolation. One app can't receive key events when another has focus, nor can it inject events into other applications.
8
doctorfoo 3 hours ago 0 replies      
On my last OS reinstall, I tested out Qubes but was put off by the lack of AppVM graphics card support (I do game development) and lack of bluetooth (wireless mouse + headphones) support. I understand these are key security holes and are absent for a reason, but in my view being able to enable these would have been an acceptable compromise between usability and security - the reality is that my personal security situation is not so critical that I am going to go without these modern conveniences, but I would still like to make it as difficult as possible for attackers otherwise.

Surely Qubes even with these security holes would be better than vanilla Linux? (This was roundly shot down as a proposition on the Qubes mailing list)

9
ams6110 10 hours ago 0 replies      
In the 1990s I worked at an investment bank, all the developers had X terminals (these were workstations that just ran X servers, they did not have any other local compute or storage resources). I discovered one day that I could run a program and direct the display to any X terminal in the network. There were some screensaver-type programs that would make everything on your screen a mirror image, or appear to melt down and run off the bottom of the screen. It made for some fun pranks.
10
shmerl 9 hours ago 2 replies      
It's an old post, but it applies to X.org all the same. Wasn't Wayland planning on addressing these issues?
11
simondedalus 12 hours ago 1 reply      
i've been saying for awhile now (to anyone bored enough to listen) that qubes style isolation is the only safe idea on offer these days. i just wish qubes were usable.

i'm fairly sure that 10 years from now people are going to be citing joanna rutkowska articles with regularity. her flatfooted denial that security is possible without isolation is the only sane perspective.

the point is that starting from full isolation and straining to get things to talk to each other in the hope of achieving productivity makes sense. starting from full interactivity and straining to isolate but just when it's necessary "for security" is a battle we will all eventually lose, and we should know it already.

12
leoc 1 hour ago 0 replies      
Will Wayland improve matters significantly?
13
lmz 12 hours ago 1 reply      
I hear Solaris has Trusted Extensions for X11 (e.g. [1]). Would that handle this case better?

[1]: http://www.desktopsummit.org/sites/www.desktopsummit.org/fil...

14
joveian 10 hours ago 2 replies      
For a lot of end user systems it doesn't really matter if cross-user protection boundaries exist or not because there is only one user on the system. It is the lack of sub-user protection boundaries that is a big concern.

I hadn't heard of Qubes, looks interesting.

15
microcolonel 10 hours ago 0 replies      
Glad to see this reposted (although it being this specific article is of no particular value; there are other articles on this which don't have such shameless plugs).

We now have Wayland well on its way to replacing X (I'd say within two to three years; some users won't ever know they're running Wayland), and from what I can tell, it will offer fairly good isolation for input.

Another important step is graphics drivers being cleaned up to improve isolation between graphical clients. For the time being, though, I can create a GL context and not draw anything, but still see parts of the framebuffer in my context, which I find giggle-worthy.

I think the basic problem here is that graphics systems are hard enough to make work at all, let alone make secure. I will probably not trust a system this complex to protect input or content for a long time, if ever.

16
ecma 13 hours ago 2 replies      
Interesting that this comes up again with Ubuntu Trusty continuing to retain Xorg. Is anyone familiar enough with Mir and Wayland to comment on how those packages approach this kind of thing?
17
okasaki 7 hours ago 2 replies      
People bring this up sometimes, but I've never heard of this being used in any real attack/malware. It appears to be a purely theoretical threat.
18
ciupicri 12 hours ago 0 replies      
The title should have (2011) at the end.
19
carver 12 hours ago 0 replies      
Yikes! Yet another reason to use paper bitcoin wallets.
17
The Design Flaw That Almost Wiped Out an NYC Skyscraper slate.com
169 points by x43b  16 hours ago   53 comments top 16
1
ot 5 hours ago 0 replies      
Reminded me of the design flaw of the Millennium Bridge in London, where the engineers accounted for all resonance modes but one, the one that can be caused by pedestrians:

> Resonant vibrational modes due to vertical loads (such as trains, traffic, pedestrians) and wind loads are well understood in bridge design. In the case of the Millennium Bridge, because the lateral motion caused the pedestrians loading the bridge to directly participate with the bridge, the vibrational modes had not been anticipated by the designers. The crucial point is that when the bridge lurches to one side, the pedestrians must adjust to keep from falling over, and they all do this at exactly the same time.

http://en.wikipedia.org/wiki/Millennium_Bridge_(London)

2
wallflower 14 hours ago 1 reply      
What this story leaves out is that the EVP of Citicorp at the time was an MIT-trained scientist (physical metallurgy). So when this crisis bubbled up, there was no hesitation in action, since there was a scientist/engineer in top leadership who was able to communicate to the board the severity of the situation.

"Together they flew to New York City to confront the executive officers of Citicorp with the dilemma. "I have a real problem for you, sir," LeMessurier said to Citicorp's executive vice-president, John S. Reed. The two men outlined the design flaw and described their proposed solution: to systematically reinforce all 200+ bolted joints by welding two-inch-thick steel plates over them."

http://www.damninteresting.com/a-potentially-disastrous-desi...

http://en.wikipedia.org/wiki/John_S._Reed

3
Timothee 13 hours ago 3 replies      
One point that this article doesn't mention but the video does (starting about here: http://www.youtube.com/watch?v=TZhgTewKhTQ#t=350) is that the building wasn't built exactly as designed.

In particular, the 8-story-high diagonal parts were done in multiple splices that were supposed to be welded together but ended up being bolted together. It sounds like it made things much worse.

4
sitkack 5 hours ago 4 replies      
"But what I found out at that meeting were that all factors of safety were gone."

Many catastrophic "accidents" fit this pattern -- and I use quotes because they could have been averted had people not cut corners without knowing the full context:

  * Chernobyl (after-hours test by an untrained crew with an inverted fail-safe design)
  * http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse (design change in the field, very similar to the Citicorp flaw)
  * 3 Mile Island (indicator that triggered on switch rather than valve)
  * Fukushima (cost cutting on seawall and generator snorkels)
  * Ariane-5 (code reuse, dead code)
If you want to look at good engineering, look at the Brooklyn Bridge[5] and the DC-3[6].

Too many people don't design with proper safety factors. You build it, you test it, you test it till it fails, and you understand those failures. I would trust that another Citicorp wouldn't happen because we can do a realistic wind model, an earthquake model, an anything model. Maybe we can get to a safety factor of 1 when everything is automated, when everyone has an off-site backup of their own brain, but until then: safety factor 6.

[5] http://www.asce.org/People-and-Projects/Projects/Landmarks/B...

[6] http://en.wikipedia.org/wiki/Douglas_DC-3

5
gkop 15 hours ago 1 reply      
6
salem 14 hours ago 0 replies      
This is fantastic validation for a person's undergraduate thesis. It's a real shame that she wasn't given proper credit at the time.
7
dm2 15 hours ago 1 reply      
Reminds me of: http://en.wikipedia.org/wiki/File:CCTV_Beijing_April_2008.jp...

I'm sure the CCTV building is safe, but I get a small panic attack just thinking about walking or jumping up and down in that overhanging corner of the building.

How did they fix it? The article says they "welded" but doesn't say what was added to increase the strength of the building.

8
rajacombinator 12 hours ago 1 reply      
That's really scummy that the student did not receive appropriate credit at the time or after. That should have been a career maker story.
9
sscalia 14 hours ago 1 reply      
More interesting reading around "Tuned Mass Dampers" http://en.wikipedia.org/wiki/Tuned_mass_damper
10
sea6ear 9 hours ago 0 replies      
I think there was an episode of Numb3rs that was based on this. Maybe Season 1 episode 4 (Structural Corruption)?

It was fun to see the premise turn up in the show and go, oh, I think I know where they got this idea from.

11
cratermoon 13 hours ago 1 reply      
If only software flaws in large commercial proprietary/closed source systems were subject to this kind of discovery and mitigation before things break.
12
bayouborne 16 hours ago 2 replies      
This has always been a fascinating story - I can't believe Slate's just now discovering it. There's a much longer, more detailed account somewhere.
13
paul_f 15 hours ago 2 replies      
Maybe this is a dumb question, but why couldn't they have found another location for the new church? Why did it have to be on the same plot as the Citicorp building?
14
morley 15 hours ago 1 reply      
Maybe this is covered in the New Yorker article marklabedz linked to, but I'm curious what their solution to quartering winds was. (They mention welding as a part of the solution but don't go into more detail.)
15
philgr 16 hours ago 0 replies      
This podcast is fantastical.
16
cmapes 13 hours ago 0 replies      
That was a great read, I usually HATE stories that come from Slate but this partially redeemed them in my mind.
18
How Can Yahoo Be Worth Less Than Zero? bloombergview.com
128 points by foobarqux  14 hours ago   88 comments top 19
1
nostromo 10 hours ago 1 reply      
The liquidation value of a company only serves as a floor for the value of the company if there's actually a chance that the company will be liquidated (or sold).

Yahoo's board and management have shown that they take a long view on Yahoo and will not liquidate or sell the company. So it's completely logical that the market price of Yahoo be less than the sum of its parts, so long as you think that the value of their businesses will continue to decline.

2
JumpCrisscross 9 hours ago 0 replies      
In the 1950s and 60s, American capital markets produced conglomerates. These conglomerates offered an unsophisticated investing public pre-packaged diversification. They were also able to leverage their mass to reliably tap the capital markets. Through the 1970s and 80s, American finance matured. Investors found portfolios better vehicles of diversification than conglomerates. New capital markets negated size as a pre-requisite to financing. The inefficiencies of having unrelated businesses under one roof became a greater liability than any prior advantages. The LBO tigers dismantled the titans.

A similar story seems to be playing out in tech. The dot com bubble scarred a generation of management. These firms hoard cash, disdain debt and covet the reliability size brings. This is not irrational -- the technology capital markets are notoriously capricious. LBOs don't work on an equity-rich capital structure where management holds all the voting rights. Perhaps this will bring an alternative to the acid pens of activist investors.

3
nicholas73 8 hours ago 4 replies      
The answer is really simple: nobody with billions actually thinks Alibaba and YHJ are worth as much as their share prices, and not many common people know about Alibaba and YHJ.

It's the same reason you don't see high flying stocks like FB, LNKD, YELP, etc. actually getting buyout or tender offers for their shares.

In the event of having to sell a large block of shares, all of the share prices will crater, Alibaba included. Prices are set at the margins, so until there is a stampede the stock price will go with the flavor of the month.

Stock price != company value. Nobody will enter a position they can't get out of easily unless there is real worth in holding.

4
dragonwriter 12 hours ago 2 replies      
The idea that market cap (current marginal stock price times total outstanding shares) is valuation is, well, convenient to get an easy way to calculate a number and call it the overall valuation, but doesn't really hold up.

All this article does is point out that if you use that (flawed) method to value Yahoo, and then use the same method to value Yahoo Japan, and then use even less-reliable means to value the non-public Alibaba, and then subtract the last two from the first, you get an unexpected number.

And even if the assumptions underlying the "negative" valuation of the core business were unassailable, it's quite easy for a business to have a negative $10 billion value. Suppose a business has $11 billion in total liabilities, and $1 billion in total assets. Voila, Owner's Equity is -$10 billion.

And this can still be a profitable company. Obviously, making profits means that profit stream has a value (the current value of the stream of future income), but that doesn't mean that you don't have liabilities that exceed that (and the valuation of that stream is not just based on current profit, but expectation of its future continuation. A company can be profitable but the market can lack confidence that it will continue to be profitable.)

5
dpcheng2003 12 hours ago 0 replies      
Some additional reasons:

Yahoo's different parts cannot be traded and thus have no liquidity. There's a huge liquidity discount associated with that.

If Yahoo were to sell the pieces of Y!Japan, Alibaba, etc. on the open market, it would have to do it in a structured and delayed process or else it would flood the market, dropping the respective stocks. More discount.

6
penguindev 12 hours ago 2 replies      
You can read about this in Security Analysis (1940). Basically you don't trust management, so companies can trade for less than liquidation value [edit: and investors don't trust / overlook that a negative subsidiary can be unwound, although that's not technically true in Yahoo's case if all subs are profitable].

It's really amazing how nothing changes in finance; it's just that memories are short.

7
danieltillett 13 hours ago 5 replies      
The article explores a few theories and none really make sense. The only theory that can explain why Yahoo can be worth $13 billion less than two of its three components is that the market believes that its management is so bad that they are going to train wreck the group -- the HP effect.
8
ldd- 11 hours ago 0 replies      
Basically, the Yahoo valuation builds in some expectation that it will sell its shares in Alibaba and Yahoo Japan, but instead of returning the proceeds to shareholders (fully realizing its value), they will attempt to reinvest in Yahoo and destroy some of the value
9
pbreit 13 hours ago 2 replies      
What are good ways to play this besides just buying the stock? In or out of the money calls? Leaps?
10
Retric 13 hours ago 2 replies      
Yahoo would need to pay taxes on its gains on the other companies' stock before it could return that money to its shareholders, so effectively that stock is worth less than its stated value. Also, Alibaba is not a public company, so there is a fair amount of uncertainty in its value.

Thus, Yahoo is not worth negative 13 billion.

11
Mikeb85 9 hours ago 1 reply      
Market cap isn't the value of a company, but rather the value investors place on the shares times outstanding shares.

In Yahoo's case it's priced so cheap because investors aren't betting they'll get a lot of return (dividends and buybacks).

12
johnrob 12 hours ago 0 replies      
For most companies, the value hits zero when the share price hits zero. When a stock drops from $10 to $5, investors see this as "halfway to zero" and adjust. For yahoo, given its valuable holdings, the zero line happens to be somewhere above actual zero (let's just say $10). However, when yahoo drops from $20 to $15 investors are not switching to "halfway to zero" mode.

As simple as it sounds, I think the elevated numbers have allowed the market to go too low. The whole thing is very psychologically driven after all.

13
nroose 6 hours ago 0 replies      
As many have said, this happens often in the stock market. And in other markets. All of these numbers are just numbers on paper, not actual cash in someone's pocket. This is one of the ways Mitt made so much money - buying companies for less than what he could make by selling the pieces or repackaging the whole.
14
ithinkso 12 hours ago 2 replies      
On the not so unrelated topic: does anyone know where you could learn 'basic business'? Something like the first-steps tutorials for programming languages, to be able to understand what they are actually talking about.
15
clef 13 hours ago 3 replies      
Last time I checked years and years ago, yahoo was kind of a search engine/directory. What does it do now?
16
spcoll 13 hours ago 0 replies      
In the absence of a direct way of arbitraging between YHOO, Alibaba and Yahoo Japan, it is not surprising that the respective valuations of these entities will become disconnected.

The market value of a company does not represent its actual worth. Yahoo is not worth negative 13B.

17
adventured 12 hours ago 0 replies      
This isn't even remotely a puzzle.

The cash value of the Alibaba etc. holdings is always going to be discounted some % (often a significant %). The supposed puzzle implies that those assets are valued at max value inside the market cap of Yahoo. They are not, and historically, cash, cash equivalents, or future expected cash value, is always hit with a discount as far as the market cap is concerned. You can see this in action across every type of public company (from Apple to Berkshire).

Investors simply do not put a full value on cash holdings. They prize earnings and growth drastically more than cash on a balance sheet.

18
whoismua 9 hours ago 0 replies      
Meaning that Yahoo's actual business -- Yglesias calls it "Tumblr and Flickr and the iOS weather app that I love and all the news sites and the mail and the fantasy sports stuff" -- is worth a negative amount of money

Poor Yahoo! Let me have them for $0 which is a lot more than negative billions.

Of course, Y! Japan and Alibaba cannot be sold without incurring taxes or lowering the price. And they are worth this much, right now. Tomorrow their stock might drop

19
lnanek2 11 hours ago 1 reply      
Yahoo has a pretty bad history with investors, at least with me. I had a ton of options ready for if they sold to Microsoft at the price point Microsoft offered -- it was a huge win for shareholders -- but they didn't do it. I don't think I'll ever trust them to do what is right for the shareholders ever again...
19
Merkle Patricia Tree github.com
23 points by jc123  5 hours ago   1 comment top
1
shin_lao 5 hours ago 0 replies      
This should mention that the purpose of a Merkle Patricia Tree is to permit safe and fast comparison of large blocks of data.

A Merkle Patricia Tree isn't faster than a regular Patricia tree; the added hashing actually makes it slower.
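
A minimal sketch of that comparison property (plain Python, a binary Merkle tree over fixed blocks rather than a Patricia trie; the block contents are made up): equal root hashes imply equal data, and a differing root can be narrowed down subtree by subtree without re-reading everything:

  # Sketch: Merkle root over data blocks; comparing roots compares whole datasets.
  import hashlib

  def h(data):
      return hashlib.sha256(data).digest()

  def merkle_root(blocks):
      level = [h(b) for b in blocks]
      while len(level) > 1:
          if len(level) % 2:                       # duplicate last hash on odd levels
              level.append(level[-1])
          level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
      return level[0]

  a = [b"block0", b"block1", b"block2", b"block3"]
  b = [b"block0", b"blockX", b"block2", b"block3"]
  print(merkle_root(a) == merkle_root(a[:]))       # True: same data, same root
  print(merkle_root(a) == merkle_root(b))          # False: one changed block flips the root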

20
Dropbox acquires Hackpad (YC W12) hackpad.com
216 points by yukichan  19 hours ago   138 comments top 18
1
UVB-76 18 hours ago 10 replies      
This is how it was always going to go.

Dropbox's core business is unsustainable, and they can't compete long-term with rivals like Google and Apple.

They're flailing in all directions at the moment; pushing for the enterprise/government market with the appointment of Condoleezza Rice, now burning a load of money acquiring businesses offering tangential services, in the hope they can diversify their business model.

It won't work. Acquisitions like this never go to plan, and they are almost always a waste of money.

2
amykhar 19 hours ago 5 replies      
Just a little side note. I really wish people would give an overview of what their service is or does in press releases like this. Quite often, I see 'Facebook bought x' or 'Dropbox bought y' and I click to see what it is, and if I would want to use it. More often than not, there's no little blurb that lets me know what their product even does.
3
ChuckMcM 19 hours ago 1 reply      
One of those 'no brainer' moves, glad to see it got done. Love the irony of a YC exit as an acquisition by a YC company :-) Congratulations, hackpad is an awesome product and the combination with Dropbox has excellent potential.
4
rattray 19 hours ago 1 reply      
I think this makes a lot of sense for Dropbox. Documents are moving online, which means people won't need Dropbox for them.

I have a half-written blog post from months ago on why Dropbox should buy Quip for this reason -- they should be trying to leapfrog Google Docs to stay competitive.

Best of luck to the team!

5
xianshou 19 hours ago 2 replies      
Acquihires are pretty much the default hiring method these days, so "victory" now requires keeping the product active after acquisition.

A toast, then, to Hackpad. Well done.

6
kunle 17 hours ago 0 replies      
This is an exceptional deal. The hackpad team is awesome, the product makes sense, and I remember thinking after Box bought Crocodoc, that Hackpad would make sense as part of Dropbox, especially as it went enterprise and started competing with Word, Google Docs etc.

Congrats to the Hackpad team and to Dropbox here. Solid deal.

7
unhush 17 hours ago 1 reply      
My favorite parts of Hackpad were the features that weren't intended to have mass-market appeal (ex: code syntax highlighting, markdown-inspired keybindings, ability to easily create/delete accounts). These will likely be gone in whatever notes product that Dropbox makes with the help of the (wonderful) Hackpad team.

So for me, this acquisition seems like a loss. I realize that Hackpad has said that they'll keep the site alive, but I expect it to be less functional if everyone maintaining it is a full-time Dropbox employee now. Fingers crossed that there will someday exist a good collaborative doc editor for hackers that doesn't fall over when >10 people connect or require a Google account!

Full disclosure: I have written code and done security auditing for Hackpad. I tried to get them to add vim mode. :)

8
bitsweet 19 hours ago 0 replies      
Hackpad is pretty awesome. Glad it will still be running after the acquisition. Congrats Alex & Igor
9
quadrangle 17 hours ago 2 replies      
Ugh, Etherpad shoulda been copyleft. Hackpad is a travesty for being so non-transparent about the fact that they're just a tweak of Etherpad.
10
jdp23 19 hours ago 1 reply      
Congrats Alex and Igor, Hackpad is really impressive. Great move by Dropbox.

What's the state of the opensource alternatives? Etherpad development seems to have plateaued a while ago.

11
Shank 11 hours ago 0 replies      
Please don't shut it down, like Readmill. I use Hackpad daily, and I'd hate to see it go the way of Readmill and get shoved down the toilet.
12
brianr 18 hours ago 0 replies      
We love Hackpad at Rollbar. Congrats guys, keep up the good work!
13
matthuggins 17 hours ago 1 reply      
Is the landing page terribly jittery for anyone else? I can barely tell what's going on and scrolling takes a bit to respond.
14
dduvnjak 16 hours ago 0 replies      
Well this is not very promising: http://i.imgur.com/1vCvZzI.png
15
orik 19 hours ago 3 replies      
Hackpad and Loom? Dropbox is on a bit of a feeding frenzy.
16
elliott34 17 hours ago 0 replies      
hahahaha ctl-f for "journey"
17
Numberwang 18 hours ago 1 reply      
Hackpad seems down at the moment. Wouldn't trust them with my documents.
18
matthewcford 17 hours ago 0 replies      
Sounds like redirection for the Condoleezza Rice fallout.
21
Dropbox acquires Loom (YC W12) loom.com
201 points by ukd1  20 hours ago   135 comments top 29
1
swanson 20 hours ago 4 replies      
In case you're playing the Acquisition Post Drinking Game:

"Its been a long road and we feel that we have come a long way in solving this problem. We are elated to announce the next step in this journey"

"Its been an immensely exciting journey and we are humbled by the support we received along the way."

http://ourincrediblejourney.tumblr.com/

2
kylec 18 hours ago 8 replies      
This is really disappointing. I have been a paying Loom member since shortly after the service was released, it's too bad it wasn't enough to keep it an independent company. After DivvyShot, Everpix, Snapjoy, and now Loom, I don't think I can trust another photo management startup again.
3
subpixel 19 hours ago 2 replies      
Crap. I bought a year of Loom for precisely the reason that I do not want my photos taking up space on my drive. Now I have to explain to my extended family why all the confusion they put up with setting things up was for naught.
4
cschmidt 19 hours ago 3 replies      
I think the photo management space is like blog platforms. They die so rapidly, I don't want to invest the considerable time to try out a new startup. Dropbox bought out Snapjoy and now Loom. Everpix is gone. I guess Picturelife is still around.
5
rafeed 20 hours ago 0 replies      
I think this is the first time in recent history that a company actually took care of its users after acquisition. Most of the time after the deadline to export your content, it's a big "fuck you" to the customer. At least Loom is making it easy to transfer content to Dropbox and is providing users with the same space on Dropbox as well free for a year. I can respect that.
7
inklesspen 18 hours ago 1 reply      
Well, this is awfully disappointing. I'm in the process of migrating my stuff off of Dropbox due to the Dr. Rice issue (only problem is a lot of apps use it as their sole syncing service); I have no desire to move gigabytes of photos into Dropbox's control.
8
otikik 19 hours ago 1 reply      
Every time I read that name all I can think of is:

"Welcome to the age of the Great Guilds"

And then I open things with "ECED".

9
deciplex 11 hours ago 0 replies      
For those who missed it, this is a company that also acquired Condoleeza Rice on its board recently. Yes, Dropbox, aka 'next' according to a few NSA slides that have been making the rounds, is now taking advice from Condoleeza 'Warrantless Wiretapping' Rice. Run away. Sprint.
10
leejoramo 19 hours ago 0 replies      
What about Snapjoy? This is not the first photo-related site that Dropbox has picked up. So far I have not been impressed with Dropbox's photo efforts. Why should we think this will be different?
11
creativityhurts 17 hours ago 0 replies      
For what it's worth, they're better off with Dropbox than with Facebook :)

Loom has had a lot of success since Everpix got shut down; now it's interesting to see which startup will fill their place.

12
gkya 14 hours ago 0 replies      
I reckon this "big companies hoovering up smaller ones" phenomenon will eventually cause the startup economy to die. None of the services are reliable; they come up, start their services with a "no warranties, use at your own risk and we go away at our own disposal" policy; then some big company like Google, Facebook or the newfangled giant Dropbox buy them out and mix them into themselves. The service is gone. The users have to recover. The only acquisitions I can recall, which did not disrupt users recently was that of Instagram's and Vine's (also WhatsApp, but that's too recent to be sure, isn't it?). But the wonted story is that money is poured onto the owners, and the services get killed. Now, I am not an entrepreneur, and am not that knowledgeable about business topics, but I can see that people become less likely to invest in startups each day, every time news of these kinds hit the headlines. Sad, that is.
13
anon808 16 hours ago 0 replies      
I see that dropbox (a YC portfolio co) bought 2 other YC portfolio cos. I know it's a stretch, but does YC actively pitch smaller portfolio companies to their more successful portfolio companies? Keep the cash in the family.
14
jscheel 19 hours ago 0 replies      
I was considering signing up for Loom a while ago. It was a great value prop for a problem that pretty much everyone runs into eventually. However, there were a few red flags that made me think they were going to go the way of Everpix. Good to see they didn't completely shut down, but I'm also glad I didn't bother spending time with them.
15
aashaykumar92 20 hours ago 2 replies      
Seems like a strategic acquisition to help Carousel grow.
16
kingnight 19 hours ago 1 reply      
The other day when Carousel was announced, Loom was mentioned in an HN thread as a competing service. Its offering was certainly compelling in many ways, and had me weighing switching. If I didn't have such a giant library, I probably would have on the spot. Now, in a way, I'm thankful I don't have to, and I hope they integrate the positives that Loom had over Dropbox/Carousel. This is the first acquisition I've read about in a while that has me excited as a user.
17
uptown 16 hours ago 0 replies      
For anyone looking for a host-your-own photo solution, I highly recommend taking a look at Koken. I've been using it for awhile, and it's phenomenal.

http://koken.me/

18
smackfu 19 hours ago 1 reply      
Interesting that Loom was less expensive than Dropbox, even with the value add photo stuff.
19
seanmccann 17 hours ago 0 replies      
"couldnt be happier" don't people want to build companies anymore?
20
badusername 18 hours ago 2 replies      
Shut down an entire service, and roll over to the guys that say you will work on a service that was your competitor till today. Seems like people don't care much about their sweat and tears these days.
21
luser 19 hours ago 0 replies      
Happy for the guys who got acquired....BUT thinking as a consumer, this type of thing makes me very nervous about relying on teeny tiny SAAS businesses for anything long term.
22
nayefc 19 hours ago 1 reply      
iCloud photos work much better than Loom. Loom is buggy. And it will suck more at Dropbox, just like what happened to Mailbox (seriously, what has Mailbox been doing for 6 months after iOS 7 has been released?!).
23
bitsweet 19 hours ago 0 replies      
congrats to the entire Loom team
24
salberts 18 hours ago 0 replies      
It's great to see how photo storage solutions get faster, prettier and cheaper. Now it's time to make them smarter. Most people I know feel swamped with photos as there are no decent tools to clear this photo-mess.
25
lttlrck 18 hours ago 0 replies      
Upon exporting to Carousel:

"This app is in development mode and cannot accept more users. Contact the app developer and ask them to use the Dropbox API App Console to apply for production status."

26
debt 19 hours ago 0 replies      
Awesome. Loom should consider changing that landing video image though on their homepage. It just...something seems...not right. Or maybe make that iPad more prominent.

I don't know.

27
daanlo 19 hours ago 0 replies      
Sad to see a great product go, but happy to see it make its way into a product I also use :-)
28
antoni 19 hours ago 0 replies      
There is already an "Export to Carousel" option for logged users, which leads here: https://loom.com/migrate.
29
sunilkumarc 20 hours ago 1 reply      
So in total, how much space are we gonna get now?
22
British Pathé Puts Over 85,000 Historical Films on YouTube openculture.com
204 points by jamesbritt  20 hours ago   28 comments top 9
1
hf 20 hours ago 3 replies      
An impressive collection to be sure. Slightly hyperbolically, the British Pathé archive puts it thusly:

"This archive is a treasure trove unrivalled in historical and cultural significance that should never be forgotten."[0]

However, I am left wondering why "[u]ploading the films to YouTube seemed like the best way to make sure of that." Perhaps fittingly, there's no clear indication which licence, if any, is applicable.

What could've possibly impeded a parallel upload to the Internet Archive?

[0] https://britishpathe.wordpress.com/2014/04/17/british-pathe-...

2
Killah911 15 hours ago 0 replies      
While watching footage of when Hitler came to power, I got a pop up on YouTube to the effect of Obama wanting to take away guns & how I'd vote. It's nice they put it up on YouTube so the masses can see this amazing footage (I saw the Wright Brothers' flight for the first time there). But watching that ad pop up just drove home the point that we just traded humans for pigs to run our farm.
3
gdewilde 19 hours ago 3 replies      
Oh.. moar awesome than the patent database, most don't get that either... innovation, technology etc (?)

Everything from Flying cars...

https://www.youtube.com/watch?v=FNp_iO-2Jfg

...robotic car parks...

https://www.youtube.com/watch?v=f6-GxKtQ0e4

.. the tarring of roads...

https://www.youtube.com/watch?v=OvFdo3tjFTY#t=2m36s

To the imagination of science...

https://www.youtube.com/watch?v=8Z0LprRG2FQ

gravity powered generators..

https://www.youtube.com/watch?v=rxIRaJlTD4Y

something called wind energy(?)

https://www.youtube.com/watch?v=ifOnhGsWMmk

and the miracle of democracy...

https://www.youtube.com/watch?v=3IamKE4AQUQ

4
v1tyaz 10 hours ago 0 replies      
I don't understand the people commenting that YouTube is not a proper archival tool. Obviously. They're not deleting their own copies of these films, they're just making them available to the public in an easy to use manner. Criticism of this is totally misguided.
5
ithkuil 15 hours ago 0 replies      
Here's another historical film archive: http://europeanfilmgateway.eu/
6
maga 18 hours ago 0 replies      
I wish they came with the original commentaries that the press used at the time they first appeared. It would allow us to see those "news" clips through the eyes of contemporary viewers.
7
TuxLyn 17 hours ago 0 replies      
Very impressive collection indeed. Now let's wait another 40 years for a modern 1977-1990 collection ^_^
8
samstave 16 hours ago 1 reply      
.bash_aliases

dl-2-mp3() {
    # Download and save a youtube video, and extract the MP3 audio track
    youtube-dl -x -k --audio-format mp3 "$1"
}

alias ytdl=dl-2-mp3

9
dublinben 19 hours ago 1 reply      
It's too bad that this comes across as a marketing ploy. They're still charging for licenses to actually use any of this footage in any way. They haven't actually released this material under a Creative Commons or Public Domain license, so most of it is All Rights Reserved.

At least they don't have to pay for their own hosting now, to show off their video archive!

23
A dirt cheap spectrum analyzer with an RTL-SDR dongle hansvi.be
36 points by sizzle  9 hours ago   6 comments top
1
tzs 6 hours ago 1 reply      
Can someone briefly explain for those of us who are electronically challenged what the purpose of diodes in series is (D1 and D2)?

Googling on "diodes in series", I found some discussion about how you can do this if you need a reverse breakdown voltage that is higher than that of one diode, but aren't D1 and D2 always forward biased in this circuit?

24
Isotopic 256 jamesdonnelly.github.io
151 points by epenn  19 hours ago   34 comments top 16
1
Sharlin 18 hours ago 3 replies      
Stellar nucleosynthesis (somewhat simplified): http://newbrict.github.io/Fe26/
2
hablahaha 16 hours ago 1 reply      
Should you get a game over if you still have radioactive elements on the board? I think you should still be allowed to keep going, especially since you have fewer tiles and you are moving around with the assumption that things are going to start disappearing.
3
debt 11 hours ago 0 replies      
My jaw is on the floor. This. is. Awesome.
4
kremlin 16 hours ago 0 replies      
Very clever. When I realized why the shaky elements were disappearing (not having yet read the text above the game) I smiled.
5
aroman 17 hours ago 1 reply      
Very clever remix! I think it's too easy by default though -- beat it on the first try without any issue. Maybe increase the "win" element to 512?
6
elwell 14 hours ago 0 replies      
The fading elements is a nice touch to the game logic. It helps defend against simply mashing buttons (which works surprisingly well in the original).
7
drdaeman 14 hours ago 0 replies      
Playing for almost 2 hours without getting any further than a lone Sr-128, now seeing all those "too easy, beat that 3 times already" comments.

Uh. Did scientists come up with brain replacements already?

8
iLoch 18 hours ago 1 reply      
I don't know the technical name for the disappearing elements, but I'm assuming it's intentional. Interesting twist! I don't think I'm smart enough to play this one though, I'll stick with doge.
9
spcoll 16 hours ago 1 reply      
This is a great variant. I wonder if it would be possible to conceive a chemistry version that could potentially teach people about atomic bonding?

A bit easy though, I had no trouble beating it after a couple tries.

10
swift 11 hours ago 1 reply      
Definitely the best variant of Threes/2048 I've seen so far. Really nice work!
11
gus_massa 18 hours ago 1 reply      
There is a bug in the code (IE11?)

If the board has:

   4He  4He  --  16O   8Be  4He   4He  4He  2H
and you move up then you get

   4He  4He  4He  16O   2H   2H   4He  4He  --
(The middle 8Be just disappears.)

12
sarvagyavaish 15 hours ago 0 replies      
The numbers on the top left don't have any correlation with the elements, right?
13
malkia 16 hours ago 0 replies      
Not sure what I did... but got 710...
14
afffsdd 13 hours ago 0 replies      
Freaking Phosphorus
15
az0xff 18 hours ago 0 replies      
Just beat this game. Nice spin on things.
16
batmansbelt 18 hours ago 0 replies      
Pretty fucky that some things just disappear.
25
Seznam, a Czech search company, previews 3D maps mapy.cz
247 points by rplnt  1 day ago   84 comments top 32
1
lubos 22 hours ago 3 replies      
Honestly, I'm surprised to see Seznam on HN. I grew up on the Czech internet in the 90s, and Seznam.cz (or "Directory" in English) was huge for a long time until Google eventually beat them. The vibe I get here from the comments is as if Seznam.cz were some new hot company, while it is really a dying dinosaur like Yahoo.

Maps are not a core competency of this company. They are early Internet pioneers who have maintained a huge portfolio of various services for almost two decades. Maps is just another service they are working on to keep users from leaving them for Google/YouTube, Facebook, etc.

Btw, I spoke to the Seznam.cz founder briefly once at some business event in Slovakia back in 2000, when I was 17.

edit: their maps are created by Melown.com, see example https://www.melown.com/maps/

2
lars 22 hours ago 0 replies      
The Norwegian site Finn.no got 3D maps that looked exactly like this back in 2008. [0]

As the link explains, the technology originates from the Swedish air force, and was meant to guide missiles through urban landscapes. It was since commercialized for civilian uses by the company C3 Technologies.

This looks like it's exactly the same technology.

[0]: http://labs.finn.no/sesam-3d-map-3d-revolution-the-people/

3
bhouston 1 day ago 1 reply      
I think that given that Google already has 3D depth coverage from its street view machines [1], it should be possible to combine that data with some medium resolution overhead 3D scans to create something similar, and likely even higher quality at the street level.

I wonder why Google hasn't done it yet. I don't think there are any real technical limitations. It may be that getting it fast is hard and the usefulness from an end user perspective isn't there yet?

[1] http://gizmodo.com/depth-maps-hidden-in-google-street-view-c...

4
zk00006 1 day ago 2 replies      
Based on the posts, people think that seznam.cz is a startup and Google will buy it like in 3, 2, 1. This is complete nonsense. Seznam is far from a startup, and I am pretty sure their goal is not to get "only" acquired. Its mapping service is superior to Google's as far as the Czech Republic is concerned. Well done, guys!
5
suoloordi 23 hours ago 0 replies      
Is this different from Nokia's 3D Maps? This is Stockholm: http://here.com/59.3314885,18.0667682,18.9,344,59,3d.day
edit: I see this covers different regions in the Czech Republic, whereas Nokia covers some well-known cities all over the world.
6
bitL 3 hours ago 0 replies      
Congrats! Great job guys!

Just a few questions - what algorithm do you use for geometry simplification? Is it based on quadric error metrics edge collapses? How do you join tiles of different LODs? Any papers on reconstructing 3D from your drones?

7
fractalsea 23 hours ago 0 replies      
I find this very impressive. The fact that you can rotate arbitrarily and see correct textures applied to all surfaces of buildings/foliage is amazing.

Can anyone provide any insight into how this is done? Is there a dataset which specifies the detailed 3D layout of the earth? If so, how is it generated? Is there satellite imagery of all possible angles? Is this all automated, or is there a lot of manual work in doing all of this?

8
chris-at 1 day ago 1 reply      
9
helloiamvu 23 hours ago 0 replies      
Seznam is also working on 'Street View'. Check this out: https://scontent-b-lhr.xx.fbcdn.net/hphotos-prn1/l/t1.0-9/10...
10
antjanus 23 hours ago 0 replies      
Never in the time that I've been coming here would I have thought that Seznam would make it to HN. You should check out their tile search feature!

They experiment a TON, all the time.

11
robmcm 23 hours ago 0 replies      
I hate the use of the history API.

I don't want the back button to navigate the map!

12
kome 22 hours ago 0 replies      
Far better than Google, Bing and Apple maps. Nice work, Seznam.

Why doesn't Seznam exist in other European languages?

The Czech Republic is a small market, and if they focus just on the Czech Republic, their economies of scale will break down very soon. They need investment to keep the technology up to date, but if their market is so small, that becomes prohibitively expensive very quickly.

13
RankingMember 1 day ago 2 replies      
Very nice. I wonder where the source data (building textures, etc) came from.
14
Piskvorrr 23 hours ago 1 reply      
Why does the error message remind me of "This site is only accessible in IE5. Get it [here]"?

In other words, we seem to be rapidly drifting back into the Bad Old Days, when sites were made for a single browser? Not using Firefox? You're SOL. Not using Chrome? You're SOL elsewhere.

15
dharma1 23 hours ago 0 replies      
Same stuff as Apple Maps and Nokia 3D Maps - low-flying planes and lots of photos. Apple bought a Swedish company from Saab to do this.

Nice to see it can be done with a single UAV and camera. Is there any open source software doing this?

16
tomw1808 5 hours ago 0 replies      
Instantly want to play SimCity again. It can't be just me.
17
_mikz 1 day ago 1 reply      
Vypadá to skvěle (Czech for "it looks great"). Looking great.
18
felixrieseberg 20 hours ago 0 replies      
I actually think that this is slightly less detailed than the Bing Maps Preview, where I could see my friend's car parked in front of his research institute - I'm impressed that it's running in a browser though.

http://www.bing.com/dev/en-us/maps-preview-app

19
hisham_hm 17 hours ago 0 replies      
Unlike Google 3D Maps, this actually works on my computer!
20
SchizoDuckie 19 hours ago 0 replies      
Someone please build a next-level Command & Conquer on top of this. That would be wicked.
21
tarikozket 17 hours ago 0 replies      
Apparently we will see more real-world cities in future games. Voila!
22
tristanb 20 hours ago 1 reply      
When is someone going to put maps like this into a flight simulator?
23
aves 16 hours ago 0 replies      
The city reminds me of City 17 in Half Life 2.
24
SchizoDuckie 1 day ago 1 reply      
Sweet holy mamajama.

Have they actually scanned this? Or are they generating it from Google Maps imagery?

25
ReshNesh 22 hours ago 0 replies      
That's where I run. Very cool
26
vb1977 21 hours ago 0 replies      
The model is calculated from aerial photographs. The software for this was made by Melown Maps, a Czech computer vision company. See their website http://www.melown.com/maps for more models.
27
evoloution 1 day ago 5 replies      
Would Google try to buy the startup, hire the developers, or just reinvent the wheel in-house?
28
matiasb 20 hours ago 0 replies      
Cool!
29
Almad 1 day ago 0 replies      
Thumbs up!
30
dermatologia 23 hours ago 0 replies      
me gusta ("I like it")
31
toddkazakov 23 hours ago 0 replies      
awesome
32
secfirstmd 1 day ago 2 replies      
Cool, I smell a buyout in 5, 4, 3, 2, 1... :)

I like the idea of bringing contours back into maps once again. The move to flat satellite imagery and Google Maps-style stuff has meant that the ability to navigate based on the most efficient effort (e.g. across contours, not just A to B) is rapidly getting lost.

26
Scientists Find an Earth Twin, or Perhaps a Cousin nytimes.com
47 points by joewee  11 hours ago   31 comments top 6
1
kijin 8 hours ago 2 replies      
Kepler 186 is an M1V red dwarf [1]. We don't know how old it is, but red dwarfs tend to last tens of billions to trillions of years. (No red dwarf is known to have died of natural causes since the beginning of the Universe.) So this star could be much older than our Sun. The low metallicity of the star also supports the hypothesis that it is older than our Sun.

Which means that if there is life on Kepler 186f, it could be billions of years ahead of us. Would that have been a long enough time scale for an intelligent species to emerge, civilize, and develop a way to traverse the 500 light years between us and them? Or did the dim light (less UV ~ less mutation) and lower availability of heavy elements (less iron) in the star system hamper the evolution of life and/or civilization?

Will there ever be an answer to questions like this, perhaps in a thousand years, a million years, or even a billion years?

[1] https://en.wikipedia.org/wiki/Kepler-186

2
chjj 7 hours ago 1 reply      
I'm surprised this article didn't bother mentioning Gliese 581g: https://en.wikipedia.org/wiki/Gliese_581g
3
wavesounds 6 hours ago 3 replies      
What is physically stopping us from actually seeing this planet with visible light? Is it that we can't build a big enough telescope?

Also, would it be possible to start broadcasting radio waves or maybe some kind of laser towards this planet, in case there's something there that can respond to us a thousand years from now?
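
On the first question, a rough back-of-the-envelope answer is that aperture really is the limit for resolving it as a disk. Treating Kepler-186f as roughly Earth-sized at about 500 light years, the Rayleigh criterion gives:

    angular size             ~ 1.3e7 m / 4.7e18 m           ~ 3e-12 rad
    aperture needed, 550 nm  ~ 1.22 * 5.5e-7 m / 3e-12 rad  ~ 2e5 m

That is a mirror (or interferometer baseline) on the order of hundreds of kilometres just to see it as more than a point. Picking its reflected light out next to its star is a separate contrast problem on top of that.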

4
rhizome 9 hours ago 3 replies      
Sad, but the first thing my sci-fi mind thought of was whether Earth would be able to establish relations with a perhaps-society there without establishing military superiority first, just starting off with a Cold War.
5
KamiCrit 8 hours ago 2 replies      
I wonder what Kepler 186f thinks of Earth?
6
jamesfranco 7 hours ago 0 replies      
OMG they find one of these all the time.
27
QEMU 2.0.0 Released gmane.org
145 points by sciurus  19 hours ago   69 comments top 6
1
doktrin 17 hours ago 4 replies      
QEMU looks interesting. I have a few questions about its intended use case.

1. My understanding is that the primary use case of QEMU is for server virtualization on Linux hosts leveraging KVM. Is there a use-case for QEMU on non-Linux hosts?

2. I have read that QEMU excels in server virtualization but lags behind VirtualBox in desktop virtualization. Is this still the case? [1]

3. Is it considered "slow" for cross-architecture virtualization? [1]

[1] http://superuser.com/questions/447293/does-qemus-performance...

2
awda 17 hours ago 4 replies      
Anyone know if there are good tools for migrating a VMware .vmx and accompanying files to a QEMU machine? Or is this more complicated than I imagine? I'd like to ditch the VMware crap modules that break with every kernel upgrade in favor of KVM.
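
For the disk images themselves, the usual route is qemu-img, which can convert VMDK files directly (the .vmx settings have to be recreated as QEMU/libvirt options by hand, or with a tool like virt-v2v), along the lines of:

    qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2
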
3
threeio 17 hours ago 1 reply      
Completely read this as QEMM and flashed back to DOS days. :)
4
djschnei 19 hours ago 11 replies      
Why would you use QEMU over virtualbox? (Serious question)
5
ausjke 18 hours ago 0 replies      
This is simply awesome. Congratulations!
6
bjz_ 19 hours ago 8 replies      
What is QEMU and why do I care? It would be helpful if that had been in the first paragraph. I am going to dig deeper, but I'm sure a huge number of people didn't.
28
PourOver: A library for simple, fast filtering and sorting in the browser nytimes.github.io
186 points by jsvine  22 hours ago   48 comments top 22
1
simonsarris 20 hours ago 1 reply      
Half the comments so far are looking for a non-code demo, and I imagine they mean something visual, so here's a bare-bones visualized version of the Basic PourOver sample code:

http://gojs.net/temp/pourover.html

It just takes the queries and uses the resulting data to make some nodes in GoJS (Disclaimer: a Diagramming library I develop. Not free, but easy to set up with this kind of data and see stuff fast).

If I had more time I'd make it prettier. The results are just visual representations of the data results getting filtered. It's very easy to take PourOver's example collections and data-bind some stuff to them (the color of the nodes is data-bound to monster gender, etc). I'm sure it can't be hard to do the same in other data-bound visualization libraries.

This is very cool. I'll try to make a much prettier example tonight.

2
esmooov 20 hours ago 1 reply      
Hi, all.

A lot of folks are asking for a demo and, you're right. I should have included one. My apologies. I'll get to working on one as soon as I can.

In the meantime, I encourage readers to check out the source for http://www.nytimes.com/interactive/2014/02/02/fashion/red-ca... That's probably the clearest "demo" of PourOver at the moment.

More to come!

3
dmix 22 hours ago 0 replies      
This page needs a giant "demo" button near the top. The examples are all code.
4
danso 21 hours ago 3 replies      
So I visited one of the PourOver examples, this Academy Awards fashion feature published earlier this year:

http://www.nytimes.com/interactive/2014/02/02/fashion/red-ca...

I opened the dev tools to inspect the traffic and code, and this pops up in the console:

      [ASCII art of the NYTimes logo]
      NYTimes.com: All the code that's fit to printf()
      We're hiring: http://nytimes.com/careers
....You sneaky audience-targeting bastards

5
kylebrown 19 hours ago 0 replies      
Noice! Btw, the docs page is acting funky on an iPad (iOS 7.1). The layout seems to be alternating between mobile and desktop with each pan/scroll event. [ps: only happens in landscape orientation. portrait is unaffected]
6
JangoSteve 21 hours ago 1 reply      
It seems similar to our Dynatable plugin [1], which is basically the functionality of this plugin with some additional table-read/write functions included. The main difference being that this library depends on underscore, while Dynatable depends on jQuery (which is mainly used for its browser compatibility functions).

Given both libraries' emphasis on speed, it looks like I have something to benchmark against!

[1] http://www.dynatable.com

7
barkingcat 21 hours ago 0 replies      
More details at http://open.blogs.nytimes.com/2014/04/16/introducing-pourove...

There are a few links to projects at the NYT that have used these two libraries.

8
nathanhammond 20 hours ago 1 reply      
I much prefer a basic construct that makes it simple to do filtering and sorting and that can be easily extended to any complexity. Sorting and filtering are really nothing more than set manipulation (which they state themselves), so with simple data binding it becomes a trivial exercise to build an impressive client-side search.

In Ember that might look like this:

    Ember.ArrayController.extend({
        filterA: Ember.computed.filter('fieldName1', function comparator() {}),
        filterB: Ember.computed.filter('fieldName2', function comparator() {}),
        joined: Ember.computed.union('filterA', 'filterB'),

        filtered: Ember.computed.uniq('joined'),
        sorted: Ember.computed.sort(function comparator() {})
    });
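
For comparison, the same filter/union/uniq/sort pipeline can be written directly against underscore, which PourOver itself depends on. This is just a plain-JS sketch of the set-manipulation idea with made-up sample data, not PourOver's API:

    var _ = require('underscore');  // or the global _ in a browser

    var items = [
      { name: 'imp',    gender: 'm', hp: 11 },
      { name: 'sprite', gender: 'f', hp: 7 },
      { name: 'ogre',   gender: 'm', hp: 23 }
    ];

    var filterA = _.filter(items, function (m) { return m.gender === 'f'; });
    var filterB = _.filter(items, function (m) { return m.hp > 10; });

    // Union of the two filters, de-duplicated by identity, then sorted.
    var result = _.sortBy(_.uniq(_.union(filterA, filterB)), 'hp');

    console.log(_.pluck(result, 'name'));  // [ 'sprite', 'imp', 'ogre' ]

PourOver's selling point, as I understand it, is that it caches these sets and updates them incrementally rather than recomputing everything on each query.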

9
mbesto 11 hours ago 0 replies      
Is this basically what Google Refine[0] does with their facet filters? (but obviously open source and allowing anyone to build their own platform)

[0] - http://openrefine.org/

10
nashequilibrium 19 hours ago 0 replies      
If you're going to handle that many objects in the browser, wouldn't you use something like PouchDB? http://pouchdb.com/

Also "Pagination strategies with PouchDB" http://pouchdb.com/2014/04/14/pagination-strategies-with-pou...

11
alixaxel 3 hours ago 0 replies      
Having recently been exposed to the benefits (especially performance-wise) of lodash vs. underscore, I wonder why it depends on the latter.
12
paulcnichols 21 hours ago 1 reply      
Reminds me of crossfilter (http://square.github.io/crossfilter/) by square. It has a killer demo, however.
13
drv 16 hours ago 1 reply      
I wonder if the misspelling is intentional. (The idiom is "pore over".)
14
rpedela 19 hours ago 1 reply      
Off-topic, but related.

Does anyone know of any JS libraries that allow for Excel-like interaction? However I am looking for something that just implements the presentation layer.

The problem I am running into is that the libraries that do exist conflate the presentation and data layers and try to do everything for you. In other words, they make the assumption that all data will be downloaded to the client and then manipulated on the client. These libraries do not fit my use case.

15
earless1 18 hours ago 0 replies      
I've used the https://mixitup.kunkalabs.com/ library for similar functionality in the past. I am not sure how well it works with hundreds of thousands of items, but it does work well for the 600ish items that I am using it for.
16
grumblestumble 6 hours ago 0 replies      
It would be a great move to isolate the Views/UI part of this out from the rest of the code base, which would make it more usable inside other MV* frameworks like Angular or Ember.
17
Neff 20 hours ago 0 replies      
And here I thought I would be reading about the App.net PourOver posting service[0], which the NY Times Opinion account uses for posting[1]

[0]: https://directory.app.net/app/255/pourover/
[1]: https://alpha.app.net/nytopinion

18
bestest 21 hours ago 1 reply      
Benchmarks comparing PourOver to Backbone would be nice. Anyone?
19
reshambabble 16 hours ago 0 replies      
This is awesome, and exactly the type of thing I've been looking for. Thanks so much for sharing! Can't wait to try this out.
20
daemonk 18 hours ago 0 replies      
Similar to crossfilter?
21
b0z0 20 hours ago 1 reply      
I love how New York Times has the coolest devs. Anyone here who works there? What's it like?
22
anarchy8 22 hours ago 0 replies      
Anyone have a demo?
29
An Update on HN Comments
293 points by sama  16 hours ago   237 comments top 41
1
bravura 15 hours ago 6 replies      
I appreciate the changes. But while we're on the topic, could I throw out a thought?

It should be easier for a late-arriver on a post to add a useful comment, and have it be promoted. Have you considered using randomization to adjust the score of certain comments?

HN comments seem to exhibit a rich-get-richer phenomenon. One early comment that is highly rated can dominate the top of the thread. (I will note that, qualitatively, this doesn't seem as bad as a few months ago.)

The problem with this approach is that late commenters are less likely to be able to meaningfully contribute to a discussion, because their comment is likely to be buried.

One interesting thing about the way the FB feed appears to work is that they use randomization to test the signal strength of new posts.

Have you considered using randomization in where to display a comment? By adding variation, you should be able to capture more information from voters about the proper eventual location for a comment. It also means more variation is presented to people who are monitoring a post's comments.
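
For what it's worth, here is one minimal sketch of the kind of exploration being suggested (purely illustrative -- not HN's or Facebook's actual ranking, and the epsilon and "young comment" thresholds are made up):

    // Epsilon-greedy twist on comment ranking: usually sort by score, but on a
    // small fraction of page views promote a random young comment so it can
    // collect votes and find its true position.
    var EPSILON = 0.1;  // 10% of renders include an exploration slot

    function rankComments(comments, now) {
      var ranked = comments.slice().sort(function (a, b) { return b.score - a.score; });
      var young = ranked.filter(function (c) {
        return now - c.postedAt < 3600e3 && c.votes < 3;  // < 1 hour old, few votes
      });
      if (young.length && Math.random() < EPSILON) {
        var pick = young[Math.floor(Math.random() * young.length)];
        ranked.splice(ranked.indexOf(pick), 1);  // pull it out of its normal spot...
        ranked.splice(1, 0, pick);               // ...and show it near the top this render
      }
      return ranked;
    }

    var now = Date.now();
    var demo = rankComments([
      { id: 'early', score: 40, votes: 42, postedAt: now - 8 * 3600e3 },
      { id: 'mid',   score: 10, votes: 12, postedAt: now - 4 * 3600e3 },
      { id: 'late',  score: 1,  votes: 0,  postedAt: now - 10 * 60e3 }
    ], now);
    console.log(demo.map(function (c) { return c.id; }));  // occasionally ['early','late','mid']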

2
alain94040 16 hours ago 10 replies      
I'd love to be able to fold a nested conversation once I think that particular branch is going nowhere. HN should treat the folding as a signal similar to a downvote on that particular sub-thread. I often don't think any particular comment warrants a downvote, so I have no way to tell HN that the thread should be pushed back.

Plus everyone has been asking for a way to collapse sub-comments (and many plugins do it already).

3
jseliger 16 hours ago 5 replies      
dang and kogir tuned the algorithms to make some downvotes more powerful. We've been monitoring the effects of this change, and it appears to be reducing toxic comments.

That's interesting to me because I find myself downvoting much more often than I used to. But the comments I downvote are not that often toxic in the sense of being nasty. They're more often low-content or low-value comments that don't add to the conversation.

The jerks and trolls are out there, but I'm not positive they're the most pernicious problem.

4
rdl 16 hours ago 6 replies      
I wish there were multiple kinds of downvotes. "This is actually bad" (spam, etc.) vs. merely useless, vs. factually incorrect but reasonably presented.

I mostly only downvote spam or abuse; I try to ignore "no-op" comments, and would rather reply to someone with information about why they might be wrong vs. downvote, but I'm not sure if this is universal.

5
codegeek 16 hours ago 3 replies      
"make some downvotes more powerful."

Yes, this will be great. Any comment that has personal attacks, abusive language, racial slurs, trolling, off-topic self-promotion/marketing, etc. should allow downvotes to be more powerful. Usually, comments like these get a lot of downvotes pretty quickly, but I am sure there are a few who upvote those comments as well for their own reasons.

Maybe comments like those should not be allowed upvotes once they reach a certain number of downvotes? Also, not sure if you guys already do this, but really bad comments should be killed automatically once downvoted a certain number of times within a short time span?

Now, when it comes to unpopular comments which are not necessarily outright bad, I am sure those are tough to handle algorithmically, because how do you deal with sudden upvotes and downvotes at the same time?

6
minimaxir 16 hours ago 9 replies      
While on the subject of HN comments, I have a request: could the "avg" score for a user be readdressed?

The avg score is the average number of points from the previous X comments a user has made. However, this disincentivizes users from posting in new threads which are unlikely to receive upvotes. I've lessened my own commenting in new threads because of this.

7
stormbrew 15 hours ago 1 reply      
Something that I've been finding lately is that replies to my posts have been downvoted when to me they're fairly reasonable disagreements with what I said. I've actually taken to upvoting replies to me that go grey a lot of the time, even though I don't particularly agree with what they're saying.

To me it seems like a lot more stuff is getting downvoted than there used to be, and I'm not sure I see a meaningful pattern in the places where I see it happening.

8
biot 15 hours ago 3 replies      
Will there ever be the ability to upvote a story without it going into your "Saved stories" section? 99% of the time I upvote a story it's because I want to save it for future reference. I'd like the ability to upvote (and downvote) stories based on whether they're HN-worthy without it impacting the "Saved stories" section.
9
chimeracoder 15 hours ago 2 replies      
> The majority of HN users are thoughtful and nice. It's clear from the data that they reliably downvote jerks and trolls

I have to say, I'm a bit confused now. Aren't "trolls" the sorts of comments that are supposed to be flagged[0]? (I understand that spam is meant to be flagged, but HN gets very few true spam comments[1]).

What is the difference between downvoting and flagging for comments specifically - and more importantly, what comments should be downvoted?

I've read conflicting arguments (both sides quoting pg, incidentally) that disagree on whether or not downvotes should be used to signify disagreement, or whether one should downvote comments that are on-topic but have little substance (ie, most one-liners).

[0] I guess this depends on your definition of "troll", but I think a well-executed troll is similar to Poe's law: the reader can't tell whether the commenter is being flippant/rude or sincere. In other words, it's just enough to bait someone into responding, without realizing immediately that it's a worthless comment.

[1] eg, ads for substances one ingests to change the size of a particular masculine organ, or (less blatantly) direct promotions for off-topic products.

10
kposehn 15 hours ago 4 replies      
I'm glad to hear that these changes seem to be working. One thing I am (slightly) concerned about is the occasional funny/witty/hilarious comment that will get downvoted into oblivion rapidly. It isn't necessarily that it is a troll posting, but maybe someone injecting a bit of humor.

That said, I do understand if the mods/community do not feel that witticisms have as great an importance on HN - yes, seriously - so this is not a criticism, just an observation.

11
mbillie1 15 hours ago 1 reply      
> The first is posting feedback in the threads about what's good and bad for HN comments. Right now, dang is the only one doing this, but other moderators may in the future.

I've seen dang do this and I think it's actually quite effective. I'd love to see more of this.

12
chrisBob 42 minutes ago 0 replies      
The biggest problem I see is that the combination of a threaded discussion and the strong ranking provides an incentive for replying to another comment even if a new comment would be more appropriate.

This, for example, is much more likely to be buried than if I replied a few comments down on the thread from bravura.

13
Serow225 16 hours ago 1 reply      
Dang and friends, any chance of tweaking the layout so that it's not so easy to accidentally click the downvote button when using a mobile browser? This is commonly reported. Thanks!
14
tedks 5 hours ago 0 replies      
>(and specifically, they don't silence minority groups -- we've looked into this)

How have you looked into this, and what have the results been?

What efforts are you going to take to ensure it stays true in the future?

There are other comments asking these questions that have so far not been answered; it would be good to answer them. It's very unsettling when people (primarily from a privileged/majority standpoint) proclaim that things "don't silence minority groups" and handwave the justification.

In general I've found HN to be much more positive towards feminism in particular than similar communities like Reddit or others that I won't name, but the tech industry has large issues in this area and it's surprising to me that this would be the case.

In particular, it seems likely to me that HN will selectively not-silence minority voices that tend to agree with the status quo or pander to majority voices. I'd be surprised if your analysis accounted for that, but I'd be very, very happy to be wrong.

15
maaaats 16 hours ago 0 replies      
I like the new openness.
16
joshlegs 14 hours ago 0 replies      
Wow. I am overly happy that you guys have figured out a way to give commenting feedback. I had an account way back that was shadowbanned for I never knew what reason. Still don't. I feel like if this system had been implemented back then, I would have had a better idea of what was wrong with what I said.

Also, I'm pretty sure you've found the secrets to good Internet moderation. So many forums went off course because of ban-happy moderators that didn't want to actually take the time to moderate the community, instead just banhammering people. Kudos to you guys.

17
specialk 15 hours ago 1 reply      
I find the idea of commenters with higher karma having more powerful downvotes slightly disconcerting. My fear is that if people downvote comments that are well-meaning and relevant but whose content they disagree with, we will only ever see one train of thought rise to the top of comment threads.

This could start a vicious cycle where voting cabals of power users form. For example, if Idea X becomes popular among some members of HN, they will always be able to steer the discussion toward Idea X or downvote a competing, valid Idea Y into oblivion. Comment readers could be converted to Idea X, as it is always appearing at the top of relevant comment threads. So now the voting cabal has even more members, growing the dislike of Idea Y. The cycle then repeats. The discussion is then steered over time by the thoughts of a select few power users.

Maybe this is just the natural order of things and I'm subconsciously afraid of change. Thoughts?

18
olalonde 5 hours ago 0 replies      
I know it would be a pretty big experiment both technically and conceptually, but I will propose it just in case.

I have noticed that usernames might influence the way I vote. What if usernames were not displayed in comments? Now this leads to two problems: 1) it makes it hard to follow who replied to what in threads 2) it makes it more tempting to post bad comments given the lack of accountability. I think the first problem could be solved by assigning users a per-submission temporary username picked at random from a name/word list. The second problem could be solved by linking those random usernames to the actual profile page of who posted (just like HN currently does). It wouldn't stop deliberate attempts at up/down voting specific users, but it would remove the unintentional bias.
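
A sketch of how the per-submission pseudonyms could be derived deterministically, so the same user keeps one name within a thread but gets a different one elsewhere (illustrative only; the word list and hash choice are arbitrary):

    var crypto = require('crypto');
    var WORDS = ['falcon', 'juniper', 'quartz', 'meadow', 'ember', 'harbor'];

    function pseudonym(userId, submissionId) {
      // Hash the (user, submission) pair so the mapping is stable but opaque.
      var h = crypto.createHash('sha256').update(userId + ':' + submissionId).digest();
      return WORDS[h[0] % WORDS.length] + '-' + h.readUInt16BE(1);
    }

    // Same user, same thread -> same name; same user, different thread -> different name.
    console.log(pseudonym('alice', '7595605'));
    console.log(pseudonym('alice', '7595823'));

The profile link behind the pseudonym could still point at the real account, as suggested, preserving accountability.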

19
User8712 15 hours ago 1 reply      
Are comments ever deleted or hidden from view completely? I've been reading HN for a year or two, and I've never noticed an issue with comment quality. In topics with a larger number of comments, you get one or two heavily downvoted posts, but that's it.

My question, is there an issue with comments I'm not seeing? Do the popular topics on the homepage have dozens of spam or troll comments that are pruned out constantly, so I don't notice the problem? Or is the issue those 1 or 2 downvoted comments I mentioned earlier?

HN receives a small number of comments, so fine tuning algorithms isn't a big deal in my opinion. This isn't Reddit, where the number one post right now has 4,000 comments. That presents a lot of complications, since they need to try and cycle new comments so they all receive some visibility, allowing them a chance to rise if they're of high quality. On HN, you have 20 comments, or 50 comments, so regardless of the sorting, nearly everything gets read. As long as HN generally sorts comments, they're fine.

20
Thrymr 15 hours ago 1 reply      
> posting feedback in the threads about what's good and bad for HN comments.

Am I the only one who thinks that posting more meta-discussion directly in comments reduces the overall quality rather than increases it?

Maybe a downvote should come with a chance to add an explanation that can be seen on a user's page or on a "meta" page, but not dilute the discussion itself.

21
aaronetz 13 hours ago 0 replies      
I have noticed that people oftentimes downvote because of disagreement, even when the comment seems to be okay (to my eyes at least). How about eliminating the downvote, leaving only the "flag" which makes it clearer that it should not be used for disagreement? It would also make comments more consistent with top-level stories (which I sometimes think of as "root-level comments".)
22
camus2 15 hours ago 1 reply      
In my opinion, just like on SO, downvotes should actually cost karma. Yes, sometimes some messages are just bad and trolling, but sometimes people get downvoted just because they don't "go with the flow" and have unpopular ideas. So if a downvote costs 2, the downvoter should lose 1, for instance. And please don't downvote me just because you disagree.

EDIT: just proved my point -- why am I being downvoted? It was a simple suggestion, yet someone downvoted me, just because he can and it's free. I was not trolling or anything... I just wanted to participate in the debate.

23
Bahamut 15 hours ago 1 reply      
I've seen plenty of downvotes from people who didn't understand what was being said or who just wanted to assert their opinions. To be honest, that partly makes me not want to contribute thoughts, since they may be unpopular or may not jibe with the hive mentality, and it has gotten me to visit the site less for the comments, especially with the recent tweaks.

It'd be nice if something could be figured out to discourage this behavior by reducing the value of such downvotes, especially if a comment has not had a response to explain the downvote.

24
mck- 11 hours ago 0 replies      
May I also suggest an update to the flamewar trigger algorithm? Or at least this is what led me to believe it is a flamewar trigger [1]

Oftentimes a post is doing really well [2], accumulating a dozen upvotes within 30 minutes and jumping up the front page, but then, because of two comments, it gets penalized to the third page. I can see it being triggered when there are 40 comments, but there seems to be an awfully low first trigger?

[1] https://news.ycombinator.com/item?id=7204766

[2] https://news.ycombinator.com/item?id=7578670

25
zatkin 11 hours ago 3 replies      
I recently joined Hacker News, and actually read through the guidelines before making an account. If there was one area where I feel that anything convinced me to be smart about what I post, it would be those guidelines.
26
lettergram 15 hours ago 0 replies      
"We believe this has made the comment scores and rankings better reflect the community."

It would be interesting to see how you could actually change the community via comment filtering.

For example, if some individuals are always posting negative comments and were previously not silenced. I wonder if now that they are being silenced if they would leave the community entirely, just keep posting and ignoring the results, or change their comments to fit the community.

27
gautambay 6 hours ago 0 replies      
>> and specifically, they don't silence minority groups -- we've looked into this

Curious to learn how this analysis was conducted, e.g. how does HN determine which users belong to minority groups?

28
abdullahkhalids 15 hours ago 0 replies      
It would be interesting if you published stats for each user: how often they upvote and downvote compared to the average for starters.

It would also be useful to know how often other people upvote (downvote) the comments I upvote (downvote).

These stats should only be privately viewable.

29
ballard 10 hours ago 0 replies      
Definitely gotta give you guys a standing ovation for yeoman's work.
30
onmydesk 14 hours ago 3 replies      
"We believe this has made the comment scores and rankings better reflect the community."

Is that desirable? A better debate surely entails more than one opinion. I also don't know what a 'jerk' is -- someone that disagrees with the groupthink?

I just don't think it's that big a problem. But that's just one opinion that might differ from the collective and therefore must have no merit? An odd place. Over-engineering! To be expected, I suppose.

31
darkstar999 13 hours ago 1 reply      
When (if ever) do I get a downvote button?
32
mfrommil 15 hours ago 0 replies      
I've always thought of upvote/downvote as a "thumbs up" or "thumbs down" - do I like your comment?

Sounds like the new algorithm penalizes disrespectful/spammy comments, rather than the "difference in opinion" comments (which is good). Could a 3rd option be added to differentiate this, though? Have an option for upvote, downvote, and mark as spam (I'm thinking a "no" symbol).

33
dkarapetyan 15 hours ago 0 replies      
Awesome. Keep up the good work. I am definitely enjoying the new HN much more. The quality of articles is way up and the comment noise is way down.
34
brudgers 16 hours ago 0 replies      
It might make sense to increase the amount of time in which a negatively scored comment can be edited or deleted.
35
robobro 15 hours ago 0 replies      
Thanks, guys - didn't come to say anything more
36
bertil 15 hours ago 0 replies      
> specifically, they don't silence minority groups -- we've looked into this

I would love to have more details about that: what do you define as minority, and how do you measure silencing.

37
Igglyboo 10 hours ago 0 replies      
Could we please get collapsible comments?
38
borat4prez 16 hours ago 0 replies      
Can I use the new HN comments algorithm on my new website? :)
39
darksim905 8 hours ago 0 replies      
Wait, you can downvote?
40
larrys 15 hours ago 1 reply      
"It's clear from the data that they reliably downvote jerks and trolls"

Most people know what a jerk is. Perhaps though you (and others) could define what a troll is for the purpose of interpreting this statement. (Of course I know the online definition [1] but think that there seems to be much latitude in "extraneous, or off-topic messages" or "starting arguments".)

Specifically also from [1]:

"Application of the term troll is subjective. Some readers may characterize a post as trolling, while others may regard the same post as a legitimate contribution to the discussion, even if controversial."

While, as mentioned, I know what a jerk is, I can also very easily see someone throwing out "troll" to stifle someone else in more or less a parental way. That is, to dismiss something as simply not important or not even worthy of discussion.

[1] http://en.wikipedia.org/wiki/Troll_%28Internet%29

41
pearjuice 16 hours ago 3 replies      
Can anyone explain to me how this is not giving the common denominator even more power? At this point, unless you extensively agree with the majority of the echo chamber, I doubt you will be able to have any impact on discussions.

Every thread is a rehearsal of the same opinions at the top over and over, while non-fitting opinions float to the bottom. In turn, they get less "downvote power", so they stay low and can't lift their peers up. I am not saying that the current flow of discussion is bad, I am just saying that participation is flawed.

We are simply in a system where you get rewarded for fitting in with the masses, and you get more power once you have been accepted into the hive mind. A circular reference, at some point.

30
Consul, a new tool for service discovery and configuration hashicorp.com
114 points by BummerCloud  20 hours ago   55 comments top 19
1
samstokes 19 hours ago 2 replies      
Registered services and nodes can be queried using both a DNS interface as well as an HTTP interface.

This is very cool. Integrating with a name resolution protocol that every existing programmer and stack knows how to use (often without even thinking about it) should lead to some magical "just works" moments.
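
As a sketch of what that "just works" integration could look like from application code: if the host's resolver forwards *.consul queries to the local agent (its DNS endpoint defaults to port 8600, if I'm reading the docs right), a plain SRV lookup is enough. The "web" service name here is hypothetical:

    var dns = require('dns');

    dns.resolveSrv('web.service.consul', function (err, records) {
      if (err) throw err;
      // Each SRV record names a node and port for one instance of the service.
      records.forEach(function (r) {
        console.log('web instance at', r.name + ':' + r.port);
      });
    });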

2
Loic 3 hours ago 0 replies      
Short question: Can I define the IP of the service in the service definition?

From the service definition[0] it looks like the IP is always the IP of the node hosting `/etc/consul.d/*` files. I am thinking about it in a scenario where each service (running in a container) is getting an IP address on a private network which is not the IP of the node.

[0]: http://www.consul.io/docs/agent/services.html

Update: An external service is possible: http://www.consul.io/docs/guides/external.html
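
For reference, the external-services guide linked above works by registering a node/address pair directly against the catalog HTTP endpoint (/v1/catalog/register) rather than through the local agent's service files. The payload below is a sketch from memory with hypothetical values; the exact field names may differ from the current docs:

    {
      "Datacenter": "dc1",
      "Node": "db-external",
      "Address": "10.0.5.12",
      "Service": { "Service": "postgres", "Port": 5432 }
    }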

3
stormbrew 15 hours ago 2 replies      
So I'm mostly curious why this isn't just basically serf 2.0. Looking at serf, I never really felt like it had much use in the basic form it took, with no ability to advertise extra details about the nodes in a dynamic fashion. Consul seems to build onto serf the things that serf needed to become really useful, so it seems more like a successor to serf than a parallel project.

It seems like the right thing to do here would be to take the lessons of building consul into making serf something more like a library on which to build other things rather than a service in its own right.

4
noelwelsh 17 hours ago 0 replies      
Looks like a very cool tool -- could replace Zookeeper with saner admin requirements -- but I'm more interested in the tech. AP systems (such as Serf, on which Consul is built) have many advantages and I think we're only just beginning to see their adoption. I believe CRDTs are the missing ingredient to restore sanity to inconsistent data. Add that and I can see a lot more such systems being deployed in the future (and particularly in mine :-)
5
avitzurel 19 hours ago 0 replies      
Those guys are machines.

Usually when people release open source software, the documentation is lacking, there's no website etc... those guys absolutely nail it every single time.

Kudos for them, really!

6
addisonj 19 hours ago 0 replies      
Very impressed.

This coalesces a lot of different ideas together into what seems to be a really tight package to solve hard problems. In looking around at what most companies are doing, even startupy types, architectures are becoming more distributed and a (hopefully) solid tool for discovery and configuration seems like a big step in the right direction.

7
hardwaresofton 18 hours ago 1 reply      
This is really awesome, distributed system techniques in the real world. I'm really jealous of what they've managed to build.

I was planning to make a tool like this (smaller scale, one machine), and this will certainly serve as a good guide on how to do it right (or whether I should even bother at all).

I can't find a trace of a standard/included slick web interface for managing the clusters and agents -- are they leaving this up to a 3rd party (by just providing the HTTP API and seeing what people will do with it)? Is that a good idea?

8
nemothekid 12 hours ago 0 replies      
I'm VERY impressed, even more impressed by the fact that it speaks DNS. I do wish, however, that it came with a "driver" option rather than running a Consul client (or even just a SkyDNS-like HTTP option, although I'm unsure how you would manage membership). That way you could just "include" Consul in your Python/Ruby/Go application, and not have to worry about adding another service to your Chef/Puppet config and running yet another service.
9
opendais 19 hours ago 1 reply      
This is slightly off topic, but I'm curious why none of the service discovery tools run off of something like Cassandra as the datastore?
10
dantiberian 19 hours ago 2 replies      
How does this differ from http://www.serfdom.io/, another HashiCorp product?
11
igor47 7 hours ago 1 reply      
i am constantly impressed at the hashicorp guys, who continue to release great tools. they actually released serf on the same day as we released nerve and synapse, which comprise airbnb's service registration and discovery platform, smartstack. see https://github.com/airbnb/nerve and https://github.com/airbnb/synapse

that said, as i wrote in my blog post on service discovery ( http://nerds.airbnb.com/smartstack-service-discovery-cloud/ ), DNS does not make for the greatest interface to service discovery because many apps and libraries cache DNS lookups.

an http interface might be safer, but then you have to build a connector for this into every one of your apps.

i still feel that smartstack is a better approach because it is transparent. haproxy also provides us with great introspection for what's happening in the infrastructure -- who is talking to whom. we can analyse this both in our logs via logstash and in real-time using datadog's haproxy monitoring integration, and it's been invaluable.

however, this definitely deserves a look if you're interested in, for instance, load-balancing UDP traffic

12
MechanicalTwerk 19 hours ago 1 reply      
Seriously, who does design for HashiCorp? Their site designs, though similar, always kill it.
13
Axsuul 19 hours ago 1 reply      
Would anyone care to provide some real-world examples? I'm having a hard time wrapping my head around what exactly this does.
14
sagichmal 18 hours ago 0 replies      
The underlying Raft implementation is brand new, and looks much improved on the goraft used by etcd. Very impressed.
15
allengeorge 18 hours ago 0 replies      
This is really impressive - kudos! I'm jealous - these guys are implementing extremely cool stuff in the distributed systems arena :) (serf - http://serfdom.io - comes to mind)

How much time did it take to put this together?

16
djb_hackernews 19 hours ago 1 reply      
Should something be happening with the bar data payload in the HTTP kv example? Or is the value encoded for some reason?
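
If this is about the Value field coming back from /v1/kv/<key>: Consul appears to base64-encode values in the JSON response so that arbitrary binary data survives the trip, so "bar" shows up as "YmFy" and needs a decode on the client. A quick check in Node (Buffer.from in newer versions):

    var encoded = 'YmFy';  // what the KV response carries in its Value field
    console.log(new Buffer(encoded, 'base64').toString('utf8'));  // "bar"
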
17
peterwwillis 17 hours ago 1 reply      
I'll preface these comments by saying that Consul appears to be the first distributed cluster management tool i've seen in years that gets pretty much everything right (I can't tell exactly what their consistency guarantees are; I suppose it depends on the use case?).

What I will say, in my usually derisive fashion, is I can't tell why the majority of businesses would need decentralized network services like this. If you own your network, and you own all the resources in your network, and you control how they operate, I can't think of a good reason you would need services like this, other than a generalized want for dynamic scaling of a service provider (which doesn't really work without your application being designed for it, or an intermediary/backend application designed for it).

Load balancing an increase of requests by incrementally adding resources is what most people want when they say they want to scale. You don't need decentralized services to provide this. What do decentralized services provide, then? "Resilience". In the face of a random failure of a node or service, another one can take its place. Which is also accomplished with either network or application central load balancing. What you don't get [inherently] from decentralized services is load balancing; sending new requests to some poor additional peer simply swamps it. To distribute the load amongst all the available nodes, now you need a DHT or similar, and take a slight penalty from the efficiency of the algorithm's misses/hits.

All the features that tools like this provide - a replicated key/value store, health checks, auto discovery, network event triggers, service discovery, etc - can all be found in tools that work based on centralized services, while remaining scalable. I guess my point is, before you run off to your boss waving an iPad with Consul's website on it demanding to implement this new technology, try to see if you need it, or if you just think it's really cool.

It's also kind of scary that the ability of an entire network like Consul's to function depends on minimum numbers of nodes, quorums, leaders, etc. If you believe the claims that the distributed network is inherently more robust than a centralized one, you might not build it with fault-tolerant hardware or monitor them adequately, resulting in a wild goose chase where you try to determine if your app failures are due to the app server, the network, or one piece of hardware that the network is randomly hopping between. Could a bad switch port cause a leader to provide false consensus in the network? Could the writes on one node basically never propagate to its peers due to similar issues? How could you tell where the failure was if no health checks show red flags? And is there logging of the inconsistent data/states?

18
justinfranks 18 hours ago 0 replies      
Consul really solves a large problem for most SaaS companies who run or plan to run a hybrid cloud, multi-cloud, or multi-data-center environment.
19
contingencies 11 hours ago 1 reply      
This sounds very impressive, at the risk of breaking the chorus of awesome: what problem does this actually solve?

Discovery: The Consul page alleges that it provides a DNS-compatible alternative for peer discovery, but is unclear as to what improvements it offers other than 'health checks', with the documentation leaving failure-resolution processes unspecified (as far as I can see), thus mandating a hyper-simplistic architecture strategy like running lots of redundant instances in case one fails. That's not very efficient. (It might be interesting to note that at the ethernet level, IP addresses also provide MAC address discovery. If you are serious about latency, floating IP ownership is generally far faster than other solutions.)

Configuration: We already have many configuration management systems, with many problems[1]. This is just a key/value store, and as such is not as immediately portable to arbitrary services as existing approaches such as "bunch-of-files", instead requiring overhead for each service launched in order to make it work with this configuration model.

The use of the newer raft consensus algorithm is interesting, but consensus does not a high availability cluster make. You also need elements like formal inter-service dependency definition in order to have any hope of automatically managing cluster state transitions required to recover from failures in non-trivial topologies. Corosync/Pacemaker has this, Consul doesn't. Then there's the potential split-brain issues resulting from non-redundant communications paths... raft doesn't tackle this, as it's an algorithm only. Simply put: given five nodes, one of which fails normally, if the remaining four split in equal halves who is the legitimate ruler? Game of thrones.

As peterwwillis pointed out, for web-oriented cases, the same degree of architectural flexibility and failure detection proposed under consul can be achieved with significantly reduced complexity using traditional means like a frontend proxy. For other services or people wanting serious HA clustering, I would suggest looking elsewhere for the moment.

[1] http://stani.sh/walter/pfcts

       cached 18 April 2014 13:02:01 GMT