Hacker News with inline top comments - Best - 20 Jul 2016
1
Mr Robot S02E01 easter egg 0x41.no
632 points by tilt  4 days ago   189 comments top 31
1
auganov 4 days ago 18 replies      
Is this season going to have a lot of "weird" stuff going on as well?

I watched the first one. I enjoyed the first few episodes, seemed like it was going to be a drama about people working in InfoSec. Definitely appreciated the relative plausibility of the hacks. Was very disappointed when it turned into a psycho-druggie-anonymous-whatever sort of thing. It went so quickly from a semi-plausible portrayal to a fetishization of [black hat] hackers.

I understand how that's more appealing to a wider audience. Just saying what I'd prefer personally for my own sake.

2
rwhitman 4 days ago 3 replies      
The coolest thing I discovered about this show was that every hack depicted is actually tested to see if it would work, and the screenshots cut into the show are always shown in the correct order of execution. Amazing.

Wish there was a clip on YouTube, but their tech consultant Kor Adana explains it in the "Hacking Robot" panel discussion they ran after the season premiere (around 8:50; tons of ads, sorry): http://www.usanetwork.com/mrrobot/videos/hacking-robot-101

3
Vexs 4 days ago 3 replies      
I think Mr Robot has good tech in it because it's not a show about tech, CSI: Cyber style. It's a drama that uses tech as a backbone. The hacking largely stays in quick snippets on screen, plus a sort of slow-marathon style of filming where it's clear something takes a while.

It's a show about hacking, but hacking is secondary to the other aspects, it just moves the plot forward, and that's why it works so well.

4
partisan 4 days ago 7 replies      
I am probably going to wait for the season to complete so I can binge watch. Am I the only one?

My big concern is all of the media and spoilers that surround these popular shows. Why do I know who died in Orange is the New Black? Why does Google think I need this in my news feed, which I check only once or twice a day because I just want the most important news items?

6
breatheoften 4 days ago 2 replies      
On top of what I consider to be utterly extraordinary film making craft, this show really gives me a new appreciation of the nature of anarchy. There is a very real sense in which "anarchy" -- a kind of lack of predictability -- is always the resultant state after a large enough change to a power structure -- if it were more predictable, then the rules of the incumbent power structure would be likely to predict and prevent such a large change.

I feel like this idea is present at many levels in the story -- including the dynamics of Elliot's personality disorder and his attempts to construct trust in his perception of reality and his own motivations.

Intermingling these themes with the idea of hacking -- the art of finding/manipulating unexpected behavior in systems -- is incredibly rich -- and mesmerizing.

7
stuxnet79 4 days ago 2 replies      
Mr Robot propelled Kali Linux to the top of the Linux OS rankings. After watching the Season 2 premiere yesterday, even I decided I'm going to check Kali Linux out - either in a virtual machine or on my Raspberry Pi 3.

Also I was pleasantly surprised to see Joey Badass in the show (his first acting gig I think). Hopefully they will allow the character he plays to shine. Great show!

8
jfaucett 4 days ago 0 replies      
That's really good detective work. I was not expecting this much attention to detail at all - even after season one seemingly surpassed every film/TV show in history on that account. I will definitely be screen-pausing for the rest of this season myself now.
9
jaegerpicker 4 days ago 0 replies      
This show is amazing IMO. It's like a love letter to all those late nights when I was 15 and glued to my FreeBSD box until unhealthily late hours!
10
outofstep 4 days ago 1 reply      
Love this show bc it's like Fight Club meets Hackers. Can't wait to watch the full season
11
character 4 days ago 1 reply      
There is a QR code drawn in Elliot's journal that leads to www.conficturaindustries.com (one of the URLs linked to the same SSL cert) - the same company name that is on the front of the journal. And the meticulous zoom-in on the apple skin left on the floor looks very similar to the logo of the company. Looks like they had fun making this :)
12
dclowd9901 4 days ago 0 replies      
"I sincerely believe that banking establishments are more dangerous than standing armies, and that the principle of spending money to be paid by posterity, under the name of funding, is but swindling futurity on a large scale. Thomas Jefferson"

Apparently I have more in common with Jefferson than I thought (Re my numerous comments in the past regarding the danger of consumer credit).

13
bduerst 4 days ago 2 replies      
Can anyone get a read on the QR code from his notebook?

http://i.imgur.com/4H0M2hj.png

I tried to isolate the image but it's not reading:

http://i.imgur.com/fQV1EIn.jpg
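For anyone else taking a crack at it, here is a minimal sketch of the usual decode workflow, assuming the Pillow and pyzbar packages are installed; the filename is just a placeholder for the saved imgur image. Hand-drawn codes usually need cropping, grayscale conversion and hard thresholding before a reader will accept them.

```python
# Hedged sketch: attempt to read a (possibly hand-drawn) QR code from an image.
from PIL import Image
from pyzbar.pyzbar import decode

img = Image.open("4H0M2hj.png").convert("L")         # grayscale
img = img.point(lambda px: 255 if px > 128 else 0)   # crude binarization
results = decode(img)
for r in results:
    print(r.data.decode("utf-8", errors="replace"))
if not results:
    print("no QR code detected - try cropping tighter or straightening the drawing")
```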

14
hmate9 4 days ago 3 replies      
Really nice! But Mr Robot is something I won't watch anymore. It is just too weird and slow at this point.
15
pfooti 4 days ago 1 reply      
Does anyone recall a Mr. Robot giveaway with American Giant hoodies? I went to the AG retail space two days ago to figure out what hoodie size I needed, and made a Mr. Robot joke. The salesperson said, "hey, we did a promo with Mr. Robot, actually, where there was a website in the show and if you followed a bunch of clues, there was some morse code and eventually you got a free Mr. Robot hoodie from us."

At the time, I marked it up as "awww, too bad I missed it", especially since I tend to binge shows after they're all available at once. But I wasn't able to find any evidence of this promo existing on the web, and now there's this thing with morse code and so on. Soooo maybe there's a nice hoodie at the end of this road.

16
jmkni 3 days ago 0 replies      
I feel like I'm the only person on the planet who doesn't drink the Mr Robot kool-aid.

It's cool that they get the tech right, but I just feel as though the protagonist has the mentality of an angsty teenager trying their best to be 'edgy' by 'fighting the man'.

Maybe if the show took itself a little less seriously I might enjoy it more.

17
nickysielicki 4 days ago 0 replies      
There is also an archive quine embedded within the image at http://www.conficturaindustries.com/images/linkexchange_bann... , which is the website corresponding to the QR code that was shown briefly in his notebook.

I didn't look at it more closely; there might be more in there.
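For readers wondering how an archive ends up "inside" an image: one common trick (not necessarily what this particular easter egg uses) is to append the archive after the image data, since image decoders ignore trailing bytes. A rough Python sketch that scans a downloaded file for a ZIP signature and pulls out anything it finds:

```python
# Hedged sketch: look for a ZIP archive appended to an image file.
import io
import zipfile

def extract_trailing_zip(path, out_path="hidden.zip"):
    data = open(path, "rb").read()
    offset = data.find(b"PK\x03\x04")   # ZIP local-file-header magic
    if offset == -1:
        print("no embedded ZIP signature found")
        return None
    trailing = data[offset:]
    with open(out_path, "wb") as f:
        f.write(trailing)
    # Sanity check: does the trailing data actually parse as a ZIP?
    with zipfile.ZipFile(io.BytesIO(trailing)) as z:
        print(z.namelist())
    return out_path

# extract_trailing_zip("linkexchange_banner.gif")  # placeholder filename
```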

18
avs733 4 days ago 1 reply      
>AdobeTracking.showSiteFeature = 'Mr. Robot : S2 Easter Egg Sites';

[view-source:http://www.evil-corp-usa.com/]

I get the feeling this will be ongoing...makes the show more fun and a little interactive

19
pizza 2 days ago 0 replies      
Thought it was intriguing that a schizophrenic uber-hacker's poison of choice was morphine; it seems more in line with methamphetamine use, imo! Hacking on morphine... would.. be... something..... like...... this........
20
SG- 4 days ago 0 replies      
I'm pretty happy they used BitchX for the IRC client.
21
labmixz 4 days ago 0 replies      
Neat easter egg. Though you should get some better servers for your site. That load time is crazy. Especially on my 1GB connection. Granted, I'm going through proxies, but wow, still horrible.

Anyhow, can't wait to see what this season has in store. I find myself relating to a lot of the personal struggles in the last season. Maybe that makes me crazy. ha.

22
yeukhon 4 days ago 0 replies      
Fun fact: Mr Robot S02E01 was shooting a scene right around the corner from the AWS NY Loft. Not sure if it was accidental or intentional. I only realized after this HN post, when I looked up what the main character looks like. Yeah, the kid with the hoodie. He was there.
23
kogir 4 days ago 2 replies      
Stuff like this is a nice touch. I wonder how long it will stay up?

It would be controversial but really interesting if they dropped an actual 0-day during the show as well.

24
blu3gl0w13 4 days ago 0 replies      
Apparently, I'm way behind on this series if we're already in season 2. Yikes. I guess I better catch up!!! no more spoilers please!!!
26
s_q_b 3 days ago 0 replies      
Scan the QR code hand-drawn in Elliot's notebook.
27
rachelbythebay 4 days ago 0 replies      
And the eyeball seems to be delivering ASCII in hex to you, The Martian-style.
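For anyone following along at home, that kind of clue is a one-liner to decode; the hex string below is made up for illustration, not the actual payload from the show:

```python
# Hedged sketch: turn a captured string of hex bytes back into ASCII text.
hex_blob = "48656c6c6f2c20667269656e64"         # hypothetical captured bytes
print(bytes.fromhex(hex_blob).decode("ascii"))  # -> "Hello, friend"
```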
28
r4ltman 3 days ago 0 replies      
Clearly no one here has seen blue 'heet me' velvet
29
Cengkaruk 4 days ago 1 reply      
Tell me where Tyrell is...
30
jboogy 4 days ago 0 replies      
This is great. Nice job!
31
GonzoBytes 4 days ago 2 replies      
>btw, having the malware deliverable via that auto-exec flash exe is really old school, no? I keep seeing this in plots of procedural crime shows where a digitally savvy techie or detective shoves a newly discovered victim's USB into their working computer. Who does that?

This is still a very real exposure and is used by penetration testers all the time. There are several ways to disable USB autorun across a domain, but there are definitely still a lot of big companies that don't prevent it.
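As a concrete illustration of the mitigation mentioned above, here is a minimal Windows-only sketch of the well-known NoDriveTypeAutoRun policy value, where 0xFF disables AutoRun for all drive types. Assumptions: it needs local admin rights, and in a domain the same policy would normally be pushed via Group Policy rather than set per machine.

```python
# Hedged sketch: disable AutoRun for all drive types on a single Windows machine.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
```

Note this only covers the classic autorun.inf vector; USB devices that pretend to be keyboards are a different problem entirely.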

2
HyperTerm JS/HTML/CSS Terminal hyperterm.org
620 points by pandemicsyn  4 days ago   255 comments top 49
1
dsl 4 days ago 7 replies      
The folks who make HyperTerm might have something to say about your product naming...

http://www.hilgraeve.com/hyperterminal/

Edit: I'm surprised more people in the HN crowd aren't familiar with HyperTerm (yes, that is the name, and what everyone called it for many, many years). It was bundled with Windows up to 7, and is still used heavily in industrial control areas.

2
benbenolson 4 days ago 4 replies      
There was another project that's extremely similar to this posted a week or so ago, and it got a lot of flak for existing; I don't think very many commenters agreed with its existence (except, perhaps, as an educational exercise for the programmer).

That project was posted here: http://rungoterminal.com/

This, however, is executed and presented much better than that other project, and catered to the right crowd (web developers using Mac OS X). Although I would never use either, I think it just goes to show that presentation is extremely important: these two terminal emulators could very well be the exact same implementation (or at least very similar), yet far fewer people would use one of them simply because it targeted the wrong demographic.

Also, because of the clean presentation, it creates a perception that HyperTerm is executed more cleanly than Rungo Terminal, even if there are no facts to back that belief.

3
dejawu 4 days ago 4 replies      
Anybody remember TermKit[0]? This was my first thought when I saw "JS/HTML/CSS Terminal". It was built on WebKit (five years ago, before the everything-in-JS craze really began) and had a lot of really promising features like smart MIME-type support... and then development sort of stopped.

I'd love to see the concept revisited with present-day technologies and platforms.

[0]: https://acko.net/blog/on-termkit/

4
nathancahill 4 days ago 2 replies      
You guys just can't stop making cool stuff can you? now.sh, now-serve, micro.. I'm not the biggest Node fan, but these are really well done. Execution on these ideas has always been 10/10.
5
vortico 4 days ago 5 replies      
Just an idea: The only windowed applications I use are a terminal (st), my text editor (Atom), and a browser (Firefox). Suppose I switched to HyperTerm, Atom, and Chromium. I would have three copies of the Chromium renderer and V8 interpreter loaded in memory. Is there a project that merges these things into a single daemon, to improve individual app startup time, save memory, and reduce the binary size from 42 MB (for HyperTerm) to the size of only its own components?
6
rosalinekarr 4 days ago 3 replies      
This looks very well executed and really cool, but I'm honestly really tired of all the node/electron terminals. In its favor, this project looks a lot cleaner and less opinionated than that 'Black Screen' monstrosity.
7
masukomi 4 days ago 1 reply      
Can anyone comment on how this compares to Black Screen? https://github.com/shockone/black-screen
8
Tloewald 4 days ago 2 replies      
This makes me want to re-implement "commando" from AUX/MPW (I've never understood why this idea didn't take off).

If you don't know, when you typed an ellipsis after a command in AUX or MPW a dialog box would be displayed showing the most popular toggles and options for the command (and allowing graphical file-picking). When you clicked "ok" it would type the command for you, so it made the command line both easier to use and easier to learn.

(I just downloaded and tried it out and lost my enthusiasm. There's no attempt to provide a proper GUI.)

9
pyre 4 days ago 1 reply      
How well does it deal with copious amounts of text scrolling across the screen? It's been a while since I've used it, but IIRC Gnome Terminal had big issues with this for a long time (due to the underlying library that they were using).
10
suyash 4 days ago 0 replies      
This is a badass job, congratulations. Like someone else mentioned in the comments, for execution and idea you get 100/100 points.
11
parent5446 4 days ago 1 reply      
A while ago I wrote something similar to this as a school project, but instead of being a local terminal, it was for a remote server. There was a sort of virtual file system the user interacted with, with metadata stored in MySQL, as one does for school projects.

The moral of the story is that I hated having a terminal in JavaScript for a remote server, so I have no idea why I would want one for my local computer.

12
ktamura 4 days ago 1 reply      
As an Acme user, I find the extensibility feature very interesting: Once you are used to Acme's contextual (and programmable) handling of text, it's hard to go back to a normal terminal.
13
cocktailpeanuts 4 days ago 0 replies      
I tried to use this but got stuck when i opened up vim with NERDTree, it wouldn't expand folders...
14
GreaterFool 3 days ago 1 reply      
I think it would be great if we could go beyond text in terminal applications. Think Jupyter notebooks but inside terminal. It would be great if terminal window was just an OpenGL surface that one could draw on, with good API for doing just text. Something as simple as "put this interactive plot in this 20x20 area there" would make a big difference for a lot of programs and scripts I'm writing. At the moment if I want something like that I pipe the data through websocket to the browser and that's OK but I'd rather stay in the terminal.
15
webXL 4 days ago 2 replies      
imgcat[1] is one of my favorite terminal innovations as of late. I thought I'd take a long shot and see if it worked since that functionality seems like it would be simple for a browser-engine-based terminal to handle, but sadly no-go. I smell pull request.

[1] https://www.iterm2.com/documentation-images.html
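For context on what imgcat does under the hood: it wraps base64-encoded image bytes in iTerm2's OSC 1337 "File" escape sequence, which the terminal then renders inline. A rough sketch follows; a terminal that doesn't implement the sequence will just ignore or garble it, which is presumably why the experiment above was a no-go.

```python
# Hedged sketch of an imgcat-style helper using iTerm2's inline-image escape code.
import base64
import sys

def imgcat(path):
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    sys.stdout.write("\033]1337;File=inline=1:%s\a\n" % payload)

# imgcat("screenshot.png")  # placeholder filename
```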

16
juandazapata 4 days ago 2 replies      
It definitely looks cool, but may I ask what's the problem that is being solved here? Does it support tmux or at least window splits?

Which problems does this solve that the regular Terminal didn't solve? I'm kind of lost here, "I need some fireworks while I type in my terminal" is not a problem that I have often to be honest.

17
Neetpeople 4 days ago 2 replies      
Would you recommend this to a beginner? Hilariously I'm actually asking for a friend, I don't have a Mac.
18
627467 4 days ago 1 reply      
"Builds for Windows and Linux are coming very soon"

I can't wait. Does anyone know how I could build this on Windows?

19
webXL 4 days ago 0 replies      
Initial commit 15 days ago?? I think I'll hold off on any sudo commands or special permissions requested by the OS.

Looks super cool though. Seems like a no-brainer with Electron.

Edit: I take that back, since it largely leverages hterm by the Chromium team, looks like the real first commit was in 2011 or so.

20
disease 4 days ago 1 reply      
My first reaction is: "This could be the VS Code of terminals". For those stuck on Windows, where the choice in terminals is pretty bad AND more and more Windows dev is done from the command line (hello 'dotnet'), this could be very welcome.
21
guard-of-terra 4 days ago 0 replies      
As a longtime Konsole user, I wonder if it supports unlimited logging, how it will react to 500M of console backlog, and what's up with scrolling/overflow in general?

In my experience, the terminal is one of the trickiest things to get right under extreme conditions.

What's with tabs also? :)

22
overcast 4 days ago 0 replies      
Awesome. However there is just no way someone developing a terminal hasn't heard of HyperTerminal. It was bundled with every Windows OS. Good luck with that naming cease and desist :/
23
marknadal 4 days ago 0 replies      
Bored at first, then was forced to smile with glee. Now I must download it, even though I was originally poo-pooing "another terminal" in my head. Well done, well done.
24
S3curityPlu5 3 days ago 0 replies      
http://www.trademarkia.com/trademarks-search.aspx?tn=hyperte...

hyperterm - "This name is not found in our database of U.S. trademarks, so apply for it now. Check off countries you wish to trademark your name/slogan in."

25
pikachu_is_cool 4 days ago 1 reply      
Hopefully this inspires someone to make a Lua/Python scriptable terminal. I'm not a big fan of the web stack. This is a pretty cool concept though.
26
dgquintas 4 days ago 1 reply      
27
tedmiston 4 days ago 0 replies      
Rendering a webpage in the shell is neat. The concept of a terminal wrapped in a browser has a lot of potential.

It also seems like iTerm is moving in the direction of richer media integration with recent features like inline image rendering.

28
jetblackio 4 days ago 0 replies      
Looks really cool, but honestly the input lag is noticeable when compared to something like iTerm. Not sure that's something I could get used to. Is that just a limitation of Electron?
29
hugozap 4 days ago 0 replies      
I'm trying it and it feels great. The ability to easily extend it is huge!
31
magerleagues 4 days ago 0 replies      
I'm excited to see if a community arises to build and share themes for HyperTerm. I switched from iTerm, and will reply back to this comment if I switch back.
32
ixtli 4 days ago 0 replies      
tiny nitpick for a really cool thing: it's not based on JS/HTML/CSS, it's written in or built on JS/HTML/CSS.
33
67079F105EC467B 3 days ago 0 replies      
Ctrl-a seems to be selecting all; how am I supposed to get out of a screen session if I can't Ctrl-a d?

This seems like an oversight.

34
drinchev 4 days ago 0 replies      
If that has the iTerm speed it will be my new terminal app. It looks awesome! Can't wait to see the ecosystem around it.
35
jp_sc 4 days ago 1 reply      
36
laktak 4 days ago 0 replies      
For anybody with Firefox/Autoplay disabled, you need to right click and select play to see the demo...
37
damptowel 3 days ago 0 replies      
Finally, I've always wanted a terminal with particle effects!
38
winter_blue 4 days ago 1 reply      
Is there any way to use HyperTerm as an embeddable terminal emulator in your own web apps?
39
iLemming 4 days ago 0 replies      
oh wow. This is awesome! Now I desperately need a Clojurescript plugin for this.
40
sdegutis 4 days ago 0 replies      
It assumes I have npm installed. Probably shouldn't:

https://github.com/zeit/hyperterm/issues/134

41
cdnsteve 4 days ago 0 replies      
Would be interesting to see a Hypermedia API explorer.
42
rzhikharevich 3 days ago 0 replies      
Personally, I dislike the march of web technologies towards the desktop. I think we forgot what they're intended to be used for.
43
daveheq 3 days ago 0 replies      
Why not just call it JSTerminal?
44
decayy 4 days ago 0 replies      
This would work so well with Atom
45
kzahel 4 days ago 0 replies      
How is this different from crosh / hterm?
46
nikolay 4 days ago 0 replies      
Yet another "thin" client...
47
dema_guru 4 days ago 0 replies      
Cool
48
knodi 4 days ago 1 reply      
-_- another node.js term
49
posterboy 4 days ago 1 reply      
the video is a bit silly in the end
3
Coup attempt underway in Turkey theatlantic.com
450 points by Animats  4 days ago   361 comments top 45
1
CSDude 4 days ago 3 replies      
Reporting from Ankara, the capital. Ongoing sonic booms followed by bombings. Scary as hell. My niece is 8 months old; with each jet and helicopter pass we take cover over her, and we barricaded our home with tables. No one knows what is going on, and the media claims control is in place.
2
Animats 3 days ago 3 replies      
Current status as reported by government of Turkey:

- Coup attempt claimed to be unsuccessful.

- About 1500 members of military arrested so far. Two senior generals will be tried for treason.

- About 90 dead reported so far.

- Heavy damage at parliament building and presidential palace.

- Airports supposedly reopening soon. (Airport departure list shows "Delayed" for everything except one flight to Odessa.) [1]

- If Erdoğan hadn't been able to make a speech via FaceTime, which was relayed by a TV station, the coup might have succeeded. He also sent a text to his entire contacts list. The amusing thing is that Erdoğan is against social media.

[1] http://www.ataturkairport.com/en-EN/flightinfo/Pages/Departu...

3
fatihdonmez 3 days ago 3 replies      
It's just like the Reichstag fire. Now Erdogan is more powerful and can change the constitution to a presidential system.

ps: I'm Turkish

4
elcapitan 3 days ago 2 replies      
Weird. This coup of July 16 failed in exactly the same way as the coup against Hitler of July 20 failed - a small group of military sub-leadership, a failed bomb attack against the leader, a half-hearted attempt to secure the capital, all breaking down with the leader re-connecting with the non-cooperating rest of government and military. And now most likely horrible revenge against all involved.

I always thought people from the military would at least consider the historic scenarios.

5
scarmig 4 days ago 7 replies      
Anyone have a take on how successful this is likely to be?

Both sides are claiming victory and that they've beaten the other side, naturally. Propaganda is everywhere.

The only thing I figure is that since Erdogan is calling for a mass uprising in the streets, the status quo has him losing. He needs to radically change the dynamics of the situation to stay in power, and mass protests and displays of martyrdom in the streets of Istanbul is the way to do it. Otherwise, if everything were under his control, he'd be encouraging people to stay inside while his forces stabilized the situation.

Is that a fair read of the situation?

6
smarinov 3 days ago 0 replies      
Dear Turkish friends,

This is arguably not the best place and time for what I am about to say, but I am going to do it anyway.

Please know that if things continue the way they are, you are all welcome to Bulgaria. There is some scare mongering by local right-wing parties on the verge of the political spectrum, but we have about half a million ethnic Turks and over 98% of the population are positive towards them since they have been the most unproblematic and hard-working minority that is loyal to the Republic for our last century of independence. We are not paradise on Earth either, but at the very least you should be able to integrate well into our society and have a relatively hassle-free life in no time.

Of course, I don't wish you to be forced to leave your homeland and no normal person is interested in having an unstable and/or unreliable Turkey, but unfortunately things haven't been exactly improving with your current leadership :(

Be safe!

7
einrealist 4 days ago 3 replies      
It is bad that a (mostly) democratically elected government is being removed (or nearly was) this way. This will make it really difficult to work with whoever is in power, internationally.

But Erdogan forced this on the military. They were told to stand down against ISIS and to fight a new impossible-to-win fight against the Kurdish minority.

I really hope the situation will somehow calm down. Otherwise we could have a new civil war on our hands, forcing more people to cross (those stupid) national borders to find safety.

8
patrickmn 4 days ago 0 replies      
9
runesoerensen 4 days ago 2 replies      
Check out the Facebook Live Map, video streams from all over Turkey: https://www.facebook.com/live
10
insulanian 4 days ago 0 replies      
Two days ago: France closes missions in Turkey over security fears [1]

[1] http://www.aljazeera.com/news/2016/07/france-closes-missions...

11
Animats 4 days ago 1 reply      
The crowds appear to have ejected the military from the TRT TV station, which is now back on the air. The studio is full of people.[1] So full that they can't get a clear shot of the presenter.

This is history happening. Some general is being interviewed live in the middle of the crowd that's invaded the TV studio. It's all in Turkish, but a translation should be available later.

[1] http://www.trt.net.tr/Anasayfa/canli.aspx?y=tv&k=trt1

12
Animats 4 days ago 0 replies      
Update:

- The Turkish military said in a statement that it had "taken control"

- Gunshots were heard in the capital Ankara as military jets and helicopters were seen flying overhead

- Footage from Turkey shows tanks and soldiers in the streets

- Hostages have reportedly been taken at the country's military HQ

- Istanbul's Bosphorus Bridge and Fatih Sultan Mehmet Bridge were both closed

[1] http://www.itv.com/news/story/2016-07-15/live-updates-milita...

13
mustaflex 4 days ago 1 reply      
Let me give you guys some more inside details:

From the start it was clear that something was weird: there weren't enough soldiers to make a real coup, only a few hundred soldiers, a few helicopters, tanks, etc... And it seems that they knew it and did it despite that. By the way, they even bombed the house where the President was supposed to be...

The coup attempt was made by members of a religious group named "Nurcu", led by Fethullah Gülen, who is in exile in the U.S.A. ...

It was known that members of this group had infiltrated every possible area of the government (police, legal branch, army, etc...)

Yes, this group was Erdogan's biggest ally for more than 10 years, and it is mainly because of Erdogan that these people infiltrated every aspect of the government so well. Together they falsely imprisoned reporters, policemen and soldiers, claiming they were attempting to make a coup. They even imprisoned the former Chief of Staff. And those idiots in the AKP government, and the people who supported Erdogan, were cheering them on because they were getting rid of "secular" forces in the army. The irony is that these same people are now doing exactly what they were supposedly trying to prevent: making a coup.

The Turkish army was always saying that the biggest danger for the country was religious backwardness (irtica), and unfortunately it has been proven right again.

Although shots were fired at civilians and at cops, the civilian crowds didn't hurt the few soldiers on the streets; they grabbed them and handed them to the police. The situation is getting better: the Chief of Staff was rescued (he was held hostage by these people) and the commanding officers are calling these soldiers back to their bases to be arrested. Most of them are coming back. They even shot down a "rebel" helicopter with an F-16.

I think the situation will settle within 24 hours at most.

Unfortunately I think this will give Erdogan more power and leverage in the long run and drive the country deeper into insanity. Turkey is on its way to becoming the "Pakistan" to Syria/Iraq, the Afghanistan of the Middle East.

For better or worse, the only thing keeping Turkey from being just another Middle Eastern country was (unfortunately) the army.

14
calibraxis 4 days ago 1 reply      
For about a year, some openly suspected Erdogan frustrated the US enough for them to foment a coup. The US gov't loudly signaled its displeasure: "Early on, Obama saw Recep Tayyip Erdoğan, the president of Turkey, as the sort of moderate Muslim leader who would bridge the divide between East and West -- but Obama now considers him a failure and an authoritarian, one who refuses to use his enormous army to bring stability to Syria." (http://www.theatlantic.com/magazine/archive/2016/04/the-obam...)

(I haven't yet seen any evidence regarding the US's role is in this coup, if any. I just mention this as a prominent example of how much he frustrates even his allies, so it's natural for many to go to the trouble of replacing him, despite all the risks.)

15
seanccox 3 days ago 0 replies      
Current coup summary:

250 people dead (according to The Guardian), 1440+ injured.

According to Turkey's Chief of Staff, the coup was against President Erdogan's government, launched by troops from the Turkish Air Force and Gendarmerie.

Erdogan asserts the coup is over, but some reports suggest the calm in Ankara indicates the coup succeeded there, and that there is presently a stalemate between government and anti-government forces.

The Turkish government just ordered the dismissal of 2745 judges from their seats. According to Reuters, there were 7604 judges in Turkey total, so ~36% were removed from duty.

16
anon1234321 3 days ago 0 replies      
Amateur hour. They didn't capture or eliminate Erdogan in the first place. Then they failed to cut off internet and television within the country, so Erdogan was able to rally his supporters to the streets, and the soldiers were obviously (and rightfully) not willing to mow down mobs of civilians.

If you ever hoped to see Erdogan removed from power, this is the worst possible outcome. He will now use this as a pretext to solidify his stranglehold on civil society, carry out a purge of disloyal elements of the army, and do whatever else he must to solidify his place as dictator for life.

17
countrybama24 4 days ago 3 replies      
https://fr24.com/THY8456/a5a3952

That looks like Erdogan's jet. Was in a holding pattern for the last hour. Now heading to Istanbul. Seems like the coup has failed?

18
Animats 4 days ago 0 replies      
Live TV coverage on Sky seems to be the most current.[1]

[1] https://www.youtube.com/watch?v=y60wDzZt8yg

19
bArray 4 days ago 2 replies      
News flash: Those in power and relying on laws, rules and democracy think it's a good idea it remains.

Well it's completely obvious they would have this stance. I don't think they've really anything to add. Hopefully things stabilise one way or another over there, innocent people getting shot helps nobody.

What's interesting though is whether this instability will affect Turkey's chances for a seat in the EU. I would really like to see a Muslim Country in the EU but at this rate their chances seem to be lessening.

20
tuna-piano 4 days ago 5 replies      
Does anyone have an explanation, or a link to an explanation for all the happenings in Turkey? I'm an occasional follower of this kind of stuff, but it would be great to understand the full situation in Turkey in an easily digestible way.

1. The current coup

2. The president's history

3. The different terrorist groups operating within the country and their beliefs and goals

4. Turkey's legal system and freedoms allowed

5. Information on past coups

6. The general beliefs and sentiments among Turkish people

Edit: And a simple question that I'm sure is a complex answer: is this attempted coup bad or good for a peaceful free turkish people?

22
Animats 4 days ago 0 replies      
"Unclear to me what's happening in Turkey, but Facebook, Twitter & Youtube have all just become inaccessible there."

pic.twitter.com/LIzGAi2HuH

Julia Carmel (@JuliaCarmel__) July 15, 2016

23
Animats 4 days ago 0 replies      
Prime minister makes threats against coup plotters. Coup plotters claim they've taken over.[1]

[1] http://www.itv.com/news/story/2016-07-15/section-of-military...

25
chadcmulligan 4 days ago 0 replies      
live coverage on al jazeera http://www.aljazeera.com/watch_now/
26
Animats 4 days ago 0 replies      
27
steve19 4 days ago 1 reply      
The coup is over.

https://twitter.com/AJENews/status/754108743412047872?ref_sr...

The US announced support for the government where before they were hedging their bets.

28
xordon 4 days ago 0 replies      
29
vmorgulis 4 days ago 0 replies      
> EU source says Turkey coup bid looks substantial, 'not just a few colonels'

http://www.reuters.com/article/us-turkey-security-eu-idUSKCN...

30
huffmsa 4 days ago 1 reply      
Article about CIA clairvoyance about coups a day before a coup.

http://theweek.com/articles/635515/cia-team-clairvoyants

31
faebi 4 days ago 0 replies      
I have been following this live for 3 hours now, from local people on YouTube, Twitter and Periscope. Amazing what technology can enable. I respect those platforms a lot now.
32
Animats 4 days ago 0 replies      
Crowds in central square, bridges, and airport waving Turkish flags. News services unclear what this means. Some shooting, but military not acting to crush crowds.
33
woodpanel 4 days ago 0 replies      
Maybe this is the moment in history Turkey is becoming coup-proof (a.k.a. next-level maturity).

(if even opposition groups speak out against the coup and the people are taking over the streets peacefully - in favor of the government / against the coup, according to the news I get)

34
mathgenius 3 days ago 0 replies      
I see flights into and out of Istanbul, but nothing near Ankara:

https://www.flightradar24.com/40.33,31.52/7

35
sergiotapia 4 days ago 0 replies      
Just saw a guy get pancaked by a tank trying to pull a Tiananmen. You don't stand in the way of the military mid-coup.
36
chmike 3 days ago 0 replies      
How easily could communication systems be neutralized today, with internet and GSM phone networks?
37
nikolay 3 days ago 0 replies      
Neither a coup, nor Erdogan resemble democracy! Until Erdogan is in power, Turkey will suffer.
38
andrewclunn 4 days ago 0 replies      
Looks like the Olympic committee made the right call regarding 2020.
39
TruthAndDare 3 days ago 0 replies      
"Hacker news"
40
kyriakos 4 days ago 0 replies      
From the latest news it seems that the coup failed.
41
whack 4 days ago 10 replies      
"Modern Turkey was founded by Mustafa Kemal, a general in the Turkish Army who was later formally granted the surname Ataturk, or father of the Turks. Ataturk set about an aggressive program of modernizing and Westernizing the country, pushing religion to the margins, banning certain apparel like headscarves and fezes, and converting Turkish from Arabic to Latin script. But that secularism has always remained tenuous. Many Turks, especially rural ones, are religious, and not all of the reforms have remained popular.

The military has long seen its role as safeguarding Ataturk's secularist agenda, and when it worries the government is shifting too far away, it has tended to take action.

Turkey has thus occupied a strange position in world politics: Although it is prone to coups d'état, Western governments have often cheered the coups on, with varying degrees of enthusiasm, because they are in the service of a secular agenda. Periodic deposition of democratically elected leaders has, somewhat paradoxically, been treated as a small price to pay for ensuring liberalism."

That's actually a very interesting system of government. A democratic arm that rules the country day-to-day, and a "benevolent dictatorship" that will overthrow the democratic arm if it violates the country's "fundamental principles". Such a model would arguably have served Iraq better than the pure democracy that was attempted and failed.

42
cperciva 4 days ago 8 replies      
Why did people support Hitler? Why do people support Trump? Why do people support Putin?

Some people aren't very concerned with democratic principles; some people like the policy agenda and are willing to sacrifice democratic principles in order to get the policies they want; some people are gullible and believe scapegoat arguments; some people, particularly in times of insecurity, seek out a "strong man".

43
aerovistae 4 days ago 3 replies      
Mods: Why do you delete news posts like this one? I understand you're trying to keep Hacker News tech/business/science focused, maybe, but don't you think major world events that are getting tons of interest should be allowed to float up to the front page on their own merit? If the users are interested in it, why is it being deleted?

I feel like major events like yesterday's massacre in Nice or today's coup d'état merit the attention.

44
ZainRiz 4 days ago 3 replies      
How do you define liberal? France's idea of "liberalism" is "follow our culture exactly or face heavy fines/punishment".

They've basically turned their culture into a new religion

45
blahi 3 days ago 2 replies      
Turkey had its chance. Erdogan proves that Islam has a much longer road to go still.
4
Sandstorm An open source operating system for personal and private clouds sandstorm.io
430 points by dack  4 days ago   162 comments top 33
1
anilgulecha 3 days ago 3 replies      
For massive take-off, Sandstorm is almost there with making everyone's online world decentralized.

It's solved half of the problem beautifully -- it's very easy, once set up, to launch any new app on your server.

The other half of the problem is moving the server/data. Right now, either you use their self-hosted instance of Sandstorm, or you set up your own server. There's the hassle of keeping track of data/server. There's pseudo-centralization happening here, given how difficult it is for a layperson to change compute or storage.

However, if Sandstorm had built-in federated storage (either based on a BitTorrent filesystem or any other p2p storage), then I would no longer worry about losing my data. I simply move my domain to a new "server provider" or use the main project's own service, and then at a later date I can spin up my own compute, or use another provider, and point it at my data on the p2p/federated storage.

2
StevePerkins 4 days ago 2 replies      
This is an "operating system" only in a marketing hype sense.

In technical reality, it seems to be a Google Apps alternative that you can self-host.

Still very cool (frankly even MORE cool than yet another hobby OS!)... but "platform" would be a less groan-worthy phrase than "operating system".

3
goodplay 3 days ago 1 reply      
I'm excited. Today was the first time I tried Sandstorm via their demo service, and I must say that I am genuinely surprised and impressed with the level of polish. One click gave me a private GitLab, a collaborative document editor, and even a running instance of BrowserQuest!

This level of polish might just be what it takes to get the masses to host their own digital services. I'm strongly considering setting up a Sandstorm server for my family.

One question promptly and prominently pops to mind: given the focus on non-technical users, how does Sandstorm mitigate issues caused by NAT and NAT traversal?

4
jegoodwin3 3 days ago 2 replies      
I saw this last night and signed up for oasis on my chromebook. I created a Jupyter notebook and entered an expression in it. Nice. Then I closed the lid to my chromebook for the evening.

Next morning I opened up the lid, and it helpfully told me it wasn't connected. I waited a bit longer and updated the expression. The cell text updated but after pressing 'play' the result value did not update, leaving my screen in an inconsistent, incorrect state (wrong value for displayed input to function). There was a little sign telling me I wasn't connected.

I had signed in originally with gmail. People are going to expect a similar interaction model to google docs. There was no obvious way to get back online, short of killing the tab... and getting back how? Remember a link? A bookmark?

This doesn't work yet -- not for most people.

Beautiful, beautiful job. Get the rest of the way there please...

5
studentrob 3 days ago 2 replies      
Love seeing this on top of HN, in light of recent discussion about securing your own email. I've tried Sandstorm a couple of times and loved it. They're always improving.

Kenton himself seems ready to respond in depth both whenever Sandstorm is discussed on HN, and at times when I email Sandstorm for support.

I'm sure he's got his limits. He is setting a high bar for customer support within the company. I think that's smart.

Innovation does not happen in a vacuum. Talking to customers with an open mind is a great way to learn what they want and spur some creative process within yourself.

6
IanCal 3 days ago 1 reply      
So I've been thinking a bit more about sandstorm recently as I'm planning to build something and need to work out how I want to host/distribute it.

It'll process data for people and I'd absolutely love to be able to write the app "ignoring" multiple uploads, user management, team-like sharing, etc.

I'd be expecting this to potentially be a paid app though. In app billing is mentioned as a "coming soon", is there a roadmap or expected timescale for this? Is it a "we kinda plan this sometime" or "currently in development, releasing soon" kind of phase?

If not, I don't know how well it'd fit on their shared hosting, as the processing might be fairly intensive. Is it reasonable for me to install sandstorm on a server with my app and let people use that? Sort of like the shared hosting but with only my app available? Does that make sense or am I completely misusing the intended overall structuring of sandstorm?

7
dack 4 days ago 1 reply      
I just thought this was relevant in light of the recent discussion about how to protect yourself in the event your google account is shut down. Sandstorm has an email client as well - and you can 100% control your data.
8
jalami 3 days ago 0 replies      
Sandstorm is simple and intuitive, which is what a lot of our decentralized projects really need a good helping of. I watched the recent Decentralized Web Summit, and there was so much talk about decentralization, yet so little talk about self-hosting, federation or the projects that are out there doing the dirty work today. It made me sad. People are looking for alternatives to silos - not everyone, of course, but a lot of people just get overwhelmed by the thought of self-hosting. Sandstorm is helping with that, it seems. I wrote more about this here[0] if you're curious.

Keep up the good work!

[0] https://www.alami.io/post/decentralized-web-summit-2016/

9
amelius 4 days ago 2 replies      
Does this OS provide building blocks for collaborative applications? Or does every app have to reinvent collaborative editing?

Update: Okay, so I created a flowchart in one instance, then shared the link, and opened it in another tab. Edited the flowchart in one tab, but it didn't change in the other tab. This is imho not how a cloud OS should work.

But the container technology seems quite cool.

10
educar 4 days ago 2 replies      
All the random SHA1-like domain names are confusing. I also dislike that many of the apps have missing features compared to the actual apps and that they get embedded in iframes. I understand all these are done for security.

Related: https://cloudron.io. Unlike sandstorm, it's docker based and has a very good app store - https://cloudron.io/appstore.html.

11
bigbugbag 4 days ago 2 replies      
Still looks a lot like YunoHost[1] and Cozy Cloud[2], which for some reason I find more attractive than Sandstorm. Then we have a rather new contender: arkOS[3].

[1]: https://yunohost.org/
[2]: https://cozy.io/
[3]: https://arkos.io/

12
ams6110 4 days ago 5 replies      
OK so this looks awesome and I had half a dozen ideas for things I'd like to do with it in about 30 seconds. Is anyone using it? Would love to hear from some experienced commentors.
13
middleclick 4 days ago 2 replies      
I am using Sandstorm and I love it. Nothing beats the power of hosting your own private cloud. However, I really wish there was a way around the wildcard certificate requirement. One of these days I am going to shell out $100 for a wildcard cert or I can just hope Let's Encrypt will support it!
14
theseoafs 3 days ago 1 reply      
Speaking of which, what is the status of Cap'n Proto? I was looking at using it for a side project recently, but it doesn't look great on the surface: there haven't been any updates on the site since March of last year, and that post was to address a security vulnerability.
15
Santosh83 1 day ago 1 reply      
I realise I'm coming late to this thread, but just a quick question. But before that, a big hats off to everyone involved with Sandstorm! Making a complex platform point & click is a huge achievement.

Coming to my question, does Sandstorm have an HTTP server in its app platform? While I'm excited about other apps like file sharing/storage, what I badly want to do right now is to be able to host my site on my own machine using dynamic DNS. Of course I can manually do it by installing Apache, configuring it, updating a dyn DNS provider and so on. But if Sandstorm offers a point & click way to do that, then I'm hooked right now. :-)

16
slewis 4 days ago 2 replies      
Can you explain the "Fine-grained Object Containers" concept a little more?

For example, how is this enforced: "JavaScript running in the users browser can talk only to the document container for which it was loaded, no others."

Do apps need to be SandStorm aware? Thanks!

17
Hupriene 3 days ago 2 replies      
Is there a way to run e.g. an email server on Sandstorm?

Last time I looked it seemed like that would never be supported, because it would violate the Sandstorm business model, which, as I understand it, appears to be selling very cheap hosting of grains, made possible because only disk resources are used when a grain is not currently in use.

This would seem to preclude hosting services that need to be running all the time in the background.

18
JD557 3 days ago 1 reply      
All the examples I see on your demo are web apps (and the documentation seems to be focused on web apps as well).

How well are applications without a web-interface supported?

For example, could I easily package and launch something like a TeamSpeak server on Sandstorm, or would I need to add some mock web server to keep the real server online?

19
conradk 3 days ago 1 reply      
How do app updates work with Sandstorm? For example, say I host an instance of the Ghost blog and it updates from version 1.2 to version 2.0 (which probably means a breaking database schema change or something). How would Sandstorm go about updating it?
20
xg15 3 days ago 1 reply      
Very cool! Last time I checked Sandstorm, the powerbox system was still in development. Now it seems finished, according to the docs. Is there more in-depth documentation about the UI flows and APIs?
21
andrewstuart2 4 days ago 5 replies      
OS seems a little presumptuous. As does the majority of the "how it works" page. Making wildly ridiculous claims that amount to "microservices are silly" (my paraphrase) when really they've only managed to successfully sandbox applications. Much like Android with its "a user for every app by default."

> Sandstorm takes a wildly different approach [than containerized services]: It containerizes objects, where an object is primarily defined by data

Or... You've containerized the data access service and enforced a per-application sandbox. Plenty of precedent there.

> Microservices are seemingly designed to match the developers mental model of their system. Often, services are literally divided by the team who maintains them.

Good Lord, I sure hope that's not true. Maybe if the non-technical manager designed the system.

> In the Sandstorm model, all of the components (from web server to database) needed to handle a particular grain run on the same machine.

So your container scheduler can optimize for I/O. Again not that far from existing cloud schedulers.

I'm not saying this isn't a powerful model or platform. I am saying that I'm very disinclined to want to work on such a platform, purely because I'm worried its developers buy into the "everybody else is way off base" attitude, when really they're only doing what many, many successful architectures have done before them.

22
SNvD7vEJ 3 days ago 2 replies      
How do I report a bug in an app?

The email app Roundcube does not seem to support unicode characters in the senders name in received emails.

So the displayed name in received emails gets garbled if it contains e.g. a Swedish character like 'å', 'ä' or 'ö'.

23
brudgers 4 days ago 0 replies      
Scott Hanselman's interview with Kenton Varda about Sandstorm [2015]:

http://hanselminutes.com/497/your-personal-cloud-platform-wi...

24
ipfsuser 4 days ago 0 replies      
Any plans to support saving/loading grains to IPFS?
25
jacke 3 days ago 1 reply      
I've always wondered about such projects: you have the resources, and you have made analogues of paid services, and that's cool, but what's next? I think you can offer people more than just a free Slack. Do you have any plan to integrate it with commercial products?
26
aBoss 3 days ago 1 reply      
I really like it, but it seems it can't work on old Linux server kernels. PLEASE fix that.

Thank you

27
kevinSuttle 3 days ago 0 replies      
Looks pretty solid. Is there a comparison to other similar products like OwnCloud?
28
erikb 3 days ago 0 replies      
Why is it an operating system? Looks like ownCloud, not Linux, to me.
29
known 3 days ago 0 replies      
I'll try;
30
ktamura 4 days ago 2 replies      
Previously on HN: https://news.ycombinator.com/item?id=8972066

As a marketer, I am all for launches, re-launches and marketing, but it's unclear to me what's different now compared to a year ago.

31
rosstex 4 days ago 1 reply      
I forget, what's the name of this OS?
32
lechevalierd3on 4 days ago 1 reply      
Whoa, I just realized for the first time that this company is named after Darude's one and only hit.
33
jondubois 3 days ago 1 reply      
It supports too many apps/frameworks - it's too ambitious and not focused enough. Also, not all apps are the same when it comes to scalability - that's a false assumption which pretty much all PaaS providers make.

There is more to scaling an app than just adding more machines - services need to be scaled independently of each other based on usage; some services should run on their own dedicated hosts; some services require persistent storage; and there need to be rules to define how services can talk to each other, recover from failures, etc...

Developers NEED some way to specify HOW their services should scale - It can't just be fully 'automatic' - That's not flexible enough.

If you look at typical Kubernetes config files, they are quite long/complex; this is because there are a LOT of factors/variables involved when it comes to running services inside distributed containers and you can't just abstract away from those details without seriously compromising flexibility.

5
Serverless Architectures martinfowler.com
453 points by runesoerensen  1 day ago   197 comments top 39
1
mpdehaan2 1 day ago 7 replies      
One of the bigger problems with serverless architecture (beyond a catastrophic lack of good debugging and development tools) is the idea of managing multiple users, working on multiple code branches, and all needing environments that somewhat closely mirror production. This leaves serverless as a decent way to hook an event callback to some AWS event (new file uploaded to S3, etc) but IMHO not anything that approximates any kind of business logic - or anything that will need to be iterated on by more than one developer. I feel it is a massive reach to market this to anyone at this point, and if this catches on, it would really make me hate programming compared to what local development in VMs feels like - where I have much greater tools. It's inefficient both for the developer and at runtime.
2
themihai 1 day ago 12 replies      
The vendor lock-in alone is enough to make BaaS dead on arrival. Some seem to lock you not only into specific platform APIs but also into a single programming language (i.e. JavaScript), as if it weren't bad enough to have a single language on the client.
3
zippy786 1 day ago 1 reply      
I truly hope that people stop misleading the masses with buzzwords. We already have too many of them being used for clickbait. I don't get why a simple client-server architecture would not suffice instead of "serverless". Software engineering and architecture design used to be cool, with useful design patterns. Now it's full of tricks/buzzwords that promise users silver bullets for managing code and servers. In reality, no such silver bullet exists, and such tricks/architectures fail every day. I've noticed that martinfowler.com is one such place that seems to offer the empty promise of a silver bullet!
4
tomc1985 1 day ago 1 reply      
We used to have serverless architecture... it was called standalone desktop apps! No servers, no backend, no nothing, just you and the code. It was great!
5
skewart 1 day ago 1 reply      
I'm cautiously optimistic that the vendor lock-in will fade away over time. Hopefully open source frameworks can provide a jQuery-like abstraction layer over all of the vendor-specific implementation details. Of course, there are lots of challenges when you start getting into deeper architectures - queues triggering layers of lambdas interacting with various caches and data stores. But I think we'll see best practices emerge that help people keep things simple and flexible.

For testing it should be possible to run everything in a mock vendor system on your local machine - not that such a system exists today, but theoretically it could.

Overall, my sense is that Serverless architectures will never be useful for everyone. They'll always be better for smaller, simpler systems that don't see tons of traffic. The Serverless community should focus on these use cases. The thing is, there are tons and tons of apps that are over served by all the flexibility EC2 provides. And there is another set of unborn apps that haven't been created only because the barrier to setting up and managing the backend was a _little_ too high. I'm really excited to see serverless bring these apps to life.

6
rwallace 16 hours ago 0 replies      
It's sometimes okay to use inexact or uninformative terminology, but it becomes a problem when words are used to mean their exact opposite. So if you are going to talk about serverful programs - those which depend on servers and can't operate without them - that's fine, but please use some term other than 'serverless'.
7
spotman 20 hours ago 0 replies      
Would have liked to see some real cost analysis, other than it should save you money, because that is only true if scale is small to medium.

That's the thing: I guess the demographic that Lambda is going for will save money, but along with the mentioned vendor lock-in, when you really need to have something on and processing requests non-stop, the cost savings really evaporate.

It's weird to me that the serverless thing catches on when it seems best suited for projects in their infancy without need for 24/7 request processing, or things out of the critical path like image resizing and things of this nature. But entire apps? Feels very much like going back in time to shared hosting.

I could see it saving money until about 1M requests a day and then a sharp drop off where you are hemorrhaging cash after that...

With modern auto scaling systems that AWS and GCE provide, I bet most shops don't stand to save any more money than with a normal modern-day refactor.

At the end of the day this leaves you more rope to hang yourself with. It makes it easier to run slow or poorly thought out code. 1000 processes might be cool but is never as cheap as 2 working correctly that accomplish the same thing, as an extreme but actual example of a situation I helped unwind with GAE, which was serverless before the buzzword.
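A back-of-the-envelope sketch of the crossover described above. The prices are assumptions (roughly AWS's 2016 public list prices: $0.20 per million Lambda requests plus $0.00001667 per GB-second, and about $0.10/hour for a mid-size on-demand instance; the free tier is ignored), so plug in your own numbers before drawing conclusions:

```python
# Hedged sketch: compare a Lambda-style bill with a single always-on instance.
REQ_PRICE = 0.20 / 1e6      # $ per request (assumed)
GB_SECOND = 0.00001667      # $ per GB-second of execution (assumed)
EC2_HOURLY = 0.10           # $ per hour for one on-demand instance (assumed)

def lambda_monthly(requests_per_day, avg_ms=200, mem_gb=0.5):
    reqs = requests_per_day * 30
    return reqs * REQ_PRICE + reqs * (avg_ms / 1000.0) * mem_gb * GB_SECOND

def ec2_monthly(instances=1):
    return instances * EC2_HOURLY * 24 * 30

for rpd in (10000, 100000, 1000000, 10000000):
    print("%8d req/day   lambda $%8.2f   ec2 $%6.2f"
          % (rpd, lambda_monthly(rpd), ec2_monthly()))
```

With these particular assumptions the lines cross somewhere between one and ten million requests a day, which is at least in the same ballpark as the estimate above.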

8
tschellenbach 1 day ago 2 replies      
There's a big difference between Paas, where you hand over control of your backend. Compared to Faas, where you just outsource a small part of your infrastructure.

FaaS solutions like algolia, getsentry, getstream.io. mapbox, layer, keen.io, twilio, stripe etc only run a small part of your application. They scale well, are more reliable and often more cost-effective than an in-house solution. If you're ever not happy with them it's easy to switch to an alternative. If Algolia no longer works for you, simply switch to Elastic. If you're not happy with Mapbox, use google maps. No longer like, getstream.io, use Stream-Framework. It's relatively easy to change since you're not handing over your entire backend.

Also see my post on HighScalability about this: http://highscalability.com/blog/2014/12/22/scalability-as-a-...

9
eloycoto 1 day ago 0 replies      
I'm an early adopter of serverless applications. A few weeks ago a tool was released that makes managing our Amazon Lambda applications even easier.

https://github.com/jorgebastida/gordon

A few examples can be found here, and it's a good starting point if you're using AWS Lambda:

https://github.com/jorgebastida/gordon/tree/master/examples

10
kuharich 1 day ago 0 replies      
11
be_erik 1 day ago 0 replies      
We're currently exploring lambda for certain applications as well. My coworker wrote up a nice little post about it:

https://product.reverb.com/2016/07/16/deploying-lambdas-with...

12
rhinoceraptor 1 day ago 5 replies      
Aren't 'serverless' PaaSes just a reinvention of PHP hosting?
13
aslom 12 hours ago 0 replies      
The main point of serverless is cost: it is not just "reduced operational cost" but an orders-of-magnitude difference compared to building cloud solutions that can scale to the same level, where you pay for cloud infrastructure (even when idle, at minimum something must be running), monitoring, and the ongoing people effort involved (the real cost).

Compare that to working at the level of abstraction of an "entity" (a function, or something bigger) and having it run when needed and scale as needed.

14
OldSchoolJohnny 9 hours ago 0 replies      
"Serverless" WTF? I thought it would be a cool article on peer to peer mobile apps that don't rely on a backend server, something I think will be very popular in future for a variety of reasons.

Please, let's not accept this redefinition of a fundamental and well known term.

15
0xmohit 1 day ago 1 reply      
Previous (recent) discussion at https://news.ycombinator.com/item?id=11921208
16
happyslobro 1 day ago 2 replies      
For my first serverless project, I built a service that manages an advertising account by analysing ads and optimising the spend spread for a configured goal. I was surprised to find that one of the most difficult aspects was throttling outgoing API requests. Basically, I needed to avoid abusing a 3rd-party service by hammering it with parallel or rapid-fire requests.

I discovered all kinds of weird solutions:

- Push a "please wait" message onto an SQS queue, and set a Cloud watch alarm to fire when the queue length was >= 1 for at least one second. Have a variety of lambdas respond to the alert by checking if the have any work to do, and then racing to pop the queue and do one operation. I think I needed SNS in there too, for some reason. Fanout?

- Build a town clock on an EC2 and have it post "admit one" tokens to SNS. Nice and simple, but I was evaluating serverless, and adding a server felt like defeat. Might as well just run the whole thing on that EC2.

- Have one lambda pop request descriptions from a queue, and sleep for one second after sending each one (see the sketch below). The request responses would have been pushed onto another queue. All of the other lambdas could just handle the events they cared about as quickly as data became available. Sleeping in a lambda just felt wrong, but for one second, it probably would have been fine.
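
A minimal sketch of that third approach in Python on Lambda, just to make it concrete - the queue URL is hypothetical, and send_third_party_request stands in for whatever outgoing call actually needs throttling:

  import time
  import boto3

  sqs = boto3.client("sqs")
  QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/outgoing-requests"  # hypothetical

  def handler(event, context):
      # Drain a handful of messages per invocation, pacing to roughly one request per second.
      for _ in range(10):
          resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=1)
          for msg in resp.get("Messages", []):
              send_third_party_request(msg["Body"])  # placeholder for the real outgoing call
              sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
              time.sleep(1)  # the crude throttle: never hammer the third-party API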

In the end, I was still experimenting and exploring when my estimate loomed near, so I just wrapped up all of the application code and piped it all together with delays in loops that made API requests. Deployed it to Heroku in a hurry with minimal fuss.

What I learned:

- There are bad use cases for serverless, and I had started on one (SLS doesn't wait on no one)

- Serverless does encourage a clean architecture, from a code perspective: I was able to quickly transform it into a completely different type of application.

- AWS's services have lots of little surprises in their feature set: SQS doesn't fire any kind of event. SNS can't schedule events in the future (maybe I should have expected that one). CloudWatch events can only trigger a single lambda.

- There is a need for a town clock / scheduler as a service in AWS. How many people have built that themselves?

17
idoco 1 day ago 0 replies      
Sort of related: I've implemented a GitHub Pages based serverless app demo that uses the GitHub API to execute git operations from the browser. Though slightly weird, I really enjoyed this experiment, and think there might be something to this concept.

https://github.com/idoco/GitMap

18
Sanddancer 1 day ago 2 replies      
One of the big issues here, which is only very slightly glossed over, is the security you give up. The only sort of security filtering you get is AWS' WAF, which is considerably weaker than a firewall like mod_security with a default ruleset, or even apache's .htaccess. For inbound filtering, you're limited to a tiny ruleset with only a few conditions, and outbound filtering doesn't exist at all.

This lack of firewalling continues up and down the stack. As such, it's a lot harder to create rules regarding any API calls you make to third party services, harder to audit how your app interacts with any datastores, and generally an administrative nightmare. It may be useful for some apps, but just feels like a nightmare to maintain for any decently sized setup.

19
glic3rinu 1 day ago 0 replies      
I expected the article to be about the goodness of some peer-to-peer architecture that doesn't require central servers, not a twisted definition of cloud computing.
20
klodolph 1 day ago 1 reply      
I recently evaluated these, and I'm dying to use something like this. I would really love to be able to do this with sandboxed apps written in arbitrary languages like Go, Haskell, Rust, or even C++. Ideally, I wouldn't be managing docker containers or VMs to get certain jobs done.

Unfortunately, it's limited to JavaScript, Python, and Java for now. Google Cloud Functions is similarly limited to JavaScript.

I know that these languages were chosen because they can be easily sandboxed, but it would be nice to support something more generic without having to create docker containers for everything.

21
bitwize 1 day ago 1 reply      
So apparently "serverless" means you don't have to requisition a cloud instance to run the service.

I remember this movie the first time around, when it was called "virtual hosting". What a revolution -- transitioning from needing a Unix box running Apache to run your PHP e-commerce site to having your host take care of that for you -- and everybody else.

Just because you're doing something old "in the cloud" now doesn't make it a new thing. Just like you can't append "...with a computer" to an existing method and get something patentable.

22
gshx 1 day ago 2 replies      
One big challenge is the expense that connection pooling otherwise avoids. Without the hack of keeping the backing containers alive, tearing them down and restarting them is expensive: it wipes out all hot-path optimizations made by JIT compilers, besides forcing you to re-establish connections and pay the handshake price again. Apart from the simplest of use cases, IMHO this is not an efficient model at this time if sustained performance and low latency are of interest.
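
The "hack" being referred to usually looks something like the sketch below: anything created at module scope survives between invocations for as long as the container stays warm, so expensive state gets hoisted out of the handler. get_db_connection and run_query here are hypothetical stand-ins for a real driver:

  # Hypothetical stand-ins for a real database driver.
  def get_db_connection():
      return object()

  def run_query(conn, query):
      return {"connection_id": id(conn), "query": query}

  # Module-level state is created once per container, not once per invocation,
  # so a warm container reuses the connection and the handshake is paid only on cold start.
  _connection = None

  def handler(event, context):
      global _connection
      if _connection is None:
          _connection = get_db_connection()
      return run_query(_connection, event.get("query"))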
23
hackdroid 21 hours ago 0 replies      
There is Google's Apps Script (based on JavaScript), which could also be termed serverless. It's ingenious for individual purposes: I have used it as a database (with Google Spreadsheets), for scraping and saving thousands of HTML/XML files, and for deploying and demoing Bootstrap websites. It also has a time limit of 5 minutes 30 seconds, which can be overcome by controlling triggers or by using patt0's CBL: http://patt0.blogspot.in/2014/08/continuous-batch-library-up....
24
SNvD7vEJ 1 day ago 11 replies      
What a ridiculous and strange definition this is.

It's an architecture that relies on multiple, supposedly distributed services, all of which are hosted on servers...

Why not just call it "Multiserver" instead?

If they mean "containerless servers" or "microservices", then why don't they say so?

If they mean "distributed servers" why not call it that.

If they mean that the client relies on multiple services (each hosted on different servers) why not just call it a thick/fat/stateful client?

Even a database is a remote service (if not embedded), that is run on some sort of server.

Calling anything serverless is just ridiculous as long as it depends on services on the network.

This smells really bad of just another attempt at marketing a pointless definition just to get more business.

25
buckbova 1 day ago 0 replies      
I wanted a good tutorial/walkthrough on this topic.

Has anyone gone through https://pragprog.com/book/brapps/serverless-single-page-apps?

The example material looks good. Overall it could be a bit short.

26
eldavido 1 day ago 1 reply      
Aren't we kind of already there (serverless)?

A lot of web frameworks enforce the shared-not-that-much (not quite "shared nothing", but close) style of programming where every HTTP request is essentially just an "event" that hooks into a controller-handler? And if you want anything to stick around from one request to the next, you keep it in memcache/db/BaaS/third-party service, not your own process?

I don't think this is as radical a shift as many here might think, especially given the move to containers is already well underway.

27
zpallin 1 day ago 0 replies      
I think a major problem with Serverless architecture, as much as it is awesome, is that to manage it you would need to be already familiar with the underlying systems that provide it. Hence, it would be imperative that you already have worked with the architecture you are planning on replacing.

Unless there is some elegant factor to Serverless that I am missing. I just can't shake the feeling that at the end of the day this is just a sales tactic.

Nevertheless, I opt to use serverless BaaS solutions whenever I can afford to.

28
cyberpanther 1 day ago 1 reply      
Serverless sounds like CGI evolved past needing a server anymore.

Oh yeah and remember spawning CGI processes for every request never scaled well. :-)

29
urza 18 hours ago 0 replies      
Also see "Unhosted web apps": https://unhosted.org/
30
srcreigh 1 day ago 1 reply      
One issue people are talking about is vendor lock-in. I would like to note that this drawback is only valid for non-prototype software. For prototypes, hackathons, or "one time use" websites, these BaaS tools (such as Firebase) are essential.

Here is a case study: Last year I was working on an SMS app, QKSMS [1]. We offered a premium version for $2. We did a promotion on reddit where we gave away 10k free premium versions. So, take a second and ask yourself: how quickly do you think you could implement a promo code system + a website for distributing codes for one time use?

We did it in about 5 hours. It cost about $100. The website was (and still is) statically hosted on GitHub. [2] The website source code is ~22 lines of JavaScript. It pulled 12 promo codes from Firebase; and when a promo code was removed, it would (in realtime!) collapse it from the list and display a new promo code at the bottom.

The mobile app code was also very simple. First, check if the promo code is available (one API call); if so, enable premium and remove the code (another API call).

The reason it cost about $100 is that we had too many concurrent users: the Firebase free plan allows only 50 concurrent users, and at our peak we were seeing ~500. Since the promo was only for a day, we bought an expensive plan that got pro-rated for just that day.

It was an extremely successful promotion. [3] The final result was very interactive. It was amazing to watch the codes disappear in real-time. It was like a game: you had to be fast to enter the code, because the codes were being used so quickly.

All in all, I believe we made more money from people buying it anyway (despite the promotion) than it cost to serve it. And keep in mind we built the entire system in about 5 hours. And I'm not even a web developer. An actual web developer could have implemented this in an hour or two.

For reference here is the entire JavaScript powering the promo code website:

  var all_codes = new Firebase("https://qksms-promo.firebaseio.com/public_codes");

  all_codes.on('value', function(data){
    $('#remaining').text(data.numChildren());
    if (data.numChildren() === 0) {
      $('#status').text('No more codes!');
    }
  })

  all_codes.orderByValue().limitToFirst(12).on('child_added', function(data){
    $('#status').remove();
    var str = '<div class="code" id="';
    str = str.concat(data.key());
    str = str.concat('">');
    str = str.concat(data.key());
    str = str.concat('</div>');
    $('#wrapper').append(str);
    $('#'.concat(data.key())).hide().fadeIn(300);
  });

  all_codes.orderByValue().limitToFirst(12).on('child_removed', function(data){
    $('#'.concat(data.key())).slideUp(300, function(){
      $(this).remove();
    });
  });
[1] https://github.com/moezbhatti/qksms

[2] http://qklabs.com/qksms-promo/

[3] https://www.reddit.com/r/Android/comments/36eix7/dev_a_year_...

31
rainhacker 1 day ago 2 replies      
If successful, this could lead to a reduction in the developers needed at companies that use, say, AWS Lambda.
32
jokoon 1 day ago 0 replies      
It doesn't seem the author knows how network programming works. This is all high-level abstraction and diagrams, but no simple comparison of the pros/cons, the constraints, etc.
33
hopfog 1 day ago 1 reply      
I'm using the following setup for one of my sites:

Client -> post -> API Gateway -> AWS Lambda -> upload JSON to S3

Client -> get -> S3

Works pretty well!
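
A minimal sketch of the Lambda half of that pipeline, assuming Python with boto3 behind an API Gateway proxy integration; the bucket name and key are made up for illustration:

  import json
  import boto3

  s3 = boto3.client("s3")
  BUCKET = "my-site-data"  # hypothetical bucket the client later GETs from

  def handler(event, context):
      # API Gateway (proxy integration) delivers the POST body as a string.
      payload = json.loads(event.get("body") or "{}")
      s3.put_object(
          Bucket=BUCKET,
          Key="latest.json",
          Body=json.dumps(payload),
          ContentType="application/json",
      )
      return {"statusCode": 200, "body": json.dumps({"ok": True})}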

34
naveen99 1 day ago 0 replies      
What if the web had the equivalent of 900 numbers? Something like http900://foo.bar.com
35
sinzone 1 day ago 1 reply      
If you're looking for an open-source API Gateway (less lock-in) with Lambda Integration then Kong (https://github.com/Mashape/kong) could be a useful complementary tool for your serverless architecture.
36
EGreg 1 day ago 1 reply      
Why is it called serverless if it uses Amazon Lambda instances?
37
up_and_up 1 day ago 1 reply      
How is this different from something like Heroku and their add-on ecosystem?
38
ianamartin 1 day ago 1 reply      
The main problem I see here is the problem we run into with every hot new abstraction of a thing that some people find troublesome as developers.

Someone already mentioned virtual hosting. I'd like to bring up ORMs. There's this idea that you can forget about the datastore--abstract it away, manage it in code, whatever. You don't have to learn about databases, table structures, or SQL if you don't want to. Just think about your models and don't worry about the rest.

I'm not against ORMS and find them both amazing and useful in certain ways. What I am against are systems developed entirely by developers who refuse to think about these things. Especially when I have to come into the situation after they have left and do something genuinely unforeseeable, like provide reporting and analytics from the back end, integrate with a third-party system, or--god forbid--add new functionality.

What you're left with is a dysfunctional, slow mess that's incredibly difficult to work with, a data layer that makes no sense unless you're working with that particular ORM, and no ability to tell exactly where in a quarter-million lines of code the exact logic you want to fix or modify actually is.

Yes, I'm quite certain that no one would recommend that anyone use serverless architecture that way. It's the wrong tool for the job. Just like ORMs are the wrong tool when performance is critical and you need to manage complexity for 30 services from the same data layer.

But someone is going to do it. In reality, probably lots of people will. Some CTOs will buy into the hype and hire teams of consultants to do it or hire their actual dev team full of serverless experts.

I'd echo what someone else said to the effect of abstractions are great so long as you understand what you're abstracting and know when it's a good idea to do so.

Viewed as a tool for accomplishing a small thing pretty quickly, it sounds cheap, fun, and exciting.

Viewed as a new way of taking shortcuts around quality infrastructure that's needed to support large, complex systems, I feel like it's probably a bad idea.

Given my attitudes above, it's probably not a surprise that I find the description a bit sketchy. I think calling it Serverless is a misnomer for all of the many good reasons people have pointed out in this thread. But I also question calling it architecture.

What I expect we will see in fairly short order is a collection of libraries or frameworks that resemble PHP not only in function (a service that pops in and out of existence on demand) but also in a language-like form that evolves rather than being intentionally built--and evolves towards ease of beginner use rather than being guided by a coherent architecture.

And, in the grand scheme of things, that's fine with me. I just hope I never inherit a system like that.

39
darkboltyoutube 21 hours ago 0 replies      
6
I built a fusion reactor in my bedroom AMA reddit.com
425 points by lsllc  1 day ago   120 comments top 23
1
sthatipamala 1 day ago 15 replies      
This kid is awesome but this particular comment really made me sad:

"Spending 3+ hours a day on a project during junior and senior year did not help my grades. My counselor told me that I wouldn't get into the top colleges because of this reason. I believed her and didn't apply to my dream colleges."

2
kriro 19 hours ago 0 replies      
I have to wonder if college is actually the right choice altogether. The brothers basically learned everything on their own and, most importantly, actually shipped. Seems like a solid investment for some tech company to just offer them a job doing whatever research work they want for a year or two and see how it goes. Supposedly there's a talent shortage in tech... here's an opportunity to sign a couple of HS students that built a reactor (and some other cool things) for kicks. I mean, yeah, sure, they didn't build a fancy webapp, but surely people at Tesla, Google, or whoever read Reddit.

The pros for college are access to lab stuff and some smart people, plus what you learn. I believe they can do the learning on their own just fine, and access to cool tech toys is also readily available at most tech companies... and there are also a lot of smart people around.

3
tristanj 1 day ago 1 reply      
A video of their reactor setup, including a part where they prep and run it.

https://www.youtube.com/watch?v=92M5qcjDkaU

4
ChuckMcM 1 day ago 1 reply      
I read these things and I smile because the stories will help others step outside their self imposed limitations and do things. When I was growing up I had a number of people who allowed me to dream big things and try them out when others would consider them too much work or too crazy. I got there by reading stories about some of the earliest inventors of our time and their home spun laboratories.

There is a lot you can do with a supportive family.

5
eggy 23 hours ago 2 replies      
I made a lengthy comment below about my son, but now after watching the YouTube video, I am sad I do not have the resources to let him have such equipment.

My son pulls off some cool experiments all on his own (I stay out of it apart from the phone call consultation, or YouTube video call) with salvage and other workarounds, with the money he earns and my contributions.

These brothers have a freakin' mini-Tony-Stark lab in their house! Good for them and their parents. I like to see money spent this way rather than on traveling team sports and uniforms. My take is that sciency types, aka nerds, indulge more in solitary sports - rock climbing, skating, biking - if they have time away from building a frigging nuclear fusion reactor in the bedroom!

6
yeahwhatever10 1 day ago 2 replies      
Stuff like this always reminds me of this kid (adult now): https://en.wikipedia.org/wiki/Taylor_Wilson
7
jaytaylor 1 day ago 1 reply      
This is pretty neat, and I easily located plans for a DIY fusion reactor to build at home [0]. I'm curious if OP's build differs from these plans, and if so, how/why.

Digging into the wikipedia article I found this interesting/surprising:

Recent Developments [1]:

"Most recently, the fusor has gained popularity among amateurs, who choose them as home projects due to their relatively low space, money, and power requirements. An online community of "fusioneers", The Open Source Fusor Research Consortium, or Fusor.net, is dedicated to reporting developments in the world of fusors and aiding other amateurs in their projects. The site includes forums, articles and papers done on the fusor, including Farnsworth's original patent, as well as Hirsch's patent of his version of the invention."

Apparently there's an entire online community of amateur home nuclear reactor builders [2].

[0] http://www.instructables.com/id/Build-A-Fusion-Reactor/

[1] https://en.wikipedia.org/wiki/Fusor#Recent_developments

[2] http://www.fusor.net/

8
chrisbennet 1 day ago 0 replies      
Reminds me of Phineas and Ferb:

"Aren't you a little young to be building a nuclear reactor?"

"Yes, yes we are."

9
eggy 23 hours ago 3 replies      
Amazing work by this young man! My children will love this.

My son and daughter are both doing well, but my son loves to experiment outside of school.

I have been encouraging my son to experiment and learn, and he has taken it to a higher level than I had imagined. I have bought him a telescope, and a 3D printer over the years. He works for his own equipment and supplies too. More each year.

Some of his experiments or demonstrations are recreated and slightly modified YouTube projects, and others are very original. I always tell him that being able to duplicate somebody else's work is good practice to learn proper rigor and familiarity with equipment and practices.

He is now a senior in high school, maintaining an A+ (>= 95) average in honor and AP classes. He won best Chemistry project in the whole county.

His biggest complaint is the amount of homework schools still dish out, several hours (3 to 4) each night. It takes away from his self-teaching and experimentation. Disclaimer: I am not too keen on public or institutionalized education. I think you learn more by doing projects that tie-in various disciplines, and accordingly I dropped out of college in the 80s to follow my passions. I've done well for myself considering I grew up below the poverty line when I was younger.

I was smelting metals in the 90s in my rural backyard without the internet or YouTube, and playing with TV cathode ray tubes in the late 70s. The former I could not have done where I grew up in Brooklyn, the latter was good to do most anywhere while tube sets were still around ;) I once tied the negative lead of a 9 volt DC transformer to my then sick Mom's big toe, and the other to a potted fern with copiously-watered soil, and then touched her same top foot with a connecting lead from the plant! Yes, stupid, but I was 10, and I had just read Frankenstein. Plus my parents were very positive about my inquisitive nature and doings. Mom passed in her older years, but never forgot the incident in a proud way.

My son is thinking of going to Germany for school due to the quality of education, and the fact that it is now free (almost, minus taxes and other expenses) for foreigners as well as citizens. He is also considering just starting on his own after high school, since he has acquired my distaste for institutionalized learning. The short of this bias: age segregation, broken curriculum, non-integrated areas of study, homework and memorization over problem solving, etc. You can't even do certain experiments in school because they are considered too dangerous, or inappropriate. I do not try to push him too much either way. My ex-wife is more conservative, and is hoping he will go to university.

Time will tell...

10
stanlarroque 1 day ago 0 replies      
It is so great to see young people accomplish something like this, compared to the world news lately.
11
thefastlane 1 day ago 1 reply      
"living things should not be within 35 feet as x-ray radiation is quite high"

hopefully this is a house in a low-density suburb and not an apartment. amazing achievement, but at the same time very dangerous to be generating x-rays out of your bedroom.

12
frik 22 hours ago 1 reply      
Let's hope there is some supervision.

There have been kids who built an atomic reactor in their parents' house and garden shed. Needless to say, it's nowadays a Superfund site and some people suffered poisoning from radioactive materials. I think there is even a documentary about the boy scout guy who built an atomic reactor, and afaik it was on HN before.

[disclaimer: While the reaction vacuum chamber seen in the video can be found at universities and is certainly less harmful than an open atomic reactor (see the real boy scout guy story above), it nevertheless involves substances that have to be handled with great care]

Edit: the boy scout guy, David Hahn, built a radioactive reactor at home when he was 17: https://en.wikipedia.org/wiki/David_Hahn

13
sbierwagen 1 day ago 0 replies      
He made it into the Neutron Club, people who have generated neutrons from fusion reactions: http://www.fusor.net/board/viewtopic.php?f=54&t=13
14
archagon 1 day ago 0 replies      
Wow! I wonder what exact steps are involved in getting from "I want to build a fusion reactor" to actually having a working prototype in your room. He mentions fusor.net; am I right in assuming that there are entire communities dedicated to building this kind of stuff? What is their history? How did they get started? Were they around before the internet? Fascinating!
15
b34r 13 hours ago 0 replies      
I'm mostly concerned that he's going to get sick from improper shielding or something.
16
pitchups 22 hours ago 1 reply      
The link is returning a 503 error: "all of our servers are busy right now. please try again in a minute"

Looks like the Fusion Reactor story caused a meltdown of Reddit's servers :)

17
amelius 1 day ago 1 reply      
Who backed these guys financially? And how much did they spend?
18
cwkoss 1 day ago 0 replies      
Amazing accomplishment from such a young team.
19
blazespin 21 hours ago 0 replies      
Creating a neutron generator is pretty trivial. Get a pyroelectric crystal and apply heat: a pyroelectric fusion neutron generator.
20
fapjacks 19 hours ago 0 replies      
I wish I had grown up in an environment flush with resources for this kind of thing.
21
Kinnard 1 day ago 1 reply      
Maybe add a "Show HN: "
22
clarkrinker 1 day ago 2 replies      
In before "is it energy neutral?"
23
stevecalifornia 1 day ago 2 replies      
If they weren't doing this in a suburb I'd say 'Congrats'-- but to knowingly be generating x-rays in a residential neighborhood is really, really selfish and disrespectful to neighbors.
7
SoftBank Group Nears Deal to Buy ARM Holdings nytimes.com
354 points by anyfoo  2 days ago   98 comments top 17
1
ktamura 2 days ago 9 replies      
This is the opportunistic genius of Masayoshi "Masa" Son as one of the most successful technology investors of our time. He's taking advantage of the weak GBP thanks to the whole Brexit fiasco. As a major carrier in Japan and an owner of Sprint, this move makes total sense.
2
jimmytidey 1 day ago 0 replies      
Apparently ARM's founder isn't particularly into it:

"ARM is the proudest achievement of my life. The proposed sale to SoftBank is a sad day for me and for technology in Britain."

https://twitter.com/hermannhauser/status/755008815553273858

3
nashashmi 2 days ago 2 replies      
I once read an article where the CEO of ARM reasoned against raising prices because its partners would then have less healthy businesses. I never forgot that piece of wisdom.

But now with this acquisition, is there any room for such generous mentality?

4
dbcooper 2 days ago 2 replies      
The FT claims that the deal has been done for £23.4bn.

https://next.ft.com/content/0cc23483-7681-3018-8e9d-80c2dd77...

>Japan's SoftBank has agreed to acquire Arm Holdings, the UK's preeminent technology company, for £23.4bn in an enormous bet by the Japanese telecoms group that the smartphone chip designer will make it a leader in one of the next big tech markets, the internet of things.

Apparently that is a 40% premium on market cap. [1]

[1] http://www.bbc.com/news/business-36822272

5
pavlov 2 days ago 1 reply      
Maybe this explains why SoftBank recently chose to sell its controlling stake of Supercell for about $7.3B USD -- they needed the cash for something bigger.

SoftBank had purchased Supercell for about $2B only a few years ago, so it was a genius investment. Will be interesting to see if they can turn ARM into another one of those.

6
mrlambchop 1 day ago 1 reply      
Mixed feelings on this - huge congrats to ARM friends but I feel it's 1 act too short for an exit like this. I'd love to have seen a British owned tech company higher up the Global 2000 before transferring ownership.
7
jhou2 2 days ago 2 replies      
I was surprised ARM hasn't been bought by a larger conglomerate yet. SoftBank is way ahead of the game.
8
plainOldText 1 day ago 2 replies      
Apparently, $32 billion is how much they'll pay for this deal. [1]

[1] http://www.wsj.com/articles/softbank-agrees-to-buy-arm-holdi...

9
unexistance 1 day ago 1 reply      
I wonder what SoftBank can add to ARM technologically?

Obviously, besides swimming in dollars.

10
tdicola 2 days ago 1 reply      
I'm kind of surprised Apple didn't swoop in and buy ARM long ago. I guess as others mentioned that might mean all the Apple competitors using ARM chips would flee and leave them with a lot less partners.
11
ausjke 2 days ago 0 replies      
Isn't he already bankrupt due to the "wrong" acquisition of Sprint? This is hard to believe. I did notice he was preparing cash for something (selling Supercell and Alibaba shares), but I thought that cash was being used to cover Sprint's losses. Gosh, this is really, really hard to believe.
12
mtgx 1 day ago 0 replies      
This could be great for RISC-V's prospects, especially if SoftBank decides to be more selective about which customers it licenses to, or if it increases prices, and so on - basically anything SoftBank might do in the future that pisses customers off could be a win for RISC-V.

Don't expect any major shakeup over the next 3 years though. Even if some customers decided to drop ARM right now, it would still take 3+ years before they come up with a strong RISC-V alternative. First, Android would have to give RISC-V its blessing (support it), too, but Google is a member of the RISC-V foundation, so I assume that's not an impossible task.

13
ZenoArrow 1 day ago 2 replies      
Maybe it's just me, but I see this as a foolish move. ARM was better suited to being an independent company due to their licensing model. With the purchase by SoftBank, there's now greater incentive for licensees to move away from ARM. I suspect we'll see much more investment in RISC-V as a result of this, which would be a good thing for most of us, but a less good thing for ARM.
14
anonbanker 2 days ago 0 replies      
Neat! Now my company no longer has to worry about FIVE EYES having control of the spec, or of the company that makes it. Probably the best news of the year for me.
15
Animats 1 day ago 1 reply      
First sentence: "SoftBank is nearing a deal to acquire ARM Holdings, the British semiconductor company, said two people briefed on the matter who asked not to be named discussing private information."

Editing standards at the Grey Lady have declined.

16
analog31 2 days ago 2 replies      
Maybe I'm not thinking straight because I'm up to my ears in ARM chips (my cell phone, tablet, half a dozen homemade gizmos, etc.). And I know they don't actually make the chips themselves, but GBP 2.2e+09 doesn't seem like a lot for a technology that has so much influence.

Correction: 2.2e10, as noted. Now I know I'm not thinking straight.

17
jjawssd 1 day ago 0 replies      
SoftBank WiFi access points are the most user-unfriendly on the planet. You have to call them to be able to register to get online. And guess what? If you are not a Japanese resident you can't get a Japanese phone number. And even before registering, the WiFi craps out 80% of the time within 60 seconds of connecting, no matter where you are in Tokyo. I am extremely disappointed. SoftBank will shake you down for all you are worth.
8
Give me 15 minutes and I'll change your view of GDB [video] undo.io
418 points by DebugN  4 days ago   169 comments top 30
1
derefnull 4 days ago 1 reply      
A summary of the 15 minute video:

1. introduction to 'tui' mode in gdb (https://sourceware.org/gdb/onlinedocs/gdb/TUI.html)

2. intro to python support in gdb (if it was compiled in)

 2.1 python print('hello world')
 2.2 set breakpoints with the python gdb interface
3. Demo of reverse debugging in gdb, which helps isolate a stack-corrupting bug (https://www.gnu.org/software/gdb/news/reversible.html)
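
For anyone who hasn't tried point 2, a minimal sketch of what the Python interface looks like (assuming gdb was built with Python support and a program with a main symbol is already loaded; these are generic commands, not anything specific to the talk):

  # breaks.py -- load inside gdb with:  (gdb) source breaks.py
  # (or type the lines at the gdb prompt, each prefixed with "python")
  import gdb

  gdb.Breakpoint("main")                    # set a breakpoint by symbol name
  gdb.execute("info breakpoints")           # any CLI command can be driven from Python
  print(gdb.parse_and_eval("sizeof(int)"))  # evaluate an expression in the debuggee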

2
gizmo 4 days ago 15 replies      
A broken ncurses text GUI debugger with broken navigation and byzantine commands should somehow make us look more favorably upon gdb? We had better debuggers than this in the early 80s. Turbo Pascal had a decent debugger and IDE built right in!

How come the open source world hasn't been able to create a debugger that's as good as the (still primitive) debugger in Visual Studio 6.0? It's astonishing to me how bad the state of the art is.

3
lucio 4 days ago 5 replies      
IMHO, in this day and age, a proper debugger should by default show the source code and execution point, should allow you to inspect variables and change their values, should let you manually set the execution position, and should let you alter code and re-compile on the fly, continuing the debugging session.

I know this is really-really hard to accomplish, but some IDEs do it.

The only thing that's "optional" and a little over the top (but very useful) is time-travel debugging.

4
catern 4 days ago 2 replies      
I prefer Emacs GUD over the built-in GDB TUI. I actually don't understand why anyone would use the built-in TUI; you have to learn new things anyway, why not learn the Emacs interface which not only looks way better but is more reliable and has more features? The keyboard commands for the GDB TUI are even (mostly) cloned from Emacs.

But as that person in the audience says, it looks like the presenter is just not aware that the GDB TUI is a substandard version of Emacs GUD, having mostly the same keybindings.

The second half is a good combined demo of some neat GDB features though.

5
EliRivers 4 days ago 2 replies      
Is this entire set of comments suffering from people conflating the debugger with an interface to the debugger? In much the same way that sometimes people conflate their compiler with their IDE.

GDB is a fine debugger. I've never had any trouble with it at all, and I don't recall ever needing a feature in it that it didn't support but other debuggers did, although I'm not a very demanding user of the debugger; I rarely need more than some conditional breakpoints and the ability to see the value of any given variable or memory location.

6
vvanders 4 days ago 2 replies      
I get that he's being clever at the beginning by poking fun at Windows but many of the shortcuts he talks about are available in Visual Studio and it has a much saner set of defaults and initial information. Ask anyone who did XBox and PS3 development which environment they preferred :).

The reverse debugging stuff is pretty cool though.

7
pag 4 days ago 0 replies      
I use UndoDB a lot. It's been incredibly helpful for debugging dynamic binary translators, which in my case are x86-to-x86 JIT compilers.

I also use it for debugging unit tests for a static binary translator. When I'm unit testing, I will set a breakpoint at the reporting of a failure, look at how the native program and emulated program states diverged, then I'll use reverse execution and data/code breakpoints to go back and forth to try to figure out what went wrong. This is much better than having to restart each time.

Over the years I've found many bugs in UndoDB and they've been pretty quick to fix them. Their support is great.

I started using it as a student, on a non-commercial (i.e. free) license, and it was an essential purchase for my current work.

8
jey 4 days ago 1 reply      
There's an awesome free replay debugger GDB extension available for Linux that most people don't seem to know about: http://rr-project.org
9
RJIb8RBYxzAMX9u 4 days ago 0 replies      
Developers don't use debuggers because neither the benefits of using one nor the harm of not using one are immediately obvious. And when you first start learning how to program, with the cognitive load of picking up a new language, its tooling, and its quirks, you simply don't have the energy to understand yet another piece of complex software. Worse, you may not even know there's such a thing as a debugger! So we all do "printf debugging" because printing to the screen is the first thing you learn, and it's obvious how you can use it to inspect program state.

However, this is a novice trap, as more experienced developers would know that printf debugging may not work for subtle cases, and may even steer you in the completely wrong direction. But by the time said novices encounter such bugs, they've already picked up the "bad habit", and it's difficult to go back and unlearn it.

Some also feels that needing a specialized tool to find bugs is a weakness, as you ought to be able to reason about the code without aid. But that is folly, for if so, why use editors with syntax coloring (shouldn't you know int is a keyword w/o being pointed out) or auto-completion (shouldn't you memorize all exported functions)? In fact it's the opposite: a good debugger greatly aids in comprehension, since you get to catch corner cases happen in action, and with sufficient context for you to figure out how it'd happened.

There was a time when many developers also didn't use a (D)VCS of some sort, for similar reasons. But of course git has a celebrity champion, and so the rest followed. Almost all the excuses people in this thread have raised about not wanting to learn how to use a debugger properly apply to git (and to all (D)VCSes to varying extents), but you'd be ridiculed for not using it. IMO debuggers need a similar champion before our opinions change, but alas, IIRC Linus himself is not too fond of debuggers...

That said, the state of OSS debuggers is terrible, so they're not helping their own cause. Moreover, they are often "buggy" themselves. I put that in quotes because sometimes a "bug" is actually correct but unintuitive behavior for inexperienced users (pop quiz: how do breakpoints work?), or just poor / nonexistent documentation. Nonetheless, the result is the same, and nothing loses your trust in a tool faster than finding bugs in it while trying to fix your own bugs.

10
DebugN 18 hours ago 0 replies      
Just to let you know that the same guy is running a series of 15 minute webinars this week on how to become a GDB pro. First one is today (4pm London). http://undo.io/become-gdb-power-user/
11
hitr 4 days ago 0 replies      
I don't compare gdb to Visual Studio but to windbg. It would have been great if gdb had support for debugger extensions like windbg (or any of the Windows debugging tools like cdb or ntsd). I can easily debug any .NET program in windbg, and the level of detail that sos, psscor, and many other extensions show you for .NET debugging is amazing. There are windbg extensions where I can write SQL-like queries on .NET memory objects, or write Python scripts or JavaScript to control the debugger and write automated analysis very easily. I have seen the same for PHP and Node.js processes with windbg and it was not very difficult. On Linux the story is different and there is no easy replacement. When Microsoft ported .NET to Linux with .NET Core, they chose lldb instead of gdb because gdb does not provide extension support. I also heard that lldb is not as good or stable as gdb.
12
danilocesar 4 days ago 0 replies      
I think this is a good prelude for a good talk. I'm wondering if the guy is going to release the full talk he gave later...

However, mentioning advanced-gdb usage nowadays and not mentioning RR (http://rr-project.org/) is a sin =)

13
dahart 4 days ago 0 replies      
TUI mode is cool, I didn't know about that!

Missing from this video, but the most powerful part of using gdb for me recently has been the ability to do procedural debugging. You can easily write debug scripts that are re-runnable instead of reproducing bugs manually. You can sequence together multiple conditional breakpoints, keep debug variables & state, and do other stuff that is really tricky in more common GUI-based debuggers. I haven't used Visual Studio in the last few years, so maybe they've added it. I had tried for years and never found it, and resorted to gdb for the hard problems when I could.
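
To make that concrete, here is a small sketch of the kind of re-runnable script being described, assuming a gdb built with Python support; the symbol names (process_request, request->id) are made up for illustration:

  # debug_session.py -- re-run the whole session with:  gdb -q -x debug_session.py ./myprogram
  import gdb

  class LoggingBreakpoint(gdb.Breakpoint):
      """Log state on every hit, but only pause on the interesting case."""
      def stop(self):
          req_id = int(gdb.parse_and_eval("request->id"))   # hypothetical local variable
          print("process_request hit, id = {}".format(req_id))
          return req_id == 42        # returning False tells gdb to keep running

  LoggingBreakpoint("process_request")   # hypothetical function in the program under test
  gdb.execute("run")                     # reproduce the bug the same way every time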

14
DebugN 1 day ago 0 replies      
The same guy is giving a series of 15 minute talks this week via webcast (first one is tomorrow at 4 pm London time)

http://undo.io/become-gdb-power-user/

15
ngneer 4 days ago 0 replies      
What I never liked about gdb was that I could not really figure out a way for it to show me what has changed when stepping over an instruction or a call, a feature that I often find very useful in other debuggers (especially in x64dbg and edb if you do not have the source).

The debuggers I have encountered seem a bit limited when regarding the debugger as a means of querying a subset of program executions (out of all possible executions), but they are usually helpful in picturing system dynamics. It has always been very helpful for me to allocate a portion of fixed screen real estate to memory, variables and registers that I am watching and see the changes as I step through, allowing me to observe the dynamics of the system.

Too bad I could not figure out a way for gdb to help with that, the closest I got was examining the memory I was interested in ('x' variants) before and after, but no highlighting. To be fair, it is good at other things though :)...

16
corysama 4 days ago 0 replies      
17
dicroce 4 days ago 0 replies      
My favorite gdb feature? Write watches. Set a write watch on a variable (even by address) and get a breakpoint when the variable is written to. Read watches work the same way but break when a thread reads the variable.
18
chuckcode 4 days ago 0 replies      
Reverse debugging is a nice feature but just skip to 9min in to watch the demo of reverse debugging. Surprised to see so much of the video talking about the UI of gdb which certainly didn't change my opinion of it. I've found emacs+gdb to be very useful over the years especially when debugging on remote servers but this demo just reinforced my opinion of gdb being a very powerful program that is pretty difficult to master. Also reinforced my opinion that you should open your talk with at least some of your best content rather than saving it for the last few minutes.
19
abc_lisper 3 days ago 0 replies      
GDB tui is old. I remember using it 11 years ago, debugging a VM for Itanium. Also, if you know emacs, gdb emacs integration will blow your mind.
20
the_mitsuhiko 4 days ago 2 replies      
I'm glad LLDB exists now. It brought competition into the abysmal Open Source debugger space. It's just a pity that there are no good frontends for LLDB so far.
21
kelvin0 4 days ago 1 reply      
Looks like VS Code has a plugin that integrates with GDB.

https://blogs.msdn.microsoft.com/vcblog/2015/11/18/announcin...

Maybe polishing that plugin could go a long way in a better dev experience than the current GDB UI?

22
mathgenius 3 days ago 1 reply      
Who writes bugs that need a reversible debugger to find? Or an embedded Python interpreter in your debugger? I've done some hairy things in gdb to find bugs (actually valgrind is a big help with this as well), but these other features, yikes! If I ever need them it's gonna be a dark day for sure.
23
nightcracker 3 days ago 0 replies      
I think GDB is fine architecturally for debugging.

One day maybe we'll see a good non-buggy GUI frontend for GDB that shows code, local variables, assembly, etc in separate windows, all clickable to change values, set breakpoints, change interpretations, etc.

24
protomok 3 days ago 0 replies      
Wow, I didn't know gdb had built-in reverse debugging or a Python interpreter. The reverse debugger looks awesome!

I'm curious what people use the Python interpreter for?

25
jhallenworld 3 days ago 0 replies      
Interesting! I used to use in circuit emulators with trace buffers. You could use them to discover that an interrupt handler messed something up by looking back in time.
26
dimdimdim 4 days ago 0 replies      
Here is a free video course on GDB - covers most of things discussed in this video and more:

http://www.pentesteracademy.com/course?id=4

27
rosstex 4 days ago 0 replies      
"Sorry to the Windows users out there"

Not anymore! My, how times have changed.

28
gshx 4 days ago 0 replies      
Reminded me of all the hours spent debugging with dbx. Turns out, even after all the years, they are still quite similar.
29
akp__ 4 days ago 0 replies      
b _exit.c: 32

This guy is my hero.

30
johanneskanybal 4 days ago 0 replies      
Nice troll brain
9
How I Could Steal Money from Instagram, Google and Microsoft arneswinnen.net
379 points by adamnemecek  3 days ago   65 comments top 13
1
jimrandomh 2 days ago 4 replies      
Do premium phone numbers have any remaining legitimate use? The last time I remember hearing about them was in the context of a callback scam, where scammers would call and hang up after one ring so that people would call the caller ID numbers. Actual pay-per-minute phone services seem like a rare special case that would be better served by having people type credit card numbers into the line with their dialpad.
2
joshavant 2 days ago 0 replies      
In the article, the author mentions contacting Google + they instruct him to attempt pen testing on please.break.in@gmail.com. Didn't know they maintained an account like that. Interesting.
3
countryqt30 2 days ago 8 replies      
Am I the only one who thinks that the rewards of 2000, 0 and 500 are incredibly low?

The motivation to keep the bug for yourself is much higher: you could easily make several tens of thousands before anyone would ever notice.

4
thatusertwo 2 days ago 4 replies      
A friend runs some SIP networks; he said sometimes when hackers get access to a line they make calls to premium numbers in North Korea and other places. They can run up a $5000 bill pretty quickly.
5
carl_corder 2 days ago 3 replies      
I understand how these calls would cost Instagram/Google/Microsoft money. But could someone please explain why a call to a premium number "earns" the account holder money?
6
tormeh 2 days ago 1 reply      
If you haven't already, call your phone provider and tell them to disable premium calls/texts/services. They're obsolete and quite a number of them are pure scams.
7
mgalka 2 days ago 1 reply      
Cool idea. My sense is that they would catch on before the amounts reached anything substantial, but who knows. Either way, fun as a thought experiment.
8
snowy 2 days ago 1 reply      
Anyone else think that the bug bounty rewards were quite low?
9
raresp 1 day ago 0 replies      
You certainly deserve a bug bounty bonus from all these companies.
10
zappo2938 1 day ago 0 replies      
Reminds me of when I was a kid and used the next phone over to accept a third-party call. Or when answering machines in the early 90s only had two-digit passcodes: I would change the outgoing message to "yes yes yes yes yes ..." so that when the automated machine checked whether the phone would accept charges, it would.
11
mk89 1 day ago 0 replies      
Super cool!

When input validation doesn't stop at ";&' and similar :)

12
paulpauper 2 days ago 1 reply      
Amazing... so any service that sends automated calls to premium numbers can be exploited in this manner.
13
interdrift 2 days ago 0 replies      
How could they miss that?!
10
Graal and Truffle could accelerate programming language design medium.com
373 points by cosbas  11 hours ago   159 comments top 26
1
nostrademons 10 hours ago 7 replies      
There's actually a fairly long history of cross-language VMs, with various degrees of success. What usually happens is that they work fine for languages that look, semantically, basically like the native language on the VM. So LLVM works well as long as your language is mostly like C (C, C++, Objective-C, Rust, Swift). Parrot works if you language is mostly like Perl 6 (Perl, Python, PHP, Ruby). .NET works if your language is mostly like C# (C#, F#, Boo, IronPython). The JVM works if your language is mostly like Java (Java, Clojure, Scala, Kotlin). Even PyPy has frontends for Ruby, PHP, Smalltalk, and Prolog.

"Mostly" in this case means semantically, not syntactically. It's things like concurrency model; floating point behavior; memory layout; FFI; semantics of basic libraries; performance characteristics; and level of dynamism. You can write a Python frontend for both the JVM and CLR, and it will look like Python but act like Java and .NET, respectively, with no guarantee that native CPython libraries will work on it.

The problem is that this is where basically all the interesting language-design research is. I wouldn't use Rust for the syntax; I'd use it because I want properties like complete memory safety, manual control over memory allocation, easy (and fast!) access to C libraries, and short startup time. These are all things that Truffle explicitly does not deliver.

It's a great tool if you work in the JVM ecosystem and want to play around as a language designer. But most of the interesting languages lately have created their own ecosystems, and they succeed by solving problems so fundamental that people will put up with having to learn a new ecosystem to gain the benefits they offer.

2
wahern 9 hours ago 2 replies      
Judging by LuaTruffle, the Lua example, then this framework is useless.

LuaTruffle appears to implement the syntax of Lua but none of the semantics. For example, no metatables and no coroutines. Lua is unparalleled in its support of coroutines, and metatables are what make optimizing Lua _difficult_. Without implementing either of those things you've demonstrated neither the ability of the framework to implement novel control-flow semantics (asymmetric coroutines that can yield across multiple, unadorned function invocations) nor its performance capabilities (JIT-optimizing Lua tables is non-trivial).

I'm not even sure LuaTruffle implements Lua's lexical closures or tail call optimization, both critical to Lua's ability to support functional programming patterns.

Of course, it definitely doesn't implement the Lua C API, which is part-and-parcel of what makes Lua preferable as an extension and glue language. But I was willing to overlook that if it could easily implement the former.

The beauty of a good DSL isn't the syntax (I know, hard to believe!) but the novel approaches to code-flow execution and other deep semantics. Golang's goroutines are beautiful. Rust's ownership analyzer and prover are what _define_ the language. Haskell's lazy evaluation opens up whole new dimensions for problem solving (and headaches). If your language framework doesn't at least preserve the ability to implement those things cleanly and performantly, it's not adding much value and is basically a toy. It's not like writing a lexer for a DSL is a serious impediment. (Using Lua's LPeg, for example, you can write parsers and build an AST for most major languages in a few hundred lines of code.)

3
outworlder 10 hours ago 3 replies      
> Since the dawn of computing our industry has been engaged on a never ending quest to build the perfect language.

Except it really hasn't. The industry is going to use whatever language has the bare minimum feature set they value at the moment and no more.

Call me cynical but my point of view is: if the industry were really striving for perfection, there would be no COBOL or BASIC, for instance. Lisp had already been invented. Garbage collection was a thing already. We had macros. A REPL. A little later, OOP and multiple dispatch. The list goes on.

They had to ditch the (almost) perfect wheel for a cheaper square one. Fast forward a few years: the square wheel is now polished but just as square, and the insanely great wheel is now just as cheap, but we won't use it, as the axles expect the square wheel.

And then we invent Java and XML...

4
wcrichton 9 hours ago 3 replies      
Before anyone gets too excited, make sure you look at what a compiler using Truffle actually looks like, and remember that Java is far from an ideal language for writing a compiler. I'll use their SimpleLanguage (their primary tutorial lang) as an example.

Parser [1] shows how lack of sum types makes code messy. Lots of "factory.createStringLiteral" kind of calls. At least Java has a mature parser generator.

Implementing interpreter nodes [2] requires a pretty large amount of boilerplate, each individual node is relegated to its own file, some classes are littered with probably autogenerated getters/setters. While functional languages can be too terse, Java shows its exceptional verbosity here.

[1] https://github.com/graalvm/truffle/blob/master/truffle/com.o...

[2] https://github.com/graalvm/truffle/tree/master/truffle/com.o...

5
kodablah 10 hours ago 5 replies      
The Oracle flavors of the JVM/JDK probably scare too many away with regards to redistribution of their product. That, coupled with the fact that things like the AOT engine are closed source and the weight of the JVM for anything besides daemons (both mentioned towards the end of the article), probably keeps language designers away these days.

I would love a lightweight toolkit that made for easy language development. VMKit[0] is dead, MicroVM[1] is a nice idea but not full fledged. Many like myself looking to implement a toy language would love to be fast out of the gate and would rather not mess with the OS-specific constructs and specifics of LLVM IR. So we end up cross-compiling to existing languages or marrying ourselves to a certain runtime (e.g. the CLR). Any other lightweight VMs or AOT compilers that are cross-platform that language designers can use these days?

0 - http://vmkit.llvm.org/

1 - http://microvm.github.io/

6
qwertyuiop924 8 hours ago 2 replies      
Okay, very cute, but what if you want your language to have significant semantic differences from C and the like? What if you want to write a Scheme interpreter, for instance? Can I have Continuations? Can I have Tail Call Optimization? I very much doubt it.

If you want to write a language that is semantically like C for the most part, than go ahead and use Truffle, but that's not where interesting language design is happening. Show me that Truffle's Ruby implementation hasn't removed callcc (yes, Ruby has callcc), and maybe I'll reconsider.

7
dkarapetyan 10 hours ago 2 replies      
Doesn't RPython do all the same things? Also there are language workbenches (http://www.languageworkbenches.net/) that allow one to get a bunch of things like IDE support and even basic compilers by properly expressing the language syntax and semantics.

I agree there is a lot of interesting stuff going on here, but let's not forget the prior art. Even OMeta I think already did a lot of these things way back when. Going further back there's META II (https://en.wikipedia.org/wiki/META_II).

I think building self-specializing interpreters is the main trick. If that becomes easy then a whole bunch of other magic is also possible. But I'm just an amateur so all of this is still very impressive.

8
Animats 9 hours ago 1 reply      
So why is Oracle doing this?

Also, anything from Oracle which is partly open and partly closed is scary. They've done that with Java and MySQL, which drives people away from both.

9
dangerlibrary 11 hours ago 3 replies      
There are a lot of weasel words for the other languages, but the comment about ruby is extremely interesting:

"For example, the TruffleJS engine which implements JavaScript is competitive with V8 in benchmarks. The RubyTruffle engine is faster than all other Ruby implementations by far. The TruffleC engine is roughly competitive with GCC."

10
wellpast 11 hours ago 1 reply      
I think another key part of the dream is IDE support. So maybe the dream is complete with: Graal + Truffle + Nitra[1].

[1] Nitra - https://github.com/JetBrains/Nitra

11
azakai 10 hours ago 0 replies      
Graal + Truffle are very, very cool. The main downside is that it only runs on the JVM, which is a substantial limitation.

However, I assume one goal of the project is to make the JVM more competitive, which it certainly does.

12
TheMagicHorsey 10 hours ago 1 reply      
Interesting. But I feel like Racket is a better framework for quickly prototyping new languages.
13
rogerbinns 11 hours ago 1 reply      
Have there been any recent languages/environments that (have reasonably) succeeded but aren't completely open source? I feel a general resistance to building a dependency on something when you don't have complete access to it, but maybe that is just me.
14
kristianp 4 hours ago 0 replies      
When I first heard about this it was promoted as a faster ruby implementation. How far away is it from running rails?

Interesting that they've gone as far as running C extensions "internally", making it easier to run existing ruby apps without running up against the barrier of a native gem and having to change the code.

16
wellpast 11 hours ago 1 reply      
> Since the dawn of computing our industry has been engaged on a never ending quest to build the perfect language.

...unfortunately. I see PLs as mere material - sure, you can improve on them, but far more important is how we architect our systems (the PL-independent ways we create and organize our systems into interfaces and components). That is where I see the software practitioners of today flailing - and no PL is going to save us there.

We need to have a better way to analyze systems on these architectural measures and a better way to train people to build better architectures, not more PLs.

17
cpeterso 4 hours ago 1 reply      
Here is a video of a Graal/Truffle talk given by a researcher at Oracle (presented at Mozilla in 2013):

https://air.mozilla.org/one-vm-to-rule-them-all/

18
lisper 10 hours ago 3 replies      
> A way to create a new language in just a few weeks

As a Lisp programmer, I regularly create new languages in a matter of hours.

19
zzzcpan 9 hours ago 0 replies      
So, judging by the slides for that "one vm to rule them all" talk [1], it's Java all the way. One has to generate the AST in the form of Java code and deal with the Java ecosystem. And it all feels very, very complicated for all the wrong reasons.

I guess some people from the Java world could find something useful there, but it's very unlikely to attract anyone else.

[1] https://lafo.ssw.uni-linz.ac.at/pub/papers/2016_PLDI_Truffle...

20
ricardobeat 10 hours ago 1 reply      
> Interpreted dynamic languages like Python, JavaScript, PHP and Ruby look the way they do because building such a language is the path of least resistance when you start from a simple parse tree.

This looks... backwards?

21
carapace 6 hours ago 0 replies      
Well now, I have got to find time to implement Joy (lang by Manfred von Thun) in this and see if it works. (I can't tell just by thinking about it, at least not so far. It will either work and be great, or not-work and be really really interesting why not.)
22
polskibus 8 hours ago 0 replies      
This article, while very interesting, completely omitted the .NET CLR. Does anyone know how the CLR compares to the JVM, especially the new CoreCLR? How mature is it in terms of using best-of-breed JIT optimizations?
23
ktRolster 10 hours ago 0 replies      
fwiw IO language is a pretty good prototyping language too.....good for experimenting with different syntaxes, etc
24
desireco42 6 hours ago 1 reply      
One word: Oracle.
25
ilostmykeys 9 hours ago 1 reply      
Funny, I posted this yesterday and got no up votes. Can you post a duplicate of some post and get up voted? I didn't realize that was possible.
26
speeder 10 hours ago 3 replies      
Do a little googling and you won't get a downvote: https://news.ycombinator.com/item?id=7494732

dang's profile https://news.ycombinator.com/user?id=dang with contact information.

11
Destroy All Ifs A Perspective from Functional Programming degoes.net
356 points by buffyoda  3 days ago   216 comments top 42
1
barrkel 3 days ago 8 replies      
Define true as a lambda taking two lazy values that returns the first, and false as one that returns the second, and you can turn all booleans into lambdas with no increase in code clarity.
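
To make that concrete, here is a minimal Haskell sketch of the Church encoding being described (my own illustration, not from the comment):

  -- A Church-encoded "boolean" is just a function that picks one of
  -- two (lazily evaluated) alternatives.
  true, false :: a -> a -> a
  true  t _ = t
  false _ f = f

  -- "if-then-else" collapses into plain function application.
  select :: (a -> a -> a) -> a -> a -> a
  select b t f = b t f

  main :: IO ()
  main = putStrLn (select true "then branch" "else branch")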

The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point, and if you want to change the mechanism of comparison (perhaps introduce locale sensitive comparison), you need to change a lot more code.

That's one of the downsides of over-abstraction and over-generalization: instead of a tool, a library gives you a box of kit components and you have to assemble the tool yourself. Sure, it might be more flexible, but sometimes you want just the tool, without needing to understand how it's put together. And a good tool for a single purpose is usually surprisingly better than a multi-tool gizmo. If you have a lot of need for different tools that have similar substructure, then compromises make more sense.

This is just another case of the tradeoff between abstraction and concreteness, and as usual, context, taste and the experience of the maintainers (i.e. go with what other people are most likely to be familiar with) matters more than any absolute dictum.

2
ufo 3 days ago 4 replies      
I'm surprised that neither the article nor any of the comments so far mentioned the "Expression Problem": http://c2.com/cgi/wiki?ExpressionProblem

Basically, if you structure the control flow in object-oriented style (or Church encoding...) then it's easy to extend your program with new "classes", but if you want to add a new method then you must go back and rewrite all your classes. On the other hand, if you use if-statements (or switch, or pattern matching...) then it's hard to add new "classes" but it's very easy to add new "methods".
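
A rough Haskell sketch of that trade-off, using a made-up Shape type (my example, not the parent's):

  -- With an algebraic data type, adding a new function ("method") is easy...
  data Shape = Circle Double | Square Double

  area :: Shape -> Double
  area (Circle r) = pi * r * r
  area (Square s) = s * s

  perimeter :: Shape -> Double   -- new "method": no existing code touched
  perimeter (Circle r) = 2 * pi * r
  perimeter (Square s) = 4 * s

  -- ...but adding a new constructor (say, Triangle) forces you to revisit
  -- every function that pattern-matches on Shape.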

I'm a bit disappointed that this isn't totally common knowledge by now. I think it's because until recently pattern matching and algebraic data types (a more robust alternative to switch statements) were a niche functional programming feature, and because "expression problem" is not a very catchy name.

3
kazinator 3 days ago 2 replies      
Problem is, a decision has to be made somewhere about which function to pass into that "if-free" block of code. The if-like decision has just moved elsewhere. That is a win if it reduces duplication: if a lambda can be decided upon and then used in several places, that's better than making the same Boolean decision in those several places.

Programs that are full of function indirection aren't necessarily easier to understand than ones which are full of boolean conditions and if.

The call graph is harder to trace. What does this call? Oh, it calls something passed in as an argument. Now you have to know what calls here if you want to know what is called from here.

A few days ago, there was this HN submission: https://news.ycombinator.com/item?id=12092107
"The Power of Ten Rules for Developing Safety Critical Code"

One of the rules is: no function pointers. Rationale: Function pointers, similarly, can seriously restrict the types of checks that can be performed by static analyzers and should only be used if there is a strong justification for their use, and ideally alternate means are provided to assist tool-based checkers determine flow of control and function call hierarchies. For instance, if function pointers are used, it can become impossible for a tool to prove absence of recursion, so alternate guarantees would have to be provided to make up for this loss in analytical capabilities.

4
qwertyuiop924 3 days ago 1 reply      
This is, as many commenters have noted, just another overzealous programming doctrine. Just like 'GOTO considered harmful.'

Here's the deal: if is a flow control primitive. Just like goto and while. If (heh) that primitive isn't high-level enough to handle the problem you are facing, it is incumbent upon you as a programmer to use another, higher-level construct. That construct may be pattern matching, it may be polymorphism (or any other form of type-based dynamic dispatch). It may be a function that wraps a complex chain of repeated logic, and is handed lambdas to execute based upon the result. It may, as in the article given here, be a function that is handed lambdas which apply or do not apply the transformation described.

The point is, there are many branch constructs, or features that can be used as branch constructs, in most modern programming languages. Use the one that fits your situation. And if that situation isn't all that complex, that construct may be if.

FizzBuzz using guards is the cleanest and most modifiable FizzBuzz that I've seen in Haskell.
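
For reference, a guard-based FizzBuzz along those lines might look like this (one possible sketch, not quoted from anywhere):

  -- Guards pick the first matching clause, top to bottom.
  fizzbuzz :: Int -> String
  fizzbuzz n
    | n `mod` 15 == 0 = "FizzBuzz"
    | n `mod` 3  == 0 = "Fizz"
    | n `mod` 5  == 0 = "Buzz"
    | otherwise       = show n

  main :: IO ()
  main = mapM_ (putStrLn . fizzbuzz) [1..100]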

Although now that I think about it, if you provide a function with a list of numbers...

5
white-flame 3 days ago 2 replies      
This whole campaign is misguided.

"Bad IFs" are a code smell, and they're being scapegoated when the real problems are management demanding that simple hackish prototypes & tests be deployed into production, management that doesn't allow time for refactoring, and poor programmers who think that "bad IFs" are good code.

But the main site also doesn't do any reasonable job of defining what a "Bad IF" even is.

The crux of the matter is that programmers need time to craft the details of a project to avoid or correct technical debt. These sorts of reactions just point out one tiny portion of technical debt and don't solve any fundamental problems at all.

(and yeah, I know I'm ranting against the Anti-IF campaign, not the particular take on the linked site. But this article just seems to parameterize the exact same parameters that are branched on anyway.)

6
Animats 3 days ago 8 replies      
The idea that each type has its own control flow primitives is bothersome. It's taken over Rust:

  argv.nth(1)
      .ok_or("Please give at least one argument".to_owned())
      .and_then(|arg| arg.parse::<i32>().map_err(|err| err.to_string()))
      .map(|n| 2 * n)
I'm waiting for

 date.if_weekday(|arg| ...)
Reading this kind of thing is hard. All those subexpressions are nameless, and usually comment-less. This isn't pure functional programming, either; those expressions can have side effects.

7
throwaway13337 3 days ago 2 replies      
This just seems to obscure the logic. Not unlike how polymorphism can make code flow harder to read, while feeling more clever.

There is a place for it - like when you're trying to express a set of logic that will be guarded by the same condition, but always at the cost of some complexity.

A set of conditionals is probably the most obvious way to express branching.

8
dwrensha 3 days ago 2 replies      
I recommend Bob Harper's essay on "boolean blindness": https://existentialtype.wordpress.com/2011/03/15/boolean-bli...

An excerpt:

> The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit.
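
A tiny Haskell illustration of the kind of thing Harper is pointing at (my example, not from the essay): reducing a lookup to a Bool and then reaching back for the value, versus a pattern match that keeps the evidence attached.

  import Data.Maybe (fromJust, isJust)

  -- "Boolean blind": we compute a bit, then recover the value with a
  -- partial function, trusting ourselves to remember why the bit is true.
  describeBlind :: Maybe Int -> String
  describeBlind m =
    if isJust m then "got " ++ show (fromJust m) else "nothing"

  -- Pattern matching keeps the value together with the evidence.
  describe :: Maybe Int -> String
  describe (Just n) = "got " ++ show n
  describe Nothing  = "nothing"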

9
lilbobbytables 3 days ago 4 replies      
Often times when I read about "ideal" ways of programming, I'm curious if it's ever implemented in a production code base built by a team.
10
MrManatee 1 day ago 0 replies      
If I understood correctly, the article suggests that as a general principle you should replace your union types and case-by-case code with lambdas. I feel almost the opposite.

Article: "In functional programming, the use of lambdas allows us to propagate not merely a serialized version of our intentions, but our actual intentions!"

Counterpoint: The use of structured objects instead of black box lambdas allows us to do more than just evaluate them. For example, Redux gets a lot of power by separating JSON-like action objects from the reducer that carries out the action.

But let's take instead the article's example of case-insensitive string matching. One tricky case is that normalization can change the length of the string: we might want the German "ß" to match "SS". Sure, the lambda approach can handle that. But now suppose that we want a new function that gives the location of the first match. It should support the same case-sensitivity options (because why not?). But now there is no way to get the pre-normalization location, because we encoded our normalization as a black box function. Case-by-case code would have handled this easily.

11
dahart 3 days ago 1 reply      
I read everything I could find on the Anti-IF site and didn't understand what the mission is exactly. They qualify and mention they want to remove the bad and dangerous IFs, but I couldn't find examples that differentiate between bad ones and good ones -- are there good ones according to this campaign?

I like using functional as much as anyone, and removing branching often does make the code clearer and remove the potential for mistakes.

But I admit I have a hard time with suggesting people prefer a lambda to an IF, or never use an IF at all. A lambda is, both complexity-wise and performance-wise, much heavier than an IF. And isn't it just as bad to abstract conditionals before any abstractions are actually called for?

12
externalreality 2 days ago 1 reply      
I tried to ask the author the following (it kept getting deleted as spam). Perhaps he will see it here, but it's unlikely given how many comments there are already.

Hi John,

Are you familiar with Jackson Structured Programming?

https://en.wikipedia.org/wiki/Jackson_structured_programming

Notice how the focus is on using control flow that is derived from the structure of the data being processed and the data being produced. Notice how the JSP-derived solution in the Wikipedia example lacks if-statements.

Pattern matching allows one to map control flow to the structure of data. What are your thoughts on that? I think inversion of control has other benefits, but I don't think it has much to do with eliminating `if` conditionals; the pattern matching does that.

Also, I noticed one thing:

In the article you mention `doX :: State -> IO ()` as being called for its value and suggest that if you ignore the value the function call has no effect. Isn't it the case that a function of that type usually denotes that one is calling the function for its effect and not for any return value? Its value is just an unevaluated `IO ()`.

13
AYBABTME 3 days ago 1 reply      
The author seems to ignore the fact that passing lambdas like this merely changes where the IF or SWITCH statement is made. I can agree that passing functions instead of booleans is better and more general. But pretending that IF/SWITCH are thus avoided is delusional.

For instance, at some point there will be a decision made whether the string matching must be case sensitive or not. If the program can do both at runtime, the IF will be, perhaps, in the main (or equiv.).

14
astazangasta 3 days ago 1 reply      
Why don't we just treat this like writing?

Good writing has one clear imperative: communicate meaningfully the intent of the author to the reader. Good code is no different; it is merely expressive writing in a different language, with, perhaps, greater constraint on its intent.

Some people make up rules like "don't use adverbs", or "don't split infinitives", in an effort to write better. But this doesn't necessarily produce good writing; sometimes an adverb is just what you need.

The same is true of code. These are useful things to think about, but "destroy all ifs" is akin to "never use a conjunction".

15
galaxyLogic 3 days ago 0 replies      
Isn't this exactly the Smalltalk way? In ST, what looks like if-statements actually are messages passed to instances of Boolean, with lambdas (in Smalltalk: BlockClosures) as arguments. The boolean then makes the decision whether it will evaluate the lambda or not.
16
jwatte 3 days ago 0 replies      
The first problem is that the "match" function is considered in the first place. It's too general. It should only be used in higher order constructs where its flexibility is actually needed.

Second: The enum-based refactor is actually valuable and fine IMO. If you need string functions, stop there.

Now, shipping control flow as a library is a cool feature of Haskell. But, if those arguments are turned into functions, the match function itself isn't needed! It just applies the first argument to arguments 3 and 4, then passes them to the second argument.

  match :: (a -> b) -> (b -> b -> Bool) -> a -> a -> Bool
  match case sub needle haystack = sub (case needle) (case haystack)

Does that even need to be a function? Perhaps. But if so, it's typed in a and b and functions thereof, and no longer a "string" function at all. And, honestly, why are you writing that function?

Typing it out where you need it is typically less mental impact, because I don't need to worry about the implementation of a fifth symbol named "match."

sub (case needle) (case haystack)

17
vittore 3 days ago 0 replies      
When I read things like "anti-if" I recall this brilliant illustration that I saw several years ago - http://blog.crisp.se/henrikkniberg/images/ToolWrongVsWrongTo...
18
skybrian 2 days ago 0 replies      
General principle: for every possible refactoring, the opposite refactoring is sometimes a good idea.

So, yes, replacing booleans with a callback is sometimes a good idea. But in other situations, replacing a callback with a simple booleans might also be a good idea.

Also, advice like this is often language-specific. In languages whose functions support named parameters, boolean flags are easy to use and easy to read. If you only have positional parameters, it's more error-prone, so you might want to pass arguments using enums or inside a struct instead.

19
dozzie 3 days ago 0 replies      
The inversion of control flow from the called to the calling function is an interesting way to describe (part of) the functional programming style. I hadn't thought of it this way, even though I have used it for quite some time.
20
nialv7 3 days ago 0 replies      
Someone found a hammer, and now everything looks like thumbs
21
js8 2 days ago 1 reply      
The idea that functional programming is a type of inversion of control reminds me of similar idea I had, when comparing OOP and FP.

In OOP, you encapsulate data into objects and then pass those around. The data themselves are invisible; they only have an interface of methods that you can apply to them. So methods receive data as a package on which they can call methods.

In FP, in contrast, the data are naked. But instead of sending them out to functions and getting them back, the reference frame is sort of changed; now the data stays at the function, but what is passed around is the type of processing (other functions) you want to do with them.

For example, when doing a sort: in OOP, we encapsulate the sortable things into objects that have a compare interface, and let the sort method act on those objects. So by the time the sort method is called, the data are prepared to be compared. In FP, the sort function takes a comparison function as an argument, together with the data of the proper type; thus you can also look at it as the generic sort function being passed back into the caller. In other words, in FP, the data types are the interfaces.
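
A small Haskell sketch of the FP side of that sort example (my illustration, using plain pairs as the "naked" data):

  import Data.List (sortBy)
  import Data.Ord  (comparing)

  -- The data stays exposed; the comparison to apply is handed to the
  -- generic sort function instead of being baked into the objects.
  people :: [(String, Int)]
  people = [("alice", 34), ("bob", 27), ("carol", 41)]

  sortedByAge :: [(String, Int)]
  sortedByAge = sortBy (comparing snd) people

  main :: IO ()
  main = print sortedByAge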

So it is somewhat dual, like a different reference frame in physics.

The FP approach reminds me of Unix pipes, which are very composable. It stands on the principle that the data are the interface surface (inputs and outputs from small programs are well defined, or rather easy to understand), and these naked data are operated on by different functions (Unix commands). (Also the duality is kind of similar to MapReduce idea, to pass around functions on data in the distributed system rather than data itself, which probably explains why MapReduce is so amenable to FP rather than OOP.)

It also seems to me that utilizing this "inversion of control" one could convert any OOP pattern into FP pattern - just instead of passing objects, pass the function (method which takes the object as an argument) in the opposite direction.

I am not 100% convinced that FP approach is superior to OOP, but there are two reasons why it could be:

1. The "nakedness" of the data in FP approach makes composition much easier. In OOP, data are deliberately hidden from plain sight, which destroys some opportunities.

2. In OOP, what often happens is that you have methods that do nothing other than pass the data around (encapsulating them differently). In the FP approach, this would become very easy to spot, because the function passed in the other direction would be the identity. So in FP, it's trivial to cut through those layers.

22
nn3 3 days ago 1 reply      
tl;dr: prefer callback hell to straightforward ifs, and somehow that's progress.
23
asQuirreL 3 days ago 1 reply      
The article seems to advocate type synonyms like the following:

  type Case = String -> String
  -- ...
  type Announcer = String -> IO String
I would argue that these are actually much worse than not having type synonyms at all.

(String -> String) functions could do anything to your query parameter and text, the type is too coarse, and the inhabitants too opaque for us to reason about them easily. Naming the type suggests the problem is solved without actually having solved it. It is like finding a hole in the ground, and covering it with leaves, so you don't have to look at it anymore. You are literally making a trap for the next person to come this way.

In an ideal world you would be able to use refinements to say that you want any (f :: String -> String) such that `toUpper . f = toUpper` but without such facilities, I think I may just settle for:

 newtype Case = CaseSensitive Bool
Sometimes, your type really does only have two inhabitants.

24
Eliezer 3 days ago 0 replies      
I thought the argument was going to be "Conditionals are bad for running on GPUs."
25
oliv__ 3 days ago 1 reply      

  It's no wonder that conditionals (and with them, booleans) are so widely despised!
They are?

26
yawaramin 3 days ago 0 replies      
view-source:http://antiifcampaign.com/

Find in page: 'if('

2 hits.

So, yeah.

27
true_religion 3 days ago 3 replies      
This is the starter code:

  publish :: Bool -> IO ()
  publish isDryRun =
    if isDryRun
      then do
        _ <- unsafePreparePackage dryRunOptions
        putStrLn "Dry run completed, no errors."
      else do
        pkg <- unsafePreparePackage defaultPublishOptions
        putStrLn (A.encode pkg)
This would be nicer if you could do multiple functions with pattern matching. In Elixir this would be:

  @spec publish(boolean) :: any
  def publish(true = _isDryRun) do
    _ = unsafePreparePackage dryRunOptions
    IO.puts "Dry run completed, no errors."
  end
  def publish(false = _isDryRun) do
    pkg = unsafePreparePackage defaultPublishOptions
    IO.puts (A.encode pkg)
  end
Pattern matching is pretty powerful, even going as far as to give a dynamic, non-statically typed language like Elixir the ability to 'destroy all ifs' too.

28
whazor 3 days ago 0 replies      
Reducing if statements does shrink the possible state space; however, using additional abstraction might increase it even further.
29
cdevs 3 days ago 0 replies      
Bad programmers will mess up any syntax restrictions/guidelines/styles we put on them. If you let them make any function where they can put launchNukes(); into doX(); then they will. Though running things as a service may be the future, this launchNukes(); function is over here... safe from you.
30
VladKovac 3 days ago 1 reply      
Functional programmers love to emphasize how all the aspects of programming that their pet language is uniquely good at dealing with also happen to be the biggest problems in code maintenance. Is there any actual data on what the biggest problem sources are?
31
mapleoin 1 day ago 0 replies      
This is the best bit I think:

> The problem is fundamentally a protocol problem: booleans (and other types) are often used to encode program semantics.

> Therefore, in a sense, a boolean is a serialization protocol for communicating intention from the caller site to the callee site.

32
svanderbleek 3 days ago 0 replies      
I think pattern matching is fine; I don't see how it is still "boolean". The additional techniques shown are interesting, but they are heavy abstractions that should not be prescribed in general.
33
iopq 3 days ago 0 replies      
34
dorfsmay 2 days ago 0 replies      
Since this is about FP, we have to have recursion:

https://www.reddit.com/r/functionalprogramming/comments/4t91...

35
smoothdeveloper 3 days ago 0 replies      
Paul Blasucci had a good talk on Active Patterns (an F# language feature):

https://github.com/pblasucci/DeepDive_ActivePatterns

This feature allows you to encapsulate conditional matching on arbitrary input, and dispatching.

For those who know ML, it is making the concept of pattern matching extensible to any construct.

36
sebastianconcpt 2 days ago 0 replies      
Less if is better, I agree on that. The lambda technique is interesting because the lambdas "encapsulate" a specific case. In OOP this is achieved by using polymorphism on the objects instantiated for the right case. Right?
37
rimantas 2 days ago 0 replies      
Sandi Metz talks about ifs a bit here: https://www.youtube.com/watch?v=OMPfEXIlTVE
38
based2 2 days ago 0 replies      
If 'if' could support a single 'expression' and multiple 'case's like 'switch/match', it would make the transition easier.
39
rosalinekarr 3 days ago 0 replies      
only a sith speaks in absolutes
40
basicplus2 3 days ago 0 replies      
sounds like what's really being said is..

It is recommended that programmers use abstractions whenever suitable in order to avoid duplication, and associated errors

41
dingleberry 3 days ago 2 replies      
i can't think of a use of 'if' in a math function; however, if is implicitly used in the input, say for 0<x<1, f(x)=x and for 1<x<3, f(x)=x^2

i see a lot of loops though; summation is a loop, so a double integral is a loop within a loop. i can't think of a code analogue for a derivative

fta, i take it that an if in a function body makes for ugly code.

42
sqldba 2 days ago 1 reply      
Ummm. Many common day to day languages don't use lambdas. Also I have no idea what they are. So - yeah I don't think you can just replace if so easily.
12
John Carmack on Inlined Code (2014) number-none.com
386 points by rinesh  18 hours ago   191 comments top 25
1
dzdt 15 hours ago 20 replies      
I have had the pleasure to work with lots of other people's code of varying styles and quality. Nothing is harder to read and understand than code which consists of deeply nested calls from one little helper (or wrapper) function to another. Nothing is easier to read and understand than code which just flows straight through from top to bottom of a big function.

There are other tradeoffs of code reuse and speed and worst-case speed and probability of introducing bugs. If you haven't read the article, do; it's worth it.

I love that Carmack tries to measure which styles introduce more bugs! Who else does that? Seriously, I would love to see more of that.

2
lj3 14 hours ago 3 replies      
Interestingly, Casey Muratori accidentally demonstrates during one of his Handmade Hero sessions that the compiler won't always be able to optimize certain bits of code that are put in a function as opposed to being inline.

In the video, he inlines a very simple function and his game gets twice as fast for no apparent reason. It's instructive to watch him dive into the generated assembly to figure out why.

https://www.youtube.com/watch?v=B2BFbs0DJzw

3
bshimmin 17 hours ago 4 replies      
Imagine how fantastic it must be working with someone like Carmack. Sure, the first few code reviews would be fairly traumatic - as you quickly realise just how much faster and generally better he is than you - but I think after a little while you could just relax and try to absorb as much as possible.

I love how everything in these emails is delivered as a calm series of reflections, chronicling with great honesty his own changing opinions over time - nothing is a diktat.

I also found it rather heartening that he makes the same copy/paste mistakes that the rest of us do - how many times have you duplicated a line and put "x" or "width" on both lines..? Seemingly Carmack can actually tell you how many times he's done that!

4
bluetomcat 17 hours ago 3 replies      
Another perspective in defense of long functions is that they enable you to spot common expressions/statements within the body, for example:

  void long_func(void)
  {
      ...
      if (player.alive && player.health == 100) {
          ....
      }
      ...
      if (some_other_condition && player.alive && player.health == 100) {
      }
  }
Conventional wisdom says that you should write a function `is_player_untouched` and substitute the composite expressions with function calls, but the code in question can be refactored in a much more straightforward way:

  void long_func(void)
  {
      ...
      const bool player_untouched = is_player_untouched();
      if (player_untouched) {
          ....
      }
      ...
      if (some_other_condition && player_untouched) {
      }
  }
Had the function body been split into more functions for "clarity", you would be doing duplicate calls to `is_player_untouched()` which go unnoticed because they would be buried deep in the call graph.

5
ctrlrsf 14 hours ago 2 replies      
Long functions might read easier, but you lose some testing precision. I think the recent focus on testing has led to shorter functions with as little responsibility as possible. When short functions fail a test you have a smaller surface area to look for the cause.
6
jon-wood 16 hours ago 3 replies      
The thing that struck me was Carmack's relentless pursuit of perfection. I can't think of many people who'd describe a single frame of input latency as a cold sweat moment!
7
apeace 14 hours ago 2 replies      
I think there's a big point that's being missed here. Carmack is conflating inlining code with writing functional code. These are different things.

I'd agree that if the majority of your code is mutating state, it makes sense to mash all that together in one place. You want to keep an eye on the dirty stuff.

But on the other hand, inlining pure functions that don't use or mutate any global state doesn't make sense to me. Why is making it "not possible to call the function from other places" a benefit?

How about calling that code from a unit test!

8
typedef_struct 14 hours ago 1 reply      
This is a pet peeve of mine. If you made a block of code into a separate function, I'm assuming it's called from multiple places. Or maybe it used to be. Or will be soon. But still.
9
nickm12 12 hours ago 3 replies      
I'm surprised at the enthusiasm for really long functions here. I find my experience is just the opposite: I find it much more difficult to read code written that way than when the different sections are broken up into smaller functions.

It is of course essential that the smaller functions be well-named and manage side-effects carefully. That is, they should either be pure functions, or the side effects should be "what the function does", so that readers of the main function don't generally need to read the function's code to understand its side effects.

10
Practicality 14 hours ago 0 replies      
I wonder how much of this change is because he can no longer keep track of so much state in his head (or just doesn't want to).

I only say this because I've gone through a similar transition of valuing my mental computation time in the last 20 years of coding :).

The efficiency of inlining is compelling when you code the whole thing at once, in one session. Once you decide to break the work up over multiple sessions, it's too much to keep in your head over multiple days (or weeks).

11
hiou 15 hours ago 1 reply      
I think this is a great example of something that is different for an exceptional developer as opposed to an average one.

A developer like Carmack and likely the teams he works with are able to keep a much larger system in their head at one time than an average developer.

And this is typically why they can write larger functions like that and get away with it.

A less talented developer will be much more likely to introduce bugs near the top of that function over time as they struggle to maintain the entire function in their head.

Sometimes choosing the correct tool has more to do with the craftsman than the craft.

12
panic 18 hours ago 0 replies      
Some good previous discussion here: https://news.ycombinator.com/item?id=8374345
13
qwertyuiop924 10 hours ago 1 reply      
Although I am a fan of the LISP school of program design (minimal global state; build small functions and macros, make sure they work, and then build more functions and macros on top of that, until you have an abstraction that you can build your app on), Carmack raises some interesting points: if you're handling a lot of global, mutable state, you may want to abstract minimally, so that you can see where that state is mutating, which makes bugs easier to spot.

Not a bad idea.

14
anotherhacker 14 hours ago 2 replies      
Don't we write code for other programmers first--then for the system?

It seems counter-intuitive but in the long run this mentality best serves the business.

15
p0nce 17 hours ago 0 replies      
About style A vs B vs C:

Robert C. Martin encourages style B because it reads top-down and replaces comments with names.

16
hellofunk 13 hours ago 1 reply      
One of his definitions of a pure function is one that only has by-value parameters and doesn't change state. Am I correct in thinking that in C++, the advent of C++11 lambdas allows you to be explicit about this and prevent the compiler from letting you accidentally use variables from outside the scope of the function's parameters, by writing lambdas (named, if necessary, like a normal function) with empty capture lists ("[]"), which would force you to work in a more pure style? In C++, what other method might help you enforce purity?
17
Animats 8 hours ago 0 replies      
Both games and low-level real time systems have one big loop executed at a fixed rate. That leads to the architecture Carmack describes.

It's not particularly helpful to a server that's fielding vast numbers of requests of various types.

18
schlipity 17 hours ago 6 replies      
>I now strongly encourage explicit loops for everything, and hope the compiler unrolls it properly.

I get why this is a thing. Sometimes an unrolled loop is faster. But if this is really an issue, why isn't there a [UnRoll] modifier or a preprocessor or something that handles that for you?

Something like this:

  for (int i = 0; i < x; i++) {
      dothing(x[i]);
  }
versus:

  unroll for (int i = 0; i < x; i++) {
      dothing(x[i]);
  }
Only the compiler / preprocessor would unroll the second one. You have the best of both worlds with a reduced chance of subtle errors.

19
logfromblammo 13 hours ago 0 replies      
If you occasionally inline all the functions and unroll all the loops, you can occasionally find optimizations that even the compiler won't be able to make.

For example, in quaternion-based rotation math, there exists a "sandwich product" where you take the (non-commutative) product of the transform and the input, followed by the product of that result and the conjugate of the transform.

It turns out that several of the embedded multiplication terms cancel out in that double operation, and if you avoid calculating the canceled terms in the first place, you can do a "sandwich product" in about 60% the total floating-point operations as two consecutive product operations.

In the application that used spatial transforms and rotations, the optimized quaternion functions were faster than the 4x4 matrix implementation, whereas the non-optimized quaternion functions were slightly slower. That change alone (adding an optimized sandwich product function) cut maybe 30 minutes off of our longest bulk data processing times.

You would never be able to figure that out from this.

 out = ( rotation * in ) * ( ~rotation );
You have to inline all the operations to find the terms that cancel (or collapse into a scalar multiplication).

20
saynsedit 9 hours ago 0 replies      
I think Carmack is conflating FP with good abstractions.

Haskell abstractions are often good because they flow from category theory and there are usually well established mathematical laws associated with them. I'm thinking of the "monad laws" and the "monoid laws."
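
For reference, the monoid laws he's alluding to can be written down directly as a small, self-contained Haskell sketch (my illustration):

  import Data.Monoid ((<>))

  -- The three monoid laws, expressed as a checkable predicate:
  --   mempty <> x == x                  (left identity)
  --   x <> mempty == x                  (right identity)
  --   (x <> y) <> z == x <> (y <> z)    (associativity)
  monoidLawsHold :: (Eq a, Monoid a) => a -> a -> a -> Bool
  monoidLawsHold x y z =
       (mempty <> x) == x
    && (x <> mempty) == x
    && ((x <> y) <> z) == (x <> (y <> z))

  main :: IO ()
  main = print (monoidLawsHold [1, 2] [3] [4 :: Int])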

Mathematicians tend to create abstractions if the abstraction satisfies coherent and provable properties. Programmers tend to be less rigorous about what and how they abstract.

There is nothing about C++ that prevents making good abstractions. It's just the culture of the language. Industry programmers are taught to not duplicate code and to keep functions short but they are not taught the fundamentals of what makes a good abstraction.

21
TickleSteve 15 hours ago 1 reply      
Correct me if I'm wrong (I only skimmed it) but this is less about not liking inlining than having deterministic/time-bounded performance.

These are two separate/orthogonal issues, I doubt he would turn his nose up at the processor doing less work iff it was also deterministic and had predictable worst-case timing.

22
andy_ppp 17 hours ago 1 reply      
Yes, once you understand functional programming you never want to go back to non-explicit state changes to all of your variables' contents, without your knowledge or explicit consent.
23
corysama 9 hours ago 0 replies      
Reposting my comment from the last time this was posted. There was a lot of nice discussion there: https://news.ycombinator.com/item?id=8374345

===

The older I get, the more my code (mostly C++ and Python) has been moving towards mostly-functional, mostly-immutable assignment (let assignments).

Lately, I've noticed a pattern emerging that I think John is referring to in the second part. The situation is that often a large function will be composed of many smaller, clearly separable steps that involve temporary, intermediate results. These are clear candidates to be broken out into smaller functions. But, a conflict arises from the fact that they would each only be invoked at exactly one location. So, moving the tiny bits of code away from their only invocation point has mixed results on the readability of the larger function. It becomes more readable because it is composed of only short, descriptive function names, but less readable because deeper understanding of the intermediate steps requires disjointly bouncing around the code looking for the internals of the smaller functions.

The compromise I have often found is to reformat the intermediate steps in the form of control blocks that resemble a function definitions. The pseudocode below is not a great example because, to keep it brief, the control flow is so simple that it could have been just a chain of method calls on anonymous return values.

  AwesomenessT largerFunction(Foo1 foo1, Foo2 foo2)
  {
      // state the purpose of step1
      ResultT1 result1; // inline ResultT1 step1(Foo1 foo)
      {
          Bar bar = barFromFoo1(foo);
          Baz baz = bar.makeBaz();
          result1 = baz.awesome(); // return baz.awesome();
      }
      // bar and baz no longer require consideration

      // state the purpose of step2
      ResultT2 result2; // inline ResultT2 step2(Foo2 foo)
      {
          Bar bar = barFromFoo2(foo); // 2nd bar's lifetime does not overlap with the 1st
          result2 = bar.awesome(); // return bar.awesome();
      }

      return result1.howAwesome(result2);
  }
If it's done strictly in the style that I've shown above then refactoring the blocks into separate functions should be a matter of "cut, paste, add function boilerplate". The only tricky part is reconstructing the function parameters. That's one of the reasons I like this style. The inline blocks often do get factored out later. So, setting them up to be easy to extract is a guilt-free way of putting off extracting them until it really is clearly necessary.

===

In the earlier discussion sjolsen did a good job of illustrating how to implement this using lambdas https://news.ycombinator.com/item?id=8375341 Improvements on his version would be to make everything const and the lambda inputs explicit.

  AwesomenessT largerFunction(Foo1 foo1, Foo2 foo2)
  {
      const ResultT1 result1 = [foo1] {
          const Bar bar = barFromFoo1(foo1);
          const Baz baz = bar.makeBaz();
          return baz.awesome();
      } ();

      const ResultT2 result2 = [foo2] {
          const Bar bar = barFromFoo2(foo2);
          return bar.awesome();
      } ();

      return result1.howAwesome(result2);
  }
It's my understanding that compilers are already surprisingly good at optimizing out local lambdas. I recall a demo from Herb Sutter where std::for_each(someLambda) was faster than a classic for(int i;i<100000;i++) loop with a trivial body, because the for_each internally unrolled the loop and the lambda body was therefore inlined as unrolled.

24
Kenji 11 hours ago 0 replies      
> and I was quite surprised at how often copy-paste-modify operations resulted in subtle bugs that weren't immediately obvious.

I noticed this quite some time ago. This is also a major source of bugs that I write. That is, until I decided to stop copy-pasting more than a word at all, and retype everything character by character when I need it again. Interestingly enough, this saves a lot of time because the bugs I would generate otherwise cost way more time than a bit of typing.

25
dustingetz 14 hours ago 1 reply      
50% of HN comments have misread this! The first few paragraphs mentioning FP were written in 2014 and are retracting the opinion of the long email about inlining, which is from 2007
13
Donkey A computer game included with early versions of PC DOS github.com
467 points by mondaine  1 day ago   171 comments top 28
1
screensquid 1 day ago 3 replies      
Playable here: http://www.pcjs.org/devices/pcx86/machine/5150/cga/64kb/donk...

"The above simulation is configured for a clock speed of 4.77Mhz, with 64Kb of RAM and a CGA display, using the original IBM PC Model 5150 ROM BIOS and CGA font ROM. This configuration also includes a predefined state, with PC-DOS 1.0 already booted and DONKEY.BAS ready to run.

And now that PCx86 automatically saves all your changes (subject to the limits of your browsers local storage), you can even close the browser in the middle of a game of DONKEY, and the next time you load this page, your progress (and the donkey) will be perfectly restored."

2
segmondy 1 day ago 2 replies      
Why are people complaining that the game is bad? Really? Can we see what you wrote in 1982? Games like this were quickly hacked together to show the capabilities of the machine. It was done in BASIC for owners of those machines to play with. If this had been a commercial product, it would have been written in assembly.

If anything, this was likely a demo of Microsoft BASIC; let's not forget that Bill Gates's first product was not an OS, but rather a BASIC interpreter written in 1975 for the Altair 8800, written in assembly on paper tape, without the actual hardware! This was written a good 5-7 years later.

While some may not like Bill Gates, he was probably a better programmer than most of you in his youth and this was before the Internet where it's now easy to get access to books, screencasts and lots of sample code via github. Give the man his respect.

3
Jerry2 1 day ago 4 replies      
Andy Hertzfeld recollects how Macintosh team dissected the new IBM PC and what they thought of donkey.bas:

http://www.folklore.org/StoryView.py?project=Macintosh&story...

4
pyromine 1 day ago 1 reply      
Here's a wiki link[0] for anyone else that didn't know what they were looking at.

0: https://en.wikipedia.org/wiki/DONKEY.BAS

5
adrianratnapala 1 day ago 1 reply      
I never played or even saw Donkey. Too young.

But I learned to program by playing and modifying Microsoft Nibbles -- which is still my favourite snake game. And possibly my favourite Microsoft product.

And QBasic is still the only IDE that I ever really liked. Although there are some newer ones that I respect.

6
32bitkid 1 day ago 2 replies      
My father bought the first family computer, an IBM PCjr, when I was 4 or 5. Donkey.BAS made a huge impact on me, although it's easy to overlook as a crap game these days. Sure, there were "better" games that I played on that system: my older brother purchased Sierra's Black Cauldron, which I played the shit out of. My uncle wanted me to grow up to be a pilot, so MS Flight Simulator was in order, too. Jordan Mechner's Karateka is still, to this day, mind blowing to me. And lest we forget some Broderbund and Microprose classics.

But donkey.BAS, in a way all those other things weren't, was my _first_ real introduction to programming. I was too young at the time to really recognize the value in being able to not only consume, but read the code, change it, learn from it. Unlike more polished games, I learned way more about programming from changing, breaking, and subverting donkey.bas than anything else on the PCjr. Sure, it wasn't _the best_, but it was the first time that someone pulled back the curtain and I was afforded a glimpse at what was possible, and what computers could _do_. Between that, poring through the "Hands On Basic" book[0], and typing in BASIC programs from COMPUTE magazine, I'm not sure that I'd be a programmer today, as hyperbolic as that sounds...

And some overly nostalgic part of me kind of misses doing PEEKs and POKEs into physical memory, and the summer I spent learning binary math because I didn't understand the relationship between the AH and AL registers of my 286, years later. Then I go back to writing CRUD applications in whatever JavaScript library is the flavor of the month.

I just hope that kids today have the same access to shitty, but accessible, chunks of code to help inspire them and show them what the machines they interact with everyday are really capable of.

[0]: http://www.brutman.com/BasementCleanout/IBM_Hands-On_BASIC/H...

7
cosenal 1 day ago 1 reply      
Context here (Jeff Atwood's blog post dated 2007): https://blog.codinghorror.com/bill-gates-and-donkey-bas/
8
donkeyd 1 day ago 3 replies      
This reminds me of my time writing games in BASIC on my calculator during math class. I suck at math because of this, but at least I'm an OK programmer.
10
kobayashi 1 day ago 0 replies      
Relevant to the mention of 4 AM: https://www.youtube.com/watch?v=ORYKKNoRcDc
11
cdevs 1 day ago 0 replies      
36 years ago they wrote every popular iPhone game ever.
12
amelius 1 day ago 0 replies      
"2 lanes ought to be enough for anybody"
13
ernestbro 1 day ago 2 replies      
GORILLAS.BAS for the win
14
dkhenry 1 day ago 0 replies      
This reminds me of the first computer games I ever wrote. I remember being able to get something so satisfying together in BASIC, with graphics, user input, and sounds. Then I remember moving to C and asking how I could draw a simple line, only to be told that it would require a few pages of boilerplate to set up, and that all my programs should just be text prompts.
15
lucisferre 1 day ago 0 replies      
Wow, This is actually one of the first games I remember playing. I was probably 4 or 5 and visiting my Dad at work. To keep me busy their IT guy showed me this game. I think it was also the only time I played it.

Shortly after that we got a PC at home and it came with a demo copy of EGA Golf which I think only had Pebble Beach. Sound quality was similar.

16
max_ 1 day ago 4 replies      
How old was Gates when he wrote this thing??
17
nickhalfasleep 1 day ago 0 replies      
The first code I ever wrote was modifying this in a few different ways on my parent's IBM XT to say inappropriate things and change the game dynamics.
18
stefanix 1 day ago 1 reply      
You could say the frustration of interaction is already apparent. The fact I cannot cut in right after a cow is so lame yet so familiar as a Windows user of the 90s.
19
restalis 1 day ago 0 replies      
36 years ago? That would mean 1980, but the code says "1981, 1982" in its copyright info.
20
bane 1 day ago 0 replies      
This has as much interaction as many mobile mega hits.
21
dragonbonheur 1 day ago 1 reply      
People forget that the intention of some programming demos is to explain some concepts simply. The intended audience of that code wasn't some expert programmer - it was for those who wanted to see how things worked so that they could see how the game sucked and once they learned enough they could improve upon it.

For expert BASIC code see Nibbles.BAS:

http://stanislavs.org/OldPages/stanislavs/src/nibbles.bas

22
avodonosov 1 day ago 0 replies      
It's pretty short
23
frnhr 1 day ago 0 replies      
Try hitting the donkey with the side of the car :)
24
kragen 1 day ago 1 reply      
https://robhagemans.github.io/pcbasic/ is a GPL3-licensed implementation of the BASIC language this game is written in. PC-BASIC is written in Python. Supposedly it can run this game, although I haven't tried it.

A couple of interesting features of the game (quotidian to those of us around at the time, but...)

1. The "PLAY" statement is barely used; the sounds are mostly done with the SOUND statement, maybe in part because they are being generated randomly. In fact it seems to be used only to verify that the program is being run on a BASIC interpreter that supports "advanced" features like DRAW.

2. The "DRAW" statement, which has its own mini-language similar to that of the "PLAY" statement. Later these were dubbed "Graphics Macro Language" and "Music Macro Language", even though neither one allows you to define macros. This is used to include vector graphics of the donkey and the racecar in the program, in the subroutines on lines 1940 and 1780, respectively. But the interpretive rendering of these vector graphics (and especially the flood fills, lines 1900 and 2010) was too slow to want to do it every frame; instead it's done at program startup (into an on-screen buffer, since that's the only way to do it in GW-BASIC or BASICA; you can see the painting happen briefly before the game starts) and stored in the arrays CAR% and DNK% with GET statements, later to be PUT onto the screen in the right place each frame. There's a bit of sloppiness there: CAR% is DIMmed right there in the subroutine on line 1910, while DNK% is DIMmed up at the top on line 1470. (And the sprites for the halves of the donkey and car are dimmed there too, along with a planned sprite called Q% which is never used.)

3. There's actually an additional sprite, B%, which isn't set up with a GET statement; it's filled in "by hand" to a simple fill pattern on lines 1510-1530. My memory was saying that the data format of this array was undocumented, but it does seem to be documented in http://www.antonis.de/qbebooks/gwbasman/, which I'm pretty sure is the actual GW-BASIC manual from Microsoft. Anyway, B% is a vertical line that's getting XORed into the framebuffer (the default PUT raster op was to XOR into the framebuffer, violating patent 4,197,590 if you use it for a cursor) to make the stripes down the middle of the road "move".

4. See how AND is being used as a bitwise operator? That's why true was -1 in MBASIC.

I think there are some important lessons in GW-BASIC/BASICA about how to design user interfaces for end-user programming, and the DRAW and PLAY statements in particular. Also, I can't have been the only person who never figured out how to use the vi-like line editor in BASIC-80 but who edited existing code all the time in Z-BASIC/GW-BASIC/BASICA because I could just use the arrow keys.

25
hclivess 1 day ago 0 replies      
imagine how many hours you need to spend studying BASIC to be able to write that
26
Kenji 1 day ago 0 replies      
Is anyone else impressed that it's only 131 lines long?? I have to do more work to set up a plain empty window and OpenGL when I create my games these days!
27
grhmc 1 day ago 1 reply      
Could we fix the title?
28
FoeNyx 1 day ago 0 replies      
So 3 decades ago the autopilot could already keep the car in the current lane but was not able to avoid the unicorn^W donkey? Drivers need to remain engaged and aware!
14
ARM founder says Softbank deal is 'sad day' for UK tech bbc.com
277 points by mbgaxyz  21 hours ago   210 comments top 19
1
cm3 17 hours ago 2 replies      
And in Germany minister Gabriel is trying to find a European buyer, while the Chinese Midea makes offers to buy Kuka Robotics. Somehow I have the feeling that the minister doesn't realize that Midea already owns 76% of Kuka's shares.

In a global economy, is it really a risk for already global companies to be owned by someone who isn't in said company's original HQ nation? What difference does it make, given that the whole operation is already distributed over the globe?

http://www.bloomberg.com/news/articles/2016-06-02/midea-tout...

http://www.reuters.com/article/us-kuka-midea-group-stake-idU...

2
grabcocque 20 hours ago 2 replies      
Probably should have thought of that before he ran ARM into the ground the first time and sought foreign bailouts.

Under his leadership, "sad days" included sizable foreign investments from Olivetti, Apple and GEC.

3
sohkamyung 20 hours ago 2 replies      
Isn't it a bit of a stretch for Hermann Hauser to say ARM 'sold' 15 billion chips last year and to compare ARM to Intel? Technically it was ARM licensees that sold those chips, not ARM.

Quote from the article: "The man [Hermann Hauser] who helped spin ARM Holdings out from Acorn Computers in 1990 also said the technology firm had sold 15 billion microchips in 2015, which was more than US rival Intel had sold in its history."

4
Nux 20 hours ago 6 replies      
Now that they got their country back, they can sell it off. :-)

The lower value of the pound has probably contributed to this, or hastened it.

5
ProfChronos 20 hours ago 1 reply      
Interesting timing: we have the same debate rising up in France with the acquisition of Medtech (Rosa medical robots) by Zimmer Biomet [1]. CEO and founder, Bertin Nahum, said he had to sell as he couldn't find the right funding in FR/EU. Real shame and also a sad day for France (med)tech

[1]http://www.reuters.com/article/idUSFWN1A40EO

6
dandare 20 hours ago 2 replies      
The sum almost feels low in the light of all the unicorn valuations.
7
petercooper 17 hours ago 1 reply      
A complaint I've seen about the UK's tech scene is that it can't be like Silicon Valley because there haven't been enough acquisitions to spread the wealth and encourage further and more diverse investment. But now there's an acquisition, people are moaning :-D
8
imtringued 14 hours ago 3 replies      
I would like to see some ARM chips that are competitive with x86 even if that still means they are only put in Chromebooks. The Tegra X1 is quite interesting but the dev board is too expensive for me ($599). It's kind of ironic that AMD's x86 cpus are actually the cheapest choice if you want relatively high performance because that is one of the big selling points of ARM.
9
matco11 7 hours ago 0 replies      
Considering that 1) the revenue base of ARM is predominantly in USD licensing fees, and 2) the British pound has lost ~20% against the dollar, the ~41% premium is really a ~21% premium. No chance a global leader like ARM could have been snatched for a 21% premium a few weeks ago.

Run a list of companies listed in London with predominantly non-GBP revenue base and you have a great list of takeover targets.

10
ionwake 14 hours ago 0 replies      
To think 2 weeks ago I was put forward there for an interview, and I just heard back that they are outsourcing it to a different team...

I can't help but feel this is not coincidental, and that I probably won't be hearing about any new roles any time soon.

11
ed_blackburn 20 hours ago 3 replies      
Can anyone express in non-emotional terms what the sale means for UK business?
12
m0llusk 6 hours ago 0 replies      
Developing technologies for the future of mankind is not inherently compatible with Nationalism.
13
kazinator 15 hours ago 0 replies      
That's nothing compared to all of 7-Eleven being Japanese owned. Also, Sapporo owns Canada's Okanagan Springs beer brewery. :)
14
petrikapu 20 hours ago 2 replies      
Is this all because of Brexit, or why else would they sell such a successful business?
15
tiatia 17 hours ago 0 replies      
Why a sad day? The UK has lost most of its industrial base, and the oil that fueled the rise of the UK under Thatcher is running out. The last thing I heard is that they want to make a free trade zone with China, so they wouldn't really need what's left of the industrial base anyway.

Probably they are trying to become an offshore business center with decent money laundering services.

16
cleeus 20 hours ago 2 replies      
aha, they intend to keep the ARM HQ in the UK. Well ... let's see what they say in 10 years because you know, once a company is sold, the buyer can do whatever the fuck he wants with the thing he just bought.
17
gonzo 19 hours ago 1 reply      
Knew before clicking that it would be "Herman the German"
18
fit2rule 20 hours ago 0 replies      
I wonder now if Softbank/Japan is going to be brought into the "five-eyes" agreements somehow, now that they'll have control over all the ARM processors we'll be using on the planet, and thus will also inherit all the backdoors put there for that bastion of British Pride, the GCHQ ..
19
stuaxo 21 hours ago 0 replies      
Not sure what will happen; the ways of running businesses in the two countries are very different and often not really compatible.
15
Passport Index 2016 passportindex.org
340 points by dominotw  2 days ago   237 comments top 33
1
hal9000xp 2 days ago 5 replies      
I had citizenship of one of the most repressive and poorest countries in the world - Uzbekistan. Later I obtained Russian citizenship. Now I'm living in the Netherlands and plan to obtain citizenship of some first world country (NL, SG etc).

So I have first-hand experience of living with an Uzbek passport and being practically completely isolated from the first world (nobody gives you an EU tourist visa if you have an Uzbek passport and you are not a "special" person).

Now that I have a Russian passport it's a bit easier, but I'm still struggling to get even a tourist US visa (I was refused twice).

Changing passports is a very long and painful process. It can significantly change the course of your life (like in my case).

2
sevenless 2 days ago 6 replies      
Is there a guide to multiple citizenship power leveling anywhere?

Edit: A bit of munging from the web page later...

For Americans and British/Western Europeans, the passports you want are Burkina Faso, Benin and Senegal. They will bring your passport power level up by 13 points to 168 (American) and 170 (UK) and give you visa-free access to Côte d'Ivoire (Ivory Coast), Mali, Iran, Congo, Niger, Benin, Nigeria, Liberia, Central African Republic, Sierra Leone, Guinea, Chad, Ghana. In reality these countries may not allow dual citizenship! Benin and Burkina Faso however do recognize dual citizenship, while Senegal does not. (http://www.multiplecitizenship.com/worldsummary.html)

The highest power level you can have as a dual citizen is 170. Any combination of these three African countries and Western European nations will achieve this. So the most powerful dual citizens are mostly West African migrants.

Now for triple citizens, the highest possible power is a whopping 178. There are a few combinations that do this, involving those 3 African countries, plus Russian Federation, Singapore, Japan and Turkey. For example, Turkey/Japan/Senegal, Burkina Faso/Russia/Singapore, Japan/Benin/Turkey. So it looks like the path to ultimate power is sadly blocked for those of us in the USA and Europe, and in fact, the most powerful dual citizens cannot become the most powerful triple citizens. On the other hand, Senegal/United States of America/Malaysia will get you 177, nearly as good.
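A minimal Python sketch of the brute-force "power levelling" search described above; the visa-free sets here are tiny made-up placeholders, not the real Passport Index data:

  from itertools import combinations

  # Hypothetical toy data: passport -> set of visa-free destinations
  visa_free = {
      "United States": {"Canada", "Japan", "Chile", "Mali"},
      "Senegal": {"Mali", "Benin", "Iran", "Chad"},
      "Japan": {"Canada", "Chile", "Singapore"},
      "Turkey": {"Iran", "Singapore", "Benin"},
  }

  def power(passports):
      reachable = set()
      for p in passports:
          reachable |= visa_free[p]   # union of destinations across held passports
      return len(reachable)

  for k in (2, 3):
      best = max(combinations(sorted(visa_free), k), key=power)
      print(k, best, power(best))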

3
honkhonkpants 2 days ago 5 replies      
Let's comment on the visual design of passports. These are all basically the same, except two that stand out: Switzerland and The Vatican. The former is very typically Swiss, with an attractive, simple, and modern design. The latter is sinister and mysterious, but also quite distinctive.

Honorable mention to Brazil and Bosnia for having passports without a coat of arms type of thing on the front.

4
hypertexthero 2 days ago 0 replies      
Reminds me of [Quest][1] by sailor-philosopher and self-declared citizen of the world [George Dibbern][2], who renounced his German passport in 1940 and created his own with the following declaration:

> I, George Dibbern, through long years in different countries and sincere friendship with many people in many lands feel my place to be outside of nationality, a citizen of the world and a friend of all peoples.

> I recognize the divine origin of all nations and therefore their value in being as they are, respect their laws, and feel my existence solely as a bridge of good fellowship between them.

[1]: http://www.georgedibbern.com/aboutdibbern.html
[2]: http://www.georgedibbern.com/quest-dibbern.html

5
danra 2 days ago 7 replies      
Anyone else reminded of Papers, Please? Imagine having to recognize a fake passport by a slight difference in symbol/text format :)
6
Freak_NL 2 days ago 4 replies      
Looking at the passports that provide visa-free access to 158, 157, and 156 countries, I wonder why Viet Nam and Rwanda grant visas on arrival to specifically some European countries, but not others.

So for Viet Nam; Denmark, Germany, Sweden and Spain are okay, but not Austria, Belgium and The Netherlands.

Interesting visualisation; the map is missing South Sudan though.

7
valine 2 days ago 0 replies      
The consistency of the design between passports is remarkable. They all follow a basic pattern: country name at the top, centered logo, then the word 'passport' at the bottom. There are only a handful of countries that deviate from the pattern. It's almost like there is a universal passport design language. Does anyone know if there is an official specification countries follow?

Edit: The International Civil Aviation Organization issues the specification. https://en.m.wikipedia.org/wiki/International_Civil_Aviation...

8
tuna-piano 2 days ago 1 reply      
In case anyone else was curious as to the difference between "Visa on Arrival" and "Visa Free", check out this Quora thread: https://www.quora.com/What-is-the-difference-between-visa-on...

It seems the practical difference is mainly in name only, although it's possible that immigration officers are given more leeway to restrict entry for the "visa on arrival" countries.

9
dorfsmay 2 days ago 1 reply      
The passports with a symbol of a circle inside a rectangle at the bottom include biometric information:

https://en.m.wikipedia.org/wiki/Biometric_passport

10
the_mitsuhiko 2 days ago 0 replies      
Last time I checked, this index was pretty crappy (as in inaccurate and spotty; for instance it claims that Swiss citizens can access North Korea visa-free). Henley & Partners have one that is more reliable. I spot-checked last year's version of H&P and this website against Timatic, which is the source for both, and H&P was at least mostly correct.

Unfortunately there is no good way to get access to this information in machine readable format. It's all free form text and full of special rules.

11
Animats 1 day ago 0 replies      
There are some multinational passports. There's the European Union laissez-passer.[1] This is issued to European Union officials who travel on EU business, and is accepted as a passport within the EU and by about 100 non-EU countries. There's also a United Nations laissez-passer, for UN officials.[2] That's not as widely accepted as the EU one.

[1] https://en.wikipedia.org/wiki/European_Union_laissez-passer
[2] United Nations laissez-passer

12
frobozz 1 day ago 0 replies      
Mildly disappointed that this appears to only be a list of national passports.

It would be nice to see the World[1] and Sovereign Military Order of Malta[2] passports on here, as well as any others that I don't know about

That said, neither of them align well with the purposes of the company running the site - securing multiple citizenships for rich people.

The latter requires rather a lot of effort to get, as you need to work up to become one of the three senior officials of the order (which, I think, requires a vow of poverty).

The former costs as little as $55 for three years, and doesn't have any visa-free arrangements.

[1] http://www.worldservice.org/visas.html

[2] https://en.wikipedia.org/wiki/Sovereign_Military_Order_of_Ma...

13
morgante 2 days ago 1 reply      
It would be nice for this ranking to be expanded to include employment/work access beyond tourist visits.

I'm working on acquiring an Italian passport not because it will allow me to enter more countries without a visa (an American passport is fine for that) but because it allows long-term working/residence in EU countries. Unfortunately, this ranking doesn't capture that dynamic.

14
tegeek 2 days ago 1 reply      
Incidentally, the Germanic nations (Germany, Sweden, Finland, Denmark, Switzerland etc.) always rank in the top 5 in almost every ranking of this kind.
15
clarkmoody 2 days ago 0 replies      

 National borders are imaginary lines on a map that people are willing to kill people for; just for the privilege to keep imagining the lines [1]
[1] http://www.kentforliberty.com/borders.html

16
mthoms 2 days ago 2 replies      
No mention yet of the very best passport to have - https://en.wikipedia.org/wiki/United_Nations_laissez-passer

(It's baby blue of course) It's kind of neat to see them in the wild.

17
tenpies 2 days ago 1 reply      
What is it with South American countries not being able to lay things out properly? Peru looks like someone forgot to centre everything and by the time they noticed they had already printed hundreds of thousands of passport covers, so they just went with it.
18
guelo 1 day ago 0 replies      
If I imagine a completely open world with no visas I think the "first world" would stop being desirable. USA and Europe would be flooded with hundreds of millions of people and would develop giant slums. It would be bad not only for the current people that live there but also for the people that currently want to move to a better place since there wouldn't be a better place to move to.
19
jonathankoren 2 days ago 0 replies      
Honduras and Guatemala have the worst passport covers in the world. They're both the same political map of Central America with their respective countries shaded in. It looks like they both farmed this out to some 10 year old that had never seen a passport before. Both countries have perfectly good coats of arms. Why not use that? Instead it's like a geography lesson. ("I've been working in passport control for years, and I don't know where your country is." "Well, sir, it's this one right here." "Oh nice. Thanks. Learn something every day!")
20
xxxxxxxx 2 days ago 2 replies      
Australia is such an unwelcoming country it's embarrassing. I suspect it's island mentality. At the airport we treat all arriving passengers as criminals - guilty of some terrible crime until cleared with a full cavity search. To all our visitors - past, present and future - I apologise for our stupid bureaucracy and politicians. Once you get out of the airport things do get a little better.
21
laut 2 days ago 0 replies      
On the map (https://www.passportindex.org/byRank.php) Greenland should probably have a "more powerful" colour since they get a Danish passport.
22
galori 1 day ago 0 replies      
Just opening this page makes me feel like I'm Jason Bourne and a swat team is about to crash through my window. I closed the tab.
23
tuna-piano 2 days ago 6 replies      
In relation to passport power - I find Trump's proposed ban on Muslims pretty interesting to think about. Although widely criticized, it seems to me more ethically wrong to base entry decisions on country of origin (not a choice) than on religion (a choice).

Obviously there are many issues with the practicality, unintended consequences, and the blanket nature of a ban on muslims. That said, I'm not sure I see an ethical issue compared to restricting based on country of origin (what this site shows).

24
akita-ken 2 days ago 1 reply      
I wonder how countries choose which colour to use for their passports. Black and purple seem to be the rarest colours, with generally no light colour (other than bright red) being used.
25
wineisfine 2 days ago 0 replies      
Did you know Cuba has agreements with individual countries? For example, for Serbs it's one of the few hassle-free countries to travel to.
26
raz32dust 2 days ago 0 replies      
Chinese passport holders require a visa for Hong Kong even though it is visa-free for 145 countries!

And visiting Iran is visa-free for 186 countries!

27
tragomaskhalos 2 days ago 0 replies      
This is fascinating but I found myself frustrated by the lack of an obvious key to the acronyms used in the rankings
28
nerdponx 2 days ago 0 replies      
It would be nice if they explained what these scores mean. Or am I missing something?
29
YeGoblynQueenne 2 days ago 0 replies      
The pope has a cool passport.
30
aearm 2 days ago 0 replies      
The ranking isn't true - the Syrian passport doesn't have 32 visa-free countries.
31
exit 2 days ago 0 replies      
nationality is segregation
32
nononosisisi 2 days ago 2 replies      
You forgot pizza
33
stonogo 2 days ago 2 replies      
I find it interesting they rate a passport's "power" by where you can go away to, instead of by where you go home to.
16
The History of the URL: Path, Fragment, Query, and Auth eager.io
322 points by zackbloom  1 day ago   71 comments top 18
1
WiseWeasel 1 day ago 1 reply      
>Given the power of search engines, it's possible the best URN format today would be a simple way for files to point to their former URLs. We could allow the search engines to index this information, and link us as appropriate:

 <!-- On http://zack.is/history -->
 <link rel="past-url" href="http://zackbloom.com/history.html">
 <link rel="past-url" href="http://zack.is/history.html">
This seems like a heart-warmingly naive proposal; cross-domain search result hijacking.

2
orf 1 day ago 4 replies      
Perhaps the author has a point, but I don't think it's as bad as he puts it. Grandma and Grandad don't need to learn UNIX paths to use the net, they just need to know one thing: the hostname. They type in "google.com" or "walmart.com" and that's it, not hard to grok. Perhaps they don't fully get what .com means but they know it's needed and some sites have different ones.

You never see an advert with "visit our site at http://user:pass@mycompany.com/app/page1244.html?abc=def#hel...", it's just "mycompany.com". The structure of the URLs once the user has hit the landing page and clicked a couple of links doesn't need to be known by the user, nor should the user care.

To put it simply: if it was hard then the web wouldn't be as big as it is now. Non-technical people understand simple URIs and that's all that's needed.

3
myfonj 1 day ago 2 replies      
Great article indeed. What I personally missed there was a clear statement about segment (or path) parameters as opposed to query ("search") parameters:

 //domain.tld/segment;param1=value1,param2/segment?query=..
Clearly nowadays it seems a bit odd. I bumped into it while reading about this in "HTTP: The Definitive Guide" from 2002, chapter "2.2 URL Syntax". I actually read this just a few years ago and have pondered ever since whether this aspect was ever really adopted by some wild CGI in the times I haven't experienced myself.

The Definitive Guide apparently paraphrased RFC 2396, which clearly defined the semicolon as a segment parameter delimiter, but that RFC was later obsoleted [a] by RFC 3986, which moved the former standard to "possible practice" [b], stating that:

> URI producing applications often use the reserved characters allowed in a segment to delimit scheme-specific or dereference-handler-specific subcomponents. For example, the semicolon (";") and equals ("=") reserved characters are often used to delimit parameters and parameter values applicable to that segment. The comma (",") reserved character is often used for similar purposes. For example, one URI producer might use a segment such as "name;v=1.1" to indicate a reference to version 1.1 of "name", whereas another might use a segment such as "name,1.1" to indicate the same. Parameter types may be defined by scheme-specific semantics, but in most cases the syntax of a parameter is specific to the implementation of the URI's dereferencing algorithm. [c]

[a] wouldn't have noticed without: http://stackoverflow.com/questions/6444492/can-any-path-segm...
[b] https://tools.ietf.org/html/rfc3986#section-3.3
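A minimal Python sketch of pulling those semicolon-delimited segment parameters out of a path (not a full RFC 3986 parser, just the idea):

  from urllib.parse import urlsplit

  def split_segment_params(url):
      segments = []
      for raw in urlsplit(url).path.split("/"):
          if not raw:
              continue
          name, _, param_part = raw.partition(";")
          params = param_part.split(",") if param_part else []
          segments.append((name, params))
      return segments

  print(split_segment_params(
      "http://domain.tld/segment;param1=value1,param2/other?query=1"))
  # [('segment', ['param1=value1', 'param2']), ('other', [])]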

5
bduerst 1 day ago 3 replies      
Just a random URL fact: the TLDs for domains could technically be used as domains as well, if ICANN ever allowed it.

It wouldn't make sense for TLDs like com, net, org, etc., but for trademarked TLDs like barclays, youtube, pwc, etc. visitors could essentially go straight to that webpage with the TLD, like https://youtube

6
userbinator 1 day ago 1 reply      
IMHO the arguments given in the quotes in the article are not very compelling, in view of the fact that humans have been identifying physical locations with far more complex and inconsistent systems for literally centuries:

https://news.ycombinator.com/item?id=8907301

Compared to "real-life addresses", URLs are an absolute pleasure to handle and understand; which naturally raises the question of why so many people seem to have trouble, or are suggesting that others do, with URLs? It's just a "virtual" address, in an easily-parseable format for identifying a location in "cyberspace". Perhaps its the "every time you put an intelligent person in front of a computer, his/her brain suddenly disappears" syndrome (for lack of a better word)?

The advocacy of search engines instead of URLs is also not such a great idea; sadly, search engines like Google today do not work like a 'grep' that lets you find exactly what you're looking for, do not index every page (or allow you to see every result, which is somewhat equivalent), and the results are also strongly dependent upon some proprietary ranking algorithm which others have little to no control over. If relying on links and having them disappear is bad, relying on SERPs is even worse since they are far more dynamic and dependent on many more external factors which may even include things like which country your IP says you're from when you search.

Search engines are definitely useful for finding things, but as someone who has a collection of links gathered over many years, most of which are still alive and yet Google does not acknowledge the existence of even when the URL is searched, I am extremely averse to search engines as a sort of replacement for URLs.

7
agentgt 9 hours ago 0 replies      
In the Java world URL/URI/URN construction, escaping, unescaping, manipulation is a confusing disparate buggy mess.

Of course there is the obvious broken `java.net.URL` but there are so many other libraries and coding practices where programmers just continuously screw up URL/URI/URNs over and over and over. It is like XML/HTML escaping but seems to be far more rampant in my experience (thankfully most templating languages now escape automatically).

In large part I believe this is because of the confusion of form encoding and because of the URI specs following later that supersede the URL specs (but actually are not entirely compatible).

In our code base alone we use like 8 different URL/URI libraries: HTTP Components/Client (both 3 and 4), Spring has its own stuff, JAX-RS aka Jersey has its own stuff, the builtin crappy misnamed URLEncoder, the not-that-flexible java.net.URI, and several others that I can't recall. I'm surprised Guava hasn't joined in the game as well.

I would love a small decoupled URL/URI/URN library that does shit easily and correctly. URL templates would be nice as well. I have contemplated writing it several times.

8
anyfoo 1 day ago 1 reply      
Tim Berners-Lee thought we should have a way of defining strongly-typed queries: <ISINDEX TYPE="iana:/www/classes/query/personalinfo"> I can be somewhat confident in saying, in retrospect, I am glad the more generic solution won out.

The author does not explain why he is glad that the more generic solution won out. Having strongly-typed queries might have brought us much closer to some approximation of a practical "semantic web" and done some wonders for web services, accessibility and others.

Maybe he is glad because not having any strong typing allowed us to have the flexible, completely free-form web interfaces we have today, but who's to decide that that wouldn't have emerged anyway; maybe even in a slightly saner form than the horrible mess we have today.

On a completely unrelated note:

Given the power of search engines, it's possible the best URN format today would be a simple way for files to point to their former URLs.

This raises immediate concerns about security and spam, but that may be solvable somehow.

That being said, I really enjoyed reading this thoroughly researched history a lot, even more than the previous, also great installment about CSS's history. (But my preference is just because I'm not a Web/Design guy.)

9
nommm-nommm 1 day ago 0 replies      
If anyone is wondering (like I was) what the referenced ISBN is, it's The C Programming Language by Brian Kernighan and Dennis Ritchie.
10
shmerl 1 day ago 1 reply      
> rendering any advertisement which still contains them truly ridiculous

I don't find it ridiculous, since despite gopher going out of existence and ftp being a minority, the http vs https distinction is quite important to the present day, especially considering that a redirect from http to https can be insecure and the proper way to open many sites is explicitly with https://

It might get fixed in the future, but it didn't happen yet.

11
mkoryak 1 day ago 1 reply      
> As early as 1996 browsers were already inserting the http:// and www. for users automatically.

but not the default install IE9. Figures.

12
utopcell 1 day ago 1 reply      
This is a well-written article, but I find this criticism of URLs a bit exaggerated. We needed to reach an agreement on URL structure and there is little point in changing that now. Hardly anyone would abandon English for the unambiguous Lojban (https://en.wikipedia.org/wiki/Lojban) for example.

As for non-expiring identifiers to pages/content, the 'expired' URLs are as good identifiers as anything, considering URL redirects exist.

I did find this tidbit of information quite intriguing: the creator of the hierarchical file system attributes his creation to a two hour conversation he had with Albert Einstein in 1952!

13
RockyMcNuts 1 day ago 3 replies      
I always suspected there must have been 2 archaic null tokens between the :// of http://...

If it was a design choice to use a 3-character separator instead of a single character, it seems an odd choice.

14
kevin_thibedeau 1 day ago 1 reply      
> This system made it possible to refer to different systems from within Hypertext, but now that virtually all content is hosted over HTTP, may not be as necessary anymore.

WS and WSS will become more and more commonplace over time. I like that a 25+ year old protocol is forward compatible enough to accommodate new methods of network communication. It could be debatable whether HTTPS and WSS are necessary for a URL but they give a hard guarantee that a secure connection will be made and not silently downgraded for those who care about such things.

15
Azy8BsKXVko 1 day ago 1 reply      
I like the idea of URNs, mostly because I just think they're cool. They'd be pretty infeasible though.
16
microDude 1 day ago 2 replies      
I am not a web developer, but I have to ask.

Using basic authentication over SSL, does that mean if you entered https://user:pass@domain that the user and pass would be sent in the clear, or does this get put into the header and encrypted?

17
javajosh 1 day ago 1 reply      
As an aside, it would be interesting to adopt a convention where certain links included a hash of what it pointed to, avoiding the case that the (sub-)resource changed out from underneath the link. This implies that you'd want to link to a specific version of a (sub)resource. E.g. you could do something like what github does with:

https://github.com/USER/PROJECT/commit/COMMIT_HASH#DIFF_HASH

where the DIFF_HASH is a fragment pointing to a particular resource within the commit.
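A small Python sketch of that convention - embedding a content hash in the link so a client can notice when the (sub-)resource has changed underneath it. The URL, fragment scheme and helper names are hypothetical, not GitHub's:

  import hashlib

  def link_with_hash(url, resource_bytes):
      digest = hashlib.sha256(resource_bytes).hexdigest()[:16]
      return f"{url}#sha256-{digest}"

  def still_matches(link, fetched_bytes):
      fragment = link.partition("#")[2]
      expected = fragment[len("sha256-"):]
      return hashlib.sha256(fetched_bytes).hexdigest().startswith(expected)

  doc = b"<html>a specific version of the resource</html>"
  link = link_with_hash("https://example.com/page", doc)
  print(link, still_matches(link, doc))          # True
  print(still_matches(link, doc + b" edited"))   # False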

18
tantalor 1 day ago 0 replies      
> The same thinking should lead us to a superior way of locating specific sites on the Web.

Try http://google.com

17
Cron checker crontab.guru
328 points by otoolep  3 days ago   126 comments top 18
1
Animats 3 days ago 11 replies      
This is why the UNIX approach of using flat text files for dynamic information is obsolete. It seems so simple to have a text file. But then you need an editor, a locking system, access control, and a checker. And probably something to remove orphaned lock files and detect corrupted data.

Those are all database functions. In UNIX/Linux, each configuration file has its own mechanism for all that. They're all inferior to SQLite.

2
joelthelion 3 days ago 4 replies      
What I'd like is a way to check that the command will run properly. Waiting for one minute to test is not productive at all. And no, testing in your shell won't do, since cron runs in a particular environment.
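One rough way to approximate this without waiting for the next minute: run the command under a stripped-down, cron-like environment rather than your interactive shell. The exact variables cron sets vary by system (see man 5 crontab), so the environment below is only an assumption, and the job path is a hypothetical example:

  import subprocess

  cron_like_env = {
      "PATH": "/usr/bin:/bin",
      "SHELL": "/bin/sh",
      "HOME": "/home/youruser",     # hypothetical
      "LOGNAME": "youruser",        # hypothetical
  }

  result = subprocess.run(
      ["/bin/sh", "-c", "/home/youruser/bin/backup.sh"],   # hypothetical job
      env=cron_like_env, capture_output=True, text=True,
  )
  print(result.returncode)
  print(result.stderr)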
3
ajmurmann 3 days ago 1 reply      
Looks like a more polished version of https://cronwtf.github.io minus the awesome name and cat picture
4
thomseddon 3 days ago 0 replies      
Looks great! I also find the other way around very convenient: http://thomseddon.github.io/cronstring/
5
taspeotis 3 days ago 2 replies      

 5 4 W * *
Really gave Chrome some grief.

7
0xmohit 3 days ago 0 replies      
Crontab generator [0] is a good alternative that doesn't feature broken JS [1]. Moreover, the example and configuration page on the wikipedia page [2] are clear enough.

[0] http://crontab-generator.org/

[1] https://news.ycombinator.com/item?id=12105516

[2] https://en.wikipedia.org/wiki/Cron

8
agumonkey 3 days ago 1 reply      
I learned more with one minute of this than all previous attempts at rtfm since 2002. Kudos

http://crontab.guru/#*_5/6_*_*_*

9
krzrak 3 days ago 0 replies      
"15 14 1/2 2 *" -> At 14:15 on the 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29 and 31st in Feb. :)
10
krick 3 days ago 2 replies      
Why not just replace cron with something a bit more flexible in terms of settings and config formats? Seriously, I guess the time spent making this website should be comparable with rewriting cron; it's not like cron is a complicated piece of software.
11
lugus35 3 days ago 0 replies      
It would be nice to have the inverse: write in plain English what you want and it gives you the crontab line.
12
Pamar 3 days ago 0 replies      
Hopefully this will attract lots of cron experts. systemd experts are fine too... I have a question for you all: https://news.ycombinator.com/item?id=12084455
13
fideloper 3 days ago 0 replies      
Can anyone find (confirm) flaws in it? For example, this gives me the same result for

 0/5 * * * *
as it does for

 */5 * * * *
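Expanding the minute field by hand suggests those two really are equivalent, assuming "start/step" semantics for 0/5 (which is what crontab.guru appears to report); a quick Python check:

  def expand_minutes(field):
      if "/" in field:
          base, step = field.split("/")
          start = 0 if base == "*" else int(base)
          return set(range(start, 60, int(step)))
      if field == "*":
          return set(range(60))
      return {int(v) for v in field.split(",")}

  print(expand_minutes("0/5") == expand_minutes("*/5"))  # True
  print(sorted(expand_minutes("3/15")))                  # [3, 18, 33, 48]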

14
gourneau 3 days ago 0 replies      
This is a super great tool.

If any of y'all are looking for a more modern cron replacement for running periodic tasks with nice web UIs, here are a few other options:

* Rundeck http://rundeck.org/
* Stackstorm https://docs.stackstorm.com
* ndscheduler https://github.com/Nextdoor/ndscheduler

15
CraigJPerry 3 days ago 0 replies      
Heh it even models the weird behavior you get when you specify day of month AND day of week.

All other fields are ANDed together; these two are ORed.
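A tiny Python illustration of that quirk (only "*" and comma lists are handled; it's not a full cron parser): when both day-of-month and day-of-week are restricted, a time matches if either one matches, while everything else is combined with AND.

  from datetime import datetime

  def field_matches(field, value):
      return field == "*" or value in {int(v) for v in field.split(",")}

  def cron_matches(minute, hour, dom, month, dow, when):
      dom_ok = field_matches(dom, when.day)
      dow_ok = field_matches(dow, when.isoweekday() % 7)   # cron: 0 = Sunday
      if dom != "*" and dow != "*":
          day_ok = dom_ok or dow_ok                        # the OR special case
      else:
          day_ok = dom_ok and dow_ok
      return (field_matches(minute, when.minute) and field_matches(hour, when.hour)
              and field_matches(month, when.month) and day_ok)

  # "0 12 13 * 5" fires at noon on the 13th OR on any Friday:
  print(cron_matches("0", "12", "13", "*", "5", datetime(2016, 7, 15, 12, 0)))  # True (a Friday)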

16
boot13 3 days ago 0 replies      
I love this. Thank you.
17
mangix 3 days ago 3 replies      
who uses cron when you have systemd anyway?
18
yoavm 3 days ago 2 replies      
what about @reboot ?
18
Cloudflare ReCAPTCHA De-Anonymizes Tor Users cryptome.org
248 points by walterbell  12 hours ago   105 comments top 20
1
mmaunder 9 hours ago 1 reply      
"The Tor design doesn't try to protect against an attacker who can see or measure both traffic going into the Tor network and also traffic coming out of the Tor network. That's because if you can see both flows, some simple statistics let you decide whether they match up."

https://blog.torproject.org/blog/one-cell-enough

Work on a client to try and mitigate the risk of timing attacks:

https://news.ycombinator.com/item?id=9585466
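A back-of-the-envelope version of those "simple statistics": correlate the bytes-per-interval pattern seen entering the network with patterns seen leaving it. The traces below are synthetic toy numbers, purely to illustrate the idea:

  def correlation(xs, ys):
      n = len(xs)
      mx, my = sum(xs) / n, sum(ys) / n
      cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      sx = sum((x - mx) ** 2 for x in xs) ** 0.5
      sy = sum((y - my) ** 2 for y in ys) ** 0.5
      return cov / (sx * sy)

  entry_side = [0, 1200, 0, 0, 800, 4000, 0, 300]        # bytes per interval
  exit_same  = [0, 1250, 0, 0, 790, 4100, 0, 310]        # same flow, re-observed
  exit_other = [600, 400, 900, 200, 700, 300, 800, 500]  # unrelated flow

  print(correlation(entry_side, exit_same))   # close to 1.0
  print(correlation(entry_side, exit_other))  # much lower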

2
jgrahamc 10 hours ago 4 replies      
This short piece doesn't have much detail. But if reCAPTCHA is usable to deanonymize Tor users then I would like to know about it in detail so I can do something about it.
3
bostik 9 hours ago 0 replies      
In other news, a global passive adversary can use traffic analysis, timing data, and known patterns to deanonymise a Tor user.

The only "new" thing here was the rough traffic pattern analysis of CF captcha page.

4
pyromine 11 hours ago 5 replies      
I didn't realize just how fragile TOR is... While I understand that remaining anonymous requires adjusting your browser habits somewhat extensively, the fact that a ReCAPTCHA is enough to (theoretically) de-anonymize a user suggests to me that it's not able to anonymize browsing at all.

While TOR may be useful for evading firewalls, my general perception of the project has changed from general anonymity tool to a tool tailored for very specific use.

Granted, this is probably what my understanding always should have been.

5
cuonic 11 hours ago 1 reply      
One way around this is to disable JavaScript for ReCAPTCHA; the service then provides you with a rather primitive HTML form with checkboxes over the images, generating only one request on submit.
6
Johnny555 9 hours ago 2 replies      
Why is this phrased as if it's Cloudflare's fault?

If it's this easy for a side effect of a reCAPTCHA image to de-anonymize a Tor user, then this seems like a failing of the Tor protocol that they should fix. Maybe they need to introduce more jitter, repackage requests into a single stream with consistent (or randomized) packet sizes, or pad the packets with random data.

7
sp332 9 hours ago 0 replies      
Isn't this explicitly outside Tor's threat model? https://svn.torproject.org/svn/projects/design-paper/tor-des... See section 3.1
8
captainmuon 8 hours ago 1 reply      
Huh, I always thought that Tor breaks up traffic in a random, but deterministic (not data dependent) way - sometimes joining data from two packets into one network packet, sometimes splitting packets and holding data for a while [x]. That's how I explained the jitter to myself. Sometimes a connection would be really fast, and sometimes it would hang on a single packet for hundreds of ms. Seems I was mistaken.

In this case, it would have helped a bit, since an attacker would not have seen the characteristic staccato of the reCAPTCHA exchange. They would have seen a few kB in either direction, in 40-100 packets, over a period of a few seconds. If the implementation is clever, one end would even have a different signature than the other.

At least this is something I would have included in Tor. Now that I think about it, randomly introduced delays (from the outside) might actually be a technique to deanonymize users....

([x] You'd generate packet sizes and minimum transmission times from a known seed. First packet is 501 B, 24 ms later a packet of 2048 B, then 15 ms later one of 1718 B, and so on. If there is not enough data after a grace period, pad with junk. If you constantly need more time to send packets than allowed, or need to pad, then adjust the model. Also choose the model to match regular traffic if possible. Disclaimer: I'm just making this up on the spot and am no expert, but it seems plausible and obvious to me.)
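A minimal Python sketch of the scheme in that footnote - deriving packet sizes and minimum gaps from a shared seed and padding with junk, so the on-wire pattern depends on the seed rather than on the payload. Purely illustrative; this is not how Tor actually works:

  import random

  def schedule(seed, slots=5):
      rng = random.Random(seed)
      # (packet size in bytes, minimum gap in milliseconds)
      return [(rng.randint(500, 2048), rng.randint(10, 30)) for _ in range(slots)]

  def shape(payload, seed):
      frames, offset = [], 0
      for size, gap_ms in schedule(seed):
          chunk = payload[offset:offset + size]
          offset += len(chunk)
          frames.append((chunk + b"\x00" * (size - len(chunk)), gap_ms))  # junk-pad
      return frames

  for frame, gap in shape(b"GET /recaptcha/api.js ..." * 50, seed=42):
      print(len(frame), gap)   # sizes/gaps depend only on the seed, not the data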

9
mikegerwitz 11 hours ago 1 reply      
Traffic analysis is always a problem; this is a specific case, but I'm not sure this is anything new.

Many attacks on Tor are facilitated by or require JavaScript. Consider disabling it rather than executing arbitrary, untrusted software on your computer automatically.

10
hewhowhineth 6 hours ago 0 replies      
I stopped visiting sites with image recognition reCAPTCHAs. It has to be one of the worst UX patterns ever devised. It's dirt cheap to automate them away, so it doesn't really stop any self-respecting bot maker, and it comes at the price of being a huge pain in the ass for a real user. Every time I run into them I feel used and abused.

It's really sad. So much brain power and this is what they come up with.

Apologies for the rant, couldn't help it. ReCAPTCHA is one of very few things I genuinely hate.

11
matt_wulfeck 9 hours ago 1 reply      
It's bizarre that this article is critical of Cloudflare. If TOR can't stand up to a recaptcha without leaking PII, then it sounds like TOR ultimately needs to be fixed.

I stand by Cloudflare. So much malicious traffic comes through Tor that administrators need to do a lot to protect themselves from it.

12
tlrobinson 10 hours ago 1 reply      
Are there any anonymity networks that transmit streams of packets between nodes at a constant rate regardless of whether it's being actively used?

Obviously it would be a very bandwidth hungry network, though if exit node bandwidth is currently the limiting factor (is it?) then maybe not entirely impractical.

13
gnud 11 hours ago 4 replies      
I wonder why these anti-abuse systems don't use proof-of-work. Instead of a captcha, let the browser chug for 5 seconds, and then POST the solution in order to gain a temporary access cookie.

Sure, this could be attacked - but not at scale, and that's the whole point of the captcha anyway, right?
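A minimal hashcash-style sketch of that idea: the server hands out a challenge, the client burns CPU until a hash has enough leading zero bits, then posts the nonce to get its temporary access cookie. The parameter choices here are assumptions:

  import hashlib, os

  DIFFICULTY_BITS = 20  # tune so an average browser needs a few seconds

  def solve(challenge):
      nonce, target = 0, 1 << (256 - DIFFICULTY_BITS)
      while True:
          digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
          if int.from_bytes(digest, "big") < target:
              return nonce
          nonce += 1

  def verify(challenge, nonce):
      digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
      return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

  challenge = os.urandom(16)       # issued per visitor by the server
  nonce = solve(challenge)         # done client-side, e.g. in JS/wasm
  print(verify(challenge, nonce))  # True -> grant a temporary access cookie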

14
the8472 8 hours ago 0 replies      
Don't most recaptcha http requests go to google, i.e. wouldn't google be the one with the information/control necessary to de-anonymize?
15
Illniyar 8 hours ago 2 replies      
A lot of comments here talk about recaptcha having a distinctive traffic signature, but I don't understand this.

Why does recaptcha have a distinct signature and if it does couldn't an attacker just make a distinct signature without recaptcha?

And why does recaptcha have a traffic signature that can distinguish between users? I mean, how does a simple request/response create distinct traffic?

16
mabbo 9 hours ago 0 replies      
>No one is that incompetent.

Well, I'm not sure I'd go that far.

17
muthdra 10 hours ago 0 replies      
"No one is that incompetent."Yeah I don't think so. Beautiful article, otherwise.
18
gcb0 11 hours ago 1 reply      
does that captcha work without JavaScript?
19
lumberjack 10 hours ago 1 reply      
Browser signatures are probably easier still.
20
LinuxFreedom 9 hours ago 2 replies      
It is a USA company - that is enough not to trust them.

We do not need any more evidence; there is enough out there about gag orders, secret courts, and worldwide compromise of network security.

USA tech company employees and founders, read this: please move out of the country, build your companies in other places, do it now. There is no time to waste. You cannot repair the system that corrupt bureaucrats have irreversibly destroyed.

It will take one or two generations to rebuild a freedom-oriented democracy in some other place. Currently Europe still seems to be a good starting point, especially now that the main USA influence channel, GB, is out.

Please give up the false hope and act now. Get out of that failed state! Freedom cannot be rebuilt in a fascist system without help from the outside - you can help much better from outside!

People who still stay in the USA will be seen as collaborators by history. The window of opportunity is closing; hurry and get out asap. Help defend freedom in other places!

19
A Wait-Free Stack arxiv.org
278 points by EvgeniyZh  2 days ago   62 comments top 12
1
kumagi 2 days ago 2 replies      
I already wrote a wait-free stack: https://gist.github.com/kumagi/d259274270fdc1385f81
It is much more difficult than a lock-free stack: https://gist.github.com/kumagi/b9a4715b1ce0dd511922

And published it as a book (in Japanese, sorry): http://longgate.co.jp/books/grimoire-vol3.html

2
duiker101 2 days ago 4 replies      
So uhm, the top 2 links on HN in the last 5 hours have been about this, and while they are highly voted there is very little discussion going on. Can someone give some context as to why this is attracting so much (surely deserved) attention? How will this affect us? Is it likely to have a deep effect on general computing performance? In what ways will this be applied? I'm sorry if these are silly questions, but I find the lack of discussion interesting.
3
bnjmn 2 days ago 1 reply      
Make sure you check out Appendix A at the end of the paper (Asymptotic Worst-Case Time Complexity), in case you imagined the name "stack" implies constant-time push/pop performance. This data structure is only a "stack" in the sense that it provides last-in-first-out access.
4
zilchers 2 days ago 3 replies      
I'm not sure I see what's novel about this - maybe it's a verbiage thing around "wait free," but if they're atomically updating the top pointer and linked list, there will be lock contention on writes, and similarly when marking an item popped, on reads. I suppose the contention is bounded by the number of readers or writers, but I wouldn't consider that wait free (again, that could just be a verbiage thing). But, more to the point, this is just a slight twist on how Kafka works (stack vs log / queue, but same with the pointer holding place and a cleanup operation), I don't really see it as particularly novel...perhaps I'm missing something?
5
nemetroid 2 days ago 0 replies      
The first few lines of pop() are:

 mytop <- top.get()
 curr <- mytop
 while curr != sentinel do
   mark <- curr.mark.getAndSet(true)
   ...
What's keeping curr from becoming a dangling pointer if the size of the stack is bounded?

6
programmer_dude 2 days ago 2 replies      
What is a wait-free stack?
7
amaks 2 days ago 0 replies      
How often would you find a situation where a lock-free queue or stack would bring huge performance gains? Usually bad performance comes from a poor choice of data structure(s), bad data locality, or locking that is too coarse or too fine, causing livelocks/convoys/excessive context switches etc. What I'm saying is that using a lock-free or wait-free algorithm is not a panacea.
8
xchip 2 days ago 1 reply      
I love papers but I love it more when their code is in github :) thanks for sharing!
9
appleflaxen 2 days ago 1 reply      
10
pzh 2 days ago 1 reply      
Correct me if I'm wrong, but wasn't this already described in The Art of Multiprocessor Programming?

https://www.amazon.com/gp/aw/d/0123973376/ref=mp_s_a_1_1?ie=...

11
snarfy 2 days ago 1 reply      
> Subsequently, it is lazily deleted by a cleanup operation.

So, it's wait free until this happens?

12
cia48621793 2 days ago 1 reply      
We've banned this account for violating the HN guidelines.
20
Google will use Chrome browsing data for ad tailoring twitter.com
317 points by dchest  2 days ago   200 comments top 29
1
corecoder 1 day ago 13 replies      
I don't know, I'm starting to think that all this is either propaganda for ad sellers or a prank Google, Facebook et al. are doing to me.

Google has my full search history, all my hangout chats, all my e-mails, and yet:

* On my Android, it keeps proposing ultra boring "stories to read" about soccer, wannabe celebrities and YouTube videos of stupid teenagers doing stupid things;

* With the sole exception of when the sponsored link is exactly the same as the first result, ad words in my search results have never ever been relevant or interesting in any way.

Facebook is supposed to know everything I like, yet it only shows me ads about stuff I dislike.

The same for Twitter and everything else.

They are supposed to know the inside of my heart and mind, but they have, till now, utterly failed to prove it.

So, do they know me and just pretend they don't for some strange reason, or do they actually know shit and just pretend they do so that they can sell to advertisers at a higher price?

2
dochtman 1 day ago 7 replies      
Meanwhile, Firefox Sync encrypts your data with a password-derived key so that Mozilla can't even see your browsing history.

Consider your choices.

3
sesutton 2 days ago 5 replies      
The way the screenshot is cropped is misleading. The actual page makes it clear this feature is opt-in.

If you're logged into a Google account and haven't already made a choice on the page you can see it at http://www.google.com/settings/ads.

4
neotek 1 day ago 5 replies      
You can't go to Google and say "I want to buy a dataset (anonymous or otherwise) of males aged between 18 to 34 who likes cars and drink beer".

You can go to Google and say "here's an ad I want you to show to a group of people, none of whom will ever be identifiable to me in any way, who are male aged between 18 to 34 who like cars and drink beer."

That's what I don't understand about all this outrage - nobody, not Facebook or Google or Amazon or Apple, is selling your personal information to anyone for any reason, all they're doing is providing a platform that lets advertisers specify broad categories of people to show ads to. What's the problem with that? Ads will be slightly more relevant to my interests? So what?

To be clear, I completely understand why people don't want Google collecting their data in the first place, and that's a perfectly legitimate concern, I'm saying that once the data is collected, what difference does it make if that data is used to refine which adverts you see?

5
jalami 2 days ago 2 replies      
There's Chromium Inox[0] which is just a patchset on top of the Chromium build to remove much of the mothership home-calling. Inox seems to be much closer to Chromium than many of the other Chrome-privacy spinoffs. Contrary to popular belief, Chromium still has a lot of Google in it.

I always feel like artificially patching a project that doesn't natively care about your concerns is plugging holes in a sponge boat, but I realize sometimes you need to use that boat, because reasons.

I installed it from the AUR and it seems to work pretty well for the testing I do in it, anyway. I don't daily-drive it or Chrome.

[0] https://github.com/gcarq/inox-patchset

6
zimbatm 1 day ago 2 replies      
If you have the time, try Firefox Developer Edition[1]. It's one of the first editions of Firefox to have per-tab sand-boxing enabled (called Electrolysis).

Some plugins can interfere with Electrolysis. To check if it's enabled go to about:support and look at the "Multiprocess Windows" entry, it should be 1/1 or higher.

https://www.mozilla.org/en-US/firefox/developer/

7
dangrossman 2 days ago 0 replies      
I saw this screen this week. I recently reinstalled Windows and Chrome, that might have been when. It was asking me to opt in, not opt out, before sharing this data.
8
lensi 1 day ago 2 replies      
Why is it that most developers use Chrome when we have Firefox? If the devtools are better, use Chrome for that and use Firefox for browsing.
9
wangweij 2 days ago 4 replies      
While I am able to switch to another browser (I already did a long time ago), I don't believe I can avoid visiting sites "that show ads from Google". How much can Google collect from those sites?
10
HappyFunGuy 2 days ago 3 replies      
Am I correct in assuming that chrome is now a keylogger in regards to your url bar, and perhaps clipboard?
11
0xmohit 1 day ago 1 reply      
Welcome to the world of context-sensitive search.

Not that it's new, but it appears that the context has now been expanded to include the entire browsing history, just not the session.

[One should really consider using Firefox, and DuckDuckGo for search.]

12
xbmcuser 2 days ago 0 replies      
Wait they were not doing it already
13
Brainix 2 days ago 4 replies      
Dear 2016,

You haven't figured it out yet, but we consider advertising unethical.

Yours,2066

14
yefim 2 days ago 5 replies      
Does this apply to Chromium? Because if not, I'm definitely switching.
15
_stuart 2 days ago 0 replies      
You can tell Google not to save your chrome browsing info at myactivity.google.com in the Activity Controls tab.
16
incrediblygood 2 days ago 4 replies      
I recommend Opera. Same rendering engine as Chrome, but faster and with built-in ad blocking.
17
bestnameever 1 day ago 0 replies      
I don't think this is just Chrome. They will likely also track you through other browsers as long as you are signed into your account. Many sites have your browser reach out to Google through things like adsense advertisements and google-analytics.
18
_Understated_ 1 day ago 0 replies      
I thought Chrome allocated a unique identifier to each browser install anyway meaning that changing this setting still allows them to track you regardless.

Edit: Turns out that the Unique Id is an install-only thing and is gone after the first update [1] (look at "Identifiers in Chrome" section) but it appears they can conduct "Field Trials" without your knowledge (certainly appears to be without your knowledge from what I can see)

Edit 2: removed pointless text

[1] - https://www.google.com/chrome/browser/privacy/

19
oolongCat 2 days ago 2 replies      
Whenever I see these things I wonder: are there any plugins or automated tools that would help fuzz search results?

So for example, send search requests to Google randomly for items like "ducks", "fishing ponds", "banana leaves" etc - totally unrelated nonsense that will skew these tracking giants' profiles, provided enough people install and run these tools.
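There are tools in this spirit (the TrackMeNot browser extension, for example, issues randomized decoy queries). A rough Python sketch of the idea against a hypothetical search endpoint - purely illustrative, and automated queries may violate a search engine's terms of service:

  import random, time, urllib.parse, urllib.request

  decoys = ["ducks", "fishing ponds", "banana leaves", "garden gnomes", "kite repair"]

  def send_decoy_query():
      term = random.choice(decoys)
      url = "https://search.example.com/search?q=" + urllib.parse.quote(term)  # hypothetical endpoint
      urllib.request.urlopen(url, timeout=10).read()

  while True:
      send_decoy_query()
      time.sleep(random.uniform(600, 3600))   # random intervals, to look less robotic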

20
therealmarv 1 day ago 0 replies      
I think a Chrome sync passphrase will protect you against that. Because (theoretically, and I hope also practically) only you know your passphrase, Google will not be able to analyze your history etc.
21
KaneEdger 18 hours ago 0 replies      
Brave and Vivaldi. They do not get that Google crap :D
22
yaur 2 days ago 0 replies      
OTOH when I got the opt in page I took the opportunity to opt out of everything.
23
andrewvijay 2 days ago 0 replies      
Time for another open-source browser, but this time one which does not drink all the RAM. Probably with the same dev tools.
24
wfunction 2 days ago 1 reply      
I can't actually find this screen anywhere.
25
dreidohlen 1 day ago 0 replies      
Incognito mode?
26
Havoc 1 day ago 1 reply      
Google is straying from its Don't Be Evil path...
27
malmsteen 1 day ago 0 replies      
Ad? What's that? Oh, that thing before "block".

I heard about that.. it's a TV thing, no?

28
metaos 2 days ago 2 replies      
Vivaldi is an excellent replacement
29
koolba 2 days ago 0 replies      
Where's the man bites dog?
21
The Surprising Ease of Plain Text Accounting vincecima.com
309 points by vincecima  1 day ago   230 comments top 25
1
somberi 18 hours ago 7 replies      
As a middle-aged, married man, my current view is:

1. I had kept personal accounts for a while, until I realized my expenses swayed month-to-month, but within a narrow band (+- 10%). This made the "insights" less interesting and approximations worked well enough for my use case. I suspect that this is so, for most people.

For those who have not kept personal accounts, what I am describing is similar to when you got your Fitbit - the first 2-3 weeks are full of glorious insights, then a month goes by and you do not really need a Fitbit to tell you how many steps you walked. Approximation approaches the real count. Same for personal ledgers.

2. I now focus my efforts on income expansion rather than cost control. Some of you may say, recording costs is not controlling it. In that case, I find accounting, even less useful (once I have arrived at approximate cash outflow). Income expansion, puts me in a different frame, which I find strangely empowering. Even more so than the satisfaction that comes from having controlled costs.

2
vinceguidry 23 hours ago 28 replies      
Accounting for personal finances is a waste of time, in my opinion, if you plan to be upwardly mobile. Instead of making the most of what you have, just make more. Spend all the time and resources you need to to make yourself better. In five years, I've gone from making $30K / year to over $100K. Accounting will not get you there. Removing all the barriers in the way of success will.

Businesses need accounting. Humans need nurturing and growth. Don't nurture your business and account for yourself. Account for the business and nurture yourself. Your business can't love you back.

3
chj 21 hours ago 4 replies      
Seriously, spreadsheets are born for these tasks.

I tried accounting apps but didn't like them because they force you to do too much work and it just isn't that easy to find the functions I need. In the end I designed an Excel file tailored to my needs (and it's giving me more fun than writing code), which gives a monthly view of my financial activities, with support for currency conversion. So at the end of every month I spend about half an hour logging all the numbers and get a nice overview. I can also put in spending and earning projections so I can easily see where I will be at the end of the year.

4
ams6110 1 day ago 0 replies      
A few years ago I volunteered to be treasurer of a small non-profit. They had been keeping their books on paper. I looked at QuickBooks, and it frankly baffled me. Being an old hand at command line and plain-text computing, I found ledger and immediately loved it. I used the emacs ledger mode to manage the ledger file. I kept the ledger file version controlled in mercurial, so I could easily see diffs between various points in time which was sometimes helpful.

When I turned the books over to the next treasurer, I helped him import into QuickBooks, which he was already familiar with. That's the one shortcoming of the tool, that you are unlikely to find anyone else (e.g. professional accountants) who's willing to work with it.

5
kovrik 1 day ago 13 replies      
These accounting apps never worked for me. I think I'm missing something, but how do you actually use them?

Do you really put every cent you spend into it? How is that possible? How do you remember all that, for example, during a weekend trip?

Next step: ok, assume we put every cent spent into the app. How does it help?

Even more: I already have all my transactions listed in my bank's mobile app. How to use it?

6
preek 19 hours ago 1 reply      
We are actually using this in our company and couldn't be happier. We're using Emacs ledger-mode and git to keep track of everything.

Ledger is also tied into our billing system. A couple of shell scripts and Latex easily beats everything that I have seen in the last 10 years in the market. That is, of course, a subjective statement and is meant to be such.

7
cyberferret 1 day ago 2 replies      
This is basically the old EMACS/Vim vs Sublime/Atom debate that developers have battled for years, moved across to the accounting world, isn't it?

Having been involved in accounting system implementation for > 30 years, I understand that people are usually used to doing things one way and find it hard to adapt to change, but even after all my years in the industry, command line accounting fills me with unease - and this is coming from someone who still loves working in the DOS prompt!

I am sure for a simple expense/budget ledger it will work OK, but when it comes to recurring journals, multiple reconciliation accounts, inter company transfers, control account tracing etc., give me a nice GUI any day...

8
noufalibrahim 22 hours ago 0 replies      
This article mirrors much of my own experiences with using ledger. I wrote about my setup here http://nibrahim.net.in/2015/11/07/ledger_and_personal_financ...
9
steve_taylor 22 hours ago 0 replies      
Slight correction, if I may. Interest is piqued, not peaked.
10
Chickenosaurus 14 hours ago 0 replies      
Related mini Show HN: Accounting is a small Java double-entry bookkeeping component.

Accounting is supposed to be integrated in applications that can make use of an accounting feature. It is not a stand-alone application. Accounting makes as few assumptions as possible about the technologies used, which enables it to be used very flexibly. On the other hand, this means you will likely have to write glue code to integrate accounting into your application landscape.

Check out accounting on GitHub: https://github.com/Nick-Triller/accounting

11
wesm 23 hours ago 1 reply      
On plain text accounting, you should check out Beancount (I am unaffiliated, but know the author): http://furius.ca/beancount/
12
dade_ 18 hours ago 0 replies      
I still remember cringing the day I finally had to move my parents' business off CA Simply Accounting for DOS. There was definitely a learning curve for the text-based app, but it took far fewer steps to actually input the data and print off cheques, etc. 15 years of accounting easily fit on a 1.44MB floppy disk. Now, it's a monster database with a terrible UI. In subsequent years Simply has become anything but simple, and I cringe at the thought of giving in and moving to the cloud. Ledger doesn't look like a great fit for business though, and accountants want data from accounting packages they know. Quite a racket.
13
aklemm 1 day ago 4 replies      
I'm right on the verge of going with the new YNAB (first serious attempt to organize finances and budget outside my wife's use of Mint), but maybe I'll give Ledger a shot first.
14
Pamar 22 hours ago 0 replies      
Here is a short dscription of how I do this: https://news.ycombinator.com/item?id=12085755

(It also works for repeating or future expenses - I often "initialize" future files with stuff like rent or an average monthly expense for stuff like groceries, then I update it with receipts I have collected on the last few days)

15
rosstex 23 hours ago 0 replies      
I love the use of asciinema in this. It's handy for me all the time when I want to show a friend something.
16
arvinsim 1 day ago 2 replies      
I am using Spacemacs, so I like the idea of using plain text files. But until there is a way to easily input new transactions from my smartphone, I will probably stick with classic YNAB for the foreseeable future. Otherwise, I might have to learn Android programming and create my own app to do so.
17
kilroy123 22 hours ago 0 replies      
I might try this out. Right now, I'm using a combo of mint and my own google spreadsheet. Mint.com with the mojito plugin for chrome:

https://chrome.google.com/webstore/detail/mojito-mint-with-a...

The mojito google spreadsheets look amazing, but I couldn't get things to work, and ran into a lot of errors.

The plethora of budget tracking software is frustrating. It seems no one tool is a clear winner. To me, this still looks painful. Mint is closest there, IMO. However, it's pretty much abandonware at this point.

18
dbrgn 15 hours ago 0 replies      
19
annnnd 15 hours ago 0 replies      
> I had tried similar services before (Mint, Buxfer, PocketSmith, etc) and never found the sacrifice of privacy worth it....> With YNAB 4 now unsupported, I started looking for alternatives. Would I finally surrender to Intuit and start using Quicken? Was a Google Docs spreadsheet enough to get me through?

So OP has privacy concerns with accounting services, but not with Google? Interesting.

20
graeme 14 hours ago 0 replies      
Can anyone comment on experiences with Quicken? I had been planning to set up accounting with Xero, but I don't like cloud services for something that requires focus. It can be more difficult to concentrate on something in the browser.

Quicken is the only desktop app that works for Canada and Quebec, I think. How is it?

I'm on Mac, but I can run it on a VM.

21
CorvusCrypto 23 hours ago 2 replies      
So what is this sacrifice of privacy the author mentions? I mean is this a legitimate statement? While I can sympathize with the author's mindset this seems like a loaded statement based on nothing. If I'm wrong and Mint is leaking my info like a sieve please tell me now.
22
tdkl 22 hours ago 2 replies      
Cool. Now add those on the fly on a mobile device. The YNAB app does it in a couple of taps.

Also YNAB 4 won't just suddenly stop working.

23
smichael 7 hours ago 1 reply      
hledger author and plain text accounting fan here! Thanks for this nice blog post and interesting discussion. There are some great questions raised here; perhaps I can steal them for a FAQ? Here are some answers:

Isn't personal accounting a waste of time?

People have very different needs and practise personal accounting for many different reasons. There is of course a point of diminishing returns; tailor your accounting practices to your needs. Needs change over time. Some of us would benefit from doing more (or better) accounting, some less (I would guess this second group is smaller).

In [The Millionaire Next Door](https://en.wikipedia.org/wiki/The_Millionaire_Next_Door) (highly recommended), one research finding was that above-average wealth accumulators spend more time on financial planning, which for many of us requires accounting as a foundation. "Minimal time dedicated to financial planning is a leading indicator of a UAW [Under Accumulator of Wealth]".

Isn't a command-line tool too limited for real-world accounting needs ?

"I am sure for a simple expense/budget ledger it will work OK, butwhen it comes to recurring journals, multiple reconciliationaccounts, inter company transfers, control account tracing etc.,give me a nice GUI any day..."

Understandable. The current plain text accounting tools provide avery generic double entry accounting system with which you canmodel such things, and script them.

There are a number of generic GUIs available (hledger has cursesand web interfaces, and there are web/curses/GTK interfaces forLedger and beancount). But there are not yet a lot of richtask-specific GUIs. There's no reason they can't be built, though.

Isn't a plain text format too limited for large organizations ?

"it's pretty obvious that plain-text files don't scale to amultinational, with hundreds of accountants of various types alltrying to work with the same files. Even with proper use of Git Ibet that would get old fast. You would instead want a realdatabase, with a schema, and some data validation and someprograms/webpages to smooth out the data entry and querying andwhatnot."

I'm not sure.Current plain text accounting tools can do some schema definitionand data validation, and will do more in future.The plain text storage format is open, human-readable,future-proof (useful even without the software), scales smoothlyfrom simple to complex needs, and taps a huge ecosystem of highlyuseful tooling, such as version control systems.And, despite the name, there's no reason these tools can't support other kinds of storage, such as a database.(hledger has four storage formats and is designed to accept more).

Do you really enter every little transaction ?

Yes! Many folks in our community do it. Mahatma Gandhi reconciled to the penny every night. J.D. Rockefeller was famous for his ledgers.

It's not required. I started doing it as a temporary learning exercise, and still like it.It makes troubleshooting and reconciling easier.

How is that possible ?

Practice, and a process/toolset that suits you. Some folks import most of the data from their banks, so little manual data entry is required.A few prefer to manually enter everything, for the increased awareness and insight."Manual" data entry is usually assisted in some way: interactive console tools (hledger add and similar),web-based tools (hledger-web and similar), GUI tools (ledgerhelpers), smart editors (eg emacs & ledger-mode),recurring transaction scripts.I currently use a mixture of bank CSV import and rapid copy/paste in emacs.I spend 15 minutes a day on average, and for me that's currently a good investment.

How do I use the transaction data in my bank's web or mobile app ?

If you can export it as CSV, you can import it and run queries against it.There are also some tools for converting OFX, QIF etc.

So I've got a huge list of transactions recorded, duplicating my bank statements. How does that help ?

Accounting is modelling flows of money (or other value).Such a model aggregates information from many sources, in one trusted place.With it you can efficiently generate reports, forecast things (cashflow!), answer questions, try experiments.

Some people need a very simple model, others benefit from a more detailed one, and we don't know up front what we might need in future.The most fundamental accounting data is a simple list of transactions (the journal).Once you have captured this, you can mine it for anything you may want later on.

Plain text accounting provides nice open data format(s), tools and practices for doing this, and could be a good foundation for more powerful tools.

Where can I see a comparison of hledger, Ledger, beancount, and the rest ?

Glad you asked! http://plaintextaccounting.org gives an overview of these tools and practices.In particular see "Ledger-likes" and "comparisons" under "read more".hledger's FAQ discusses differences from Ledger. Beancount docs probably do too.

How do you do budgeting ?

Some notes are [here](http://plaintextaccounting.org/#budgeting).I emulate YNAB-ish envelope budgetting (see third link).

Double entry accounting ? Where are the debits and credits ?

Most (not all) plain text accounting implementations use signed amountsinstead of debits and credits. This makes them "double entry light" perhaps, but it has been a rather successful simplification, intuitive to most newcomers.

24
bakhy 16 hours ago 0 replies      
the repeated usage of the phrase "my net worth" stings. i presume it's just a turn of phrase and does not really mean much, but it's ugly. you're worth more than any dollar amount! ;)
25
Dr_tldr 22 hours ago 2 replies      
Personal attacks, which this crosses into, are not allowed on Hacker News. We ban accounts that do this, so please don't do it.

We detached this subthread from https://news.ycombinator.com/item?id=12119693 and marked it off-topic.

22
Dissonant tones sound fine to people not raised on Western music arstechnica.com
241 points by shawndumas  3 days ago   84 comments top 28
1
TheOtherHobbes 3 days ago 2 replies      
Most of these chords aren't truly dissonant. They're very rare in pre-romantic classical music, but in jazz/pop theory they're considered basic triads with added colour notes, not weird exotic sounds from musical hell.

Most people will have grown up hearing sequences that use mostly 7ths and/or 9ths, and won't think of them as exotic or strange at all.

The ones that are dissonant - like the Dom 7#11, and some of the clusters - fall later down the preference list, just as you'd expect them to.

The simplest dissonant chords are the basic augmented and diminished triads, and the authors didn't include them in the tests - which is a bizarre and curious omission.

2
weinzierl 3 days ago 3 replies      
Quote from the arstechnica article:

> This suggests that preference for consonance over dissonance isnt baked into universal human auditory processing, but is rather something we develop by being exposed to certain kinds of frequency relationships in the music that we hear.

No, that is not a conclusion that can be drawn from the experiment and not the conclusion the paper draws.

Quote from the paper:

> The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music [...]

From what I understand the experiment shows that there are peoples that have not developed harmony until now. They make music, but they only play notes in a sequence never simultaneously.This is not surprising, western music developed real harmony only in the modern era, music of the middle ages was mostly melodic.

That they have no harmony does not mean that preference for consonance over dissonance isnt baked into universal human auditory processing because if they would develop harmony they might end up with the same preferences we have because we share the same human auditory processing.

3
mrob 3 days ago 2 replies      
Although the article talks about pitch ratios, there's no evidence that the human brain contains any hardware for pitch ratio detection. Consonance of simple ratios is a result of a simpler underlying rule, explained by William Sethares here:

http://sethares.engr.wisc.edu/consemi.html

Note that this only happens with musical instruments featuring overtones based around the harmonic series, which usually means instruments based on vibrating strings or vibrating columns of air. These are by far the most common instruments in Western music, but this is not the case for cultures that heavily use tuned percussion. Tuned percussion can easily have enharmonic timbres, which require different tuning systems for control of consonance and dissonance.

Note that Sethares' theory says nothing about dissonance being good or bad, and this new study says nothing about whether Tsimane people are capable of detecting dissonance or not. As the vast majority of humans can detect consonance and dissonance, including babies (see https://www.researchgate.net/publication/222455086_'Infants'... ), I would personally be surprised if they were incapable of perceiving it. I think it's more likely that their culture simply does not value consonance. The same is true of some Western subcultures. Atonal composition is high status with many music academics, and it's genuinely enjoyable to many of them. I've enjoyed listening to atonal compositions myself.

4
kazinator 3 days ago 1 reply      
Dissonant tones also sound fine to people raised on Western music, which contains dissonant tones.

"Dissonant" is not a synonym for "not fine".

Culture doesn't fix the fact that if you play a perfect perfect fifth (exactly 2:3 frequency mix) the blend is very homogeneous, whereas if the ratio is off, you hear "beats" in it. Those beats are not produced by your upbringing.

5
Sylos 3 days ago 1 reply      
I would be interested in their reaction to two notes being played which are just slightly out of tune.

Because it definitely seems like something which can be trained - professional musicians often have a pain-like reaction to it, while your Average Joe just notices that it sounds bad.

But at the same time, your Average Joe still notices that it sounds bad, and yeah, just like with dissonances, it's hard to imagine someone who doesn't.

Personally I do think that they would dislike it, because I have this theory, which is somewhat similar to / a generalization of the Uncanny Valley theory [0]. That theory is that we fundamentally dislike anything which is not similar enough to something else to be perceived as the same thing, but neither different enough from it to be clearly distinguishable as something different.

So, no matter if that's two tones which are slightly out of tune, a painting with bright blue right next to baby blue, a shadow in the dark, or rice with noodles.

[0]: https://en.wikipedia.org/wiki/Uncanny_valley

6
SuperPaintMan 3 days ago 3 replies      
This pops up in abstracted painting as well, colours (more specfically the feelings/emotions we ascribe to a particular combunations) are entirely created due our cultural context. The greatest example of this is the colour of death, Black for western contries, White for eastern.

Western fine art is more formulaic in it's creation, due to training using colour wheels (complementary across, traidic and other geometric harmonies) and a analytic notion of these harmonies.Even the pleasing ratios of area are codified (5:1 Yellow:Purple for weight balence) Itten wrote quite a few books on balence/design and became a major influence on modern art. Contrast this with naieve/primal forms created by those without that training.

One size fits all does not work for the arts. It cant be a product that is objectively correct.

Now if someone played a diasonant frequency and time the beat such that it fell in line with the tempo of a composition. That would be a cool way to add structure to a work. No idea on how to do the math here, but how many cent under standard tuning would generate this for an andante tempo?

7
dahart 3 days ago 0 replies      
> Tsimane music also doesnt make use of harmony: only one series of notes is played at a time, so the relationships between notes dont matter in their musical tradition.

This makes me wonder if the Tsimane don't hear dissonant harmony, but just two different melodies, like two different people talking at once rather than a chord.

Maybe in a way this is similar but opposite to how it takes a little bit of listening before you can hear Tuvan throat singing clearly - at first it sounds like one note, but after a while you get better at hearing the overtones they're modulating.

Personally, I find that some chords that sound dissonant when played alone and out of context turn around completely and sound gorgeous and intriguing when played as part of a piece.

Minor 6 chords are amazing in the right context, and diminished and augmented scales too. Some of my favorite Bach pieces are the ones that hit a measure or two of fully diminished key.

Is it possible that part of the surprise here is that we've overstated dissonance as something negative, and led ourselves to expect something "dissonant" to be more cacophonous than it really is? I do prefer describing music more in terms of tension and resolution than consonance and dissonance. And tension is a critical part for the resolution to have it's full effect and power, it provides the contrast, and it often comes with as much or more subtlety and beauty, as far as I can tell.

8
ollybee 3 days ago 0 replies      
There was a dicsussion and interview with studys author (srongly named as Johsn in the arstechnica aticle) recently on BBC radio 4's Inside cience. http://www.bbc.co.uk/programmes/b07jqr1y 15:10
9
nabla9 3 days ago 1 reply      
They should test how well if Tsimane people are able to differentiate between consonant harmony and dissonant harmony.

It's weird that if you go all the trouble travel into far away places to test things, but you don't test for that.

10
ChuckMcM 3 days ago 0 replies      
It makes me wonder about the birds. To my ear birdsong is typically more consonant in general use and dissonant in alarm or alert use. But a quick search did not turn up a lot of research in that space.
11
bbgm 3 days ago 0 replies      
It's not just for western music. Hindustani music is more "quantized" around scales than Carnatic music, and many North Indians find it harder to listen to Carnatic music for that reason.
12
elihu 3 days ago 0 replies      
> To explore the effects of harmonicity and roughness found in Study 1, we measured pleasantness ratings for pairs of pure tones (single frequencies) separated by intervals from the chromatic scale (08 semitones)21 (Fig. 3f). This range includes some consonant intervals, for which the tone frequencies approximate harmonics of a common fundamental (and are thus related by simple integer ratios), and some dissonant intervals, for which the tone frequencies are inharmonic.

So... they went and found a group of people unfamiliar with western music, and then tested which of the intervals from 12-tone-equal-temperament sound good to them?

What would be the result if we were to repeat the test with people familiar with western music, but the notes were taken from 13-tone-equal-temperament? What conclusions (if any) could we draw from that about whether those familiar with western music appreciate harmony?

Granted, notes like the equal-tempered major third are probably close enough to their just equivalents (such as the 5:4 major third) that to someone who was familiar with the latter, the former would probably be recognized as such even to one who had never heard equal-tempered intervals before, but still they'd probably sound weirdly out-of-tune. (The ET third is about 15 cents sharp of the just third.) Why not just test with just intervals, which are more universal/fundamental than equal tempered intervals, which are sort of a modern "good enough" compromise we use for convenience.

13
djhworld 3 days ago 0 replies      
That Bobby McFerrin video is one of my favourite videos on YouTube, never fails to make me smile
14
white-flame 3 days ago 1 reply      
> Musical perception is, surprisingly, not shared by all humans.

s/surprisingly/unsurprisingly/

15
fiatjaf 3 days ago 0 replies      

 With cross-cultural tasks, there is always a risk that participants dont have the right idea about what they should be doing. To control for this, the researchers also played participants vocal noises and found that all groups preferred laughter rather than gasps.
What? It is a much different thing noticing differences between melodies and differences between laughs and gasps.

16
tamana 3 days ago 0 replies      
17
gluelogic 3 days ago 0 replies      
The idea that a culture can arbitrary accept "dissonant tones" as long as they have not been conditioned otherwise suggests that I can button-mash a keyboard and maybe they would find it pleasurable.

Perhaps consonance and dissonance are concepts that are largely defined by musical circumstances. If that's the case, then it doesn't make sense to approach the problem with a sort of western definition of dissonance.

In other words, I think it's safe to assume every culture has an idea of what is tasteful and what is not in their music. The way I see it, if you take that position, then you can also assume the culture has their own definitions of what is consonant and what is dissonant. These may or may not coincide with the western theory of common practice period harmony.

18
hellofunk 3 days ago 0 replies      
The irony is that the Western concept of something that is "in tune" is actually dissonant to the same Western cultures of the past. Well-tempered tuning has only been around since Bach's time. Our music today would not sound right to those during or before his time, in general.
19
mmaunder 3 days ago 0 replies      
Hard to believe a 5th doesn't sound sweet(er) to all. Even Pythagoras thought so.
20
scandox 3 days ago 0 replies      
I noticed my four year old daughter relishes quite outlandish and dissonant modern classical music. The sort of stuff most people ask me to shut off the moment it starts.
21
plorg 3 days ago 1 reply      
They used to teach this as an open enrollment course in college (at least mine, which was not atypical of the larger liberal arts college scene). It was called "music of non-western cultures", and pretty much everyone who took it came out with the opinion that this non-western music was pretty interesting and cool.

I mean, c'mon. Anyone who has been exposed to the concept of non-western music will know that the 12-tone scale is not a universal construction.

22
wcoenen 3 days ago 0 replies      
Too bad that the article doesn't even attempt to explain what is going on mathematically. Dissonance is closely related to beating[1], the phenomenon where the sum of two slightly different frequencies results in an oscillation at a low frequency.

[1] https://en.wikipedia.org/wiki/Beat_%28acoustics%29

23
fiatjaf 3 days ago 1 reply      
Or we could interpret it into saying that cultures with less developed musical culture, like this Tsimane, do not notice very well the subtleties of the songs, and may well find any strange noise "good", just like the young people in our culture (it doesn't matter that the strange noises young people hear are always tonal, they are used here just as an example).
24
geff82 3 days ago 0 replies      
Being married to an Iranian, I hear a lot of persian music. Especially the traditional one sounds really "dissonant" to my western ear and it took me years to accustom. Of course for my wife and all of her family persian tunes sound perfectly harmonious.
25
cel1ne 3 days ago 4 replies      
Not really news.

Also quarter-tone music sounds weird to western ears and is common around the world.

26
pessimizer 3 days ago 0 replies      
27
kingkawn 3 days ago 0 replies      
More glory for semantics at the expense of understanding
28
fiatjaf 3 days ago 0 replies      
This seems like clickbait. I'll read it, however.
23
Microsoft REST API Guidelines github.com
276 points by alpb  11 hours ago   112 comments top 14
1
deathanatos 10 hours ago 5 replies      
Pagination is one of those things I feel like so many of these things get wrong. LIMIT/OFFSET (or as MS likes to call it, TOP/SEEK) style results in O(n) operations; an automated tool trying to pull the entirety of a large collection in such a scenario is not good. I have to again recommend the excellent "Pagination done the Right Way" presentation[1] from Use the Index, Luke (an equally excellent site).

Just return a link to the "next page" (make the next page opaque); while this removes the ability to of the client to go to an arbitrary page on its own, in practice, I've never seen that matter, and the generalized URL format allows you to seek instead of by LIMIT/OFFSET by something indexable by the DB; HTTP even includes a standardized header for just this purpose[2].

I also think the generalized URL is more in line with Fielding's definition of REST, in that the response links to another resource. (I don't know if being in the header invalidates this; to me, it does not.)

If you get the metadata out of the data section of your response, and move it to the headers, where it belongs, this usually then lets you keep the "collection" of items you are return as an array (because if you need to put metadata along side it, you need:

 { "the_data_you_really_want": [1, 2, 3], "metadata, e.g., pagination": {} }
vs.

 ["the", "data", "you", "wanted", "in a sensible format"]
)

(I've seen metadata-pushed-into-the-body turn API endpoints that literally need to return no more than "true" or "false" into object that then force a client to know and look up the correct key in an object sigh.)

[1]: http://use-the-index-luke.com/no-offset

[2]: https://tools.ietf.org/html/rfc5988

2
mythz 10 hours ago 3 replies      
Funny that Microsoft uses OData as an example of a bad URL:

https://github.com/Microsoft/api-guidelines/blob/master/Guid...

Whilst continuing to praise OData as "the best way to REST" - http://www.odata.org

3
daxfohl 10 hours ago 1 reply      
Well presented. It would be great if there was a language / framework that made this guaranteed. As-is everything just returns 500 error on any exception, lets you return errors with 200, allows update on GET, etc. Even the Haskell ones.
4
blakeyrat 9 hours ago 3 replies      
I'd love to see some guidance on how you're supposed to do file uploads (think: uploading an image avatar for a user) and fit it into the REST structure. These guidelines don't even mention "x-www-form-urlencoded" except in a bit of fluff about CORS.

Made more frustrating by Microsoft's own WebAPI2 basically having zero support for file uploads, meaning we had to go way outside the library to code support for it.

Not sure why that's such a blind spot. Doesn't every web service need to upload files eventually?

5
iheart2code 10 hours ago 2 replies      
Very cool document. I kind of got stuck at delta queries, though. How do you implement that? I can't find any reference to delta/removed queries on Mongo, Postgres, or MySQL. Do you just keep all records in the database and add a "removed" field? How would that solution work with user privacy & users truly wanting to remove data?
6
spdustin 3 hours ago 0 replies      
Seems ... strange to bash their own "Best Way to REST" offering, OData [0]

[0]: http://www.odata.org

7
captn3m0 7 hours ago 0 replies      
Interesting to see Microsoft talk about "human-readable URLs". I still remember the mess kb.microsoft (or the surrounding MS infra) was at a time.

Nice to see them support such stuff, still.

8
mikro2nd 10 hours ago 3 replies      
Funny thing: I've been thinking a bit about API versioning quite a bit lately, and the best solution I've come up with is the ONE thing not at all covered in this: put an `api-version` header into the request. I've seen both of the schemes recommended here, and I like neither very much. So what's wrong with my (header) solution?
9
raz32dust 2 hours ago 0 replies      
Jeez... anyone else find the caps hurting the eyes? Is it necessary? Sorry I have nothing constructive to add. Just had to bring this up.
10
danpalmer 10 hours ago 8 replies      
Being that guy again, (and sacrificing my karma) but...

This is not REST, it contains nothing about hypermedia, entities having knowledge of their URIs, or any way of discovering features of an API programmatically.

While I'm sure there's plenty of good stuff in here (it looks otherwise fairly comprehensive), APIs will continue to be a disparate system that requires custom code for every integration until we can embrace the full benefits of REST, and develop the technology and conventions/standards for integrating RESTful services.

Edit: for an example of what's possible with 'real' REST, check out this talk: https://vimeo.com/20781278 better documentation, easy parsing, API optimisation without client changes, discovery, and the API is fully human readable/usable as well.

11
yeukhon 10 hours ago 3 replies      
https://evertpot.com/dropbox-post-api/

This discuss some option for the GET vs POST.

In Kibana, everything is done over GET as get parameters, and I find that extremely annoying and a poor design.

A lot of public APIs also don't honor or have any intentions in supporting or using PATCH. Most APIs I have worked with only use PUT for modification. Anything resembles "creation" is automatically a POST.

12
intellix2 6 hours ago 0 replies      
Surprised people are still talking about REST after GraphQL
13
intellix2 6 hours ago 0 replies      
Surprised people are still talking about REST after Facebook unveiled GraphQL
14
zeveb 10 hours ago 1 reply      
Wow, they REALLY LIKE TO SHOUT THEIR HEADINGS.

Otherwise, what I've read so far looks like a really good start. Say what one will about Microsoft's products, but there are a lot of smart folks there.

24
Vim GIFs vimgifs.com
319 points by shawndumas  11 hours ago   88 comments top 14
1
kartD 9 hours ago 7 replies      
Great idea, vim is intuitive ( -ish?) once you understand what each character means ( y for yank being a simple example ). If anyone is looking to learn vim or any IDE really, check this place out. https://www.shortcutfoo.com/app/dojos/vim
2
naviehuynh 3 minutes ago 0 replies      
finally a comprehensive list of vim shortcuts
3
ue_ 9 hours ago 3 replies      
The performance could be increased (i.e less choppy) and filesizes could be decreased (i.e taking up less transfer of data for the same quality) by filming webm or similar rather than gif.

The only disadvantage visible is Internet Explorer users who don't have the plugin installed, in which case you could provide a gif fallback.

4
captn3m0 7 hours ago 0 replies      
This is a very creative (and definitely not bandwidth-happy) way of teaching vim. Bookmarked.
5
MrQuincle 8 hours ago 2 replies      
+1 Minor point. Somehow I first checked f, it gives me the description for b.
6
foota 8 hours ago 0 replies      
One of my coworkers once shared the use of set relativenumber, which causes line numbers to be relative to your current position in the file. Great for movements.
7
stockkid 4 hours ago 0 replies      
feedback: 'b' section: http://vimgifs.com/07-18-2016/b/ has instructions about some other keys. Similarly http://vimgifs.com/07-19-2016/g/ has instructions about 'b'.
8
devin 4 hours ago 1 reply      
I've used Vim, and I don't get it. I have no idea why this is useful. Is this about generating gifs? Why is the alphabet shown with dates?
9
hbz 5 hours ago 0 replies      
Does anybody know what program was used to generate the gifs?
10
sdegutis 9 hours ago 3 replies      
Speaking of vim, just a reminder that @antirez made a terminal based text editor in pure C with no dependencies, not even ncurses, in under 1000 lines of code, complete with search feature, and customizable syntax highlighting for variable number of languages.

https://github.com/antirez/kilo

Lots of people have forked it and are actively making it their very own editor. Sure, none of them may ever take off and become a new emacs or vim competitor. Or maybe they will. Let's not discount innovation.

Plus it's amazing to me that to make a full fledged terminal editor, almost all you really need is the ability to read from stdin and write to stdout. Only a tiny little bit of glue C code is needed (for handling signals like SIGWINCH, or for setting or unsetting raw mode in the terminal), but almost the entire rest of it can be written in, say, Lua.

11
zuck9 6 hours ago 9 replies      
I've never understood, what does Vim or Emacs have to offer that people use it over Atom or Sublime Text?

I think I can edit and navigate through text as fast as Vim users, using the native OS text editing hotkeys.

12
vegabook 8 hours ago 0 replies      
This is actually very good! My initial reaction was to roll my eyes at another Vim tutorial, but this suits my (low) attention span by giving me exactly what each letter does in a spoonfeed (read animated) way. Brilliant taxonomy (a-z) and brilliant bite-size aborption medium that doesn't require me to trawl through a tutorial. Bookmarked.
13
hosh 8 hours ago 1 reply      
This is a great idea!

At least until I started to seriously use it to level up my vi skills. Some of the links (say, for `a` or `g`) goes to the wrong places -- to the entry for `b`. When I stepped back and looked at how to navigate it, I realized the structure is disordered so I have no idea how to get to the info I'd like to see.

14
jegutman 9 hours ago 0 replies      
Wow, this is really cool.
25
SpaceX Picks Rocket for First Relaunch fortune.com
237 points by ValG  2 days ago   102 comments top 10
1
martythemaniak 2 days ago 2 replies      
Friendly reminder, they are launching a Dragon on an ISS resupply mission tonight at 1245am EST. Secondary mission is trying to land the first stage on Landing Zone 1, like the first landing in December.

You can watch it live on spacex.com or their YouTube or or ustream channel.

2
ohitsdom 2 days ago 2 replies      
It'd be HUGE if they can recover this rocket a second time. The data from a rocket that has flown twice would be priceless.
3
scarygliders 2 days ago 7 replies      
I imagine the refurbishment of a used Falcon 9 1st stage will involve a wee bit more than just a bit of spit and polish.

It'll be examining the rocket engines for faults, replacement of degraded parts, examining the whole structure for defects, and replacement of any other parts if required - however, that will still be cheaper than completely building a 1st stage from scratch.

Here's hoping the re-use will be a successful one!

4
gwern 2 days ago 2 replies      
I wonder who's going to sign up to put their cargo on the first refurb rocket? I suppose SpaceX will be offering steep discounts in order to prove the prototype.
5
bargl 1 day ago 1 reply      
One thing a lot of people don't think about is that the price of the payload isn't going down yet. The satellites that launch to GEO tend to be 300-500 million dollars. I've heard figures all around there.

There is a VERY low risk factor that companies which launch these will accept. These satellites drive business for companies for years their life cycle is 20 years or so. They are a massive investment, and any risk on getting the satellite to space is seen as something to avoid at all costs. Different rocket companies are considered to be the gold standard because they introduce VERY low risk to your payload. Even though these payloads are insured it's still not worth loosing one because you loose out on a ton of revenue.

I'm speaking strictly of GEO satellites right now, but the whole economy around satellites will have to be rethought if we can cheaply get devices out there. I suspect that using "tried and true" hardware would become much less important and the cost of the massive satellites would go down because we can launch 5 a year every year instead of 1 or 2 a year. If one of them fails that's fine, we can replace it with something new and margins of safety that are applied in the industry can be reduced.

6
exabrial 2 days ago 2 replies      
What blows my mind is it only takes one engine to land the thing out of like the 9(?) it has.
7
ryanmarsh 1 day ago 1 reply      
How do you xray something as big as a rocket for stress fractures?
8
riyadparvez 1 day ago 1 reply      
Just landed safely.
9
greglindahl 2 days ago 2 replies      
This Fortune post is a bad-blog-rewrite of a nearly-fact-free Mashable blog entry. Not a very good choice!
10
dummy_01 1 day ago 1 reply      
Are there pokemon in space?
26
What ever happened to Wordstar? dvorak.org
232 points by DanBC  1 day ago   169 comments top 34
1
Evolved 1 day ago 4 replies      
George R. R. Martin said he still uses it and one of the reasons he cited was that he types all his novels on a computer not connected to the Internet so he doesn't have to worry about hackers leaking his material and also because of all the names, places, etc. in the book being made up, it doesn't have spell check to keep interfering with his writing.(0)

(0) http://www.slate.com/blogs/future_tense/2014/05/14/george_r_...

2
technofiend 1 day ago 1 reply      
Wordstar was good stuff! Between Wordstar and MultiWrite we sold hundreds of Kaypro machines bundled with loud as hell daisy wheel printers to my fellow college classmates. Unless they lived alone almost everyone returned a day or two later for the sound proofed printer enclosure. Why daisy wheel printers? At the time our professors refused to take dot matrix printouts for class assignments or papers; it took a couple of years for Epson's higher print density heads to appear and produce "good enough" output.

More amusingly - WordStar inspired a slew of -star knockoffs including one for farm management called DirtStar. Urban Dictionary has a very different definition for that word today.

At least when I was selling personal computers out of a MicroAge what slowly strangled WordStar sales was IBM DisplayWrite and Word Perfect both of which served the legal community much better. Microsoft Word and Apple's MacWrite didn't help either. And to a much lesser degree IBM's choice of swapping the control and shift keys made WordStar's navigation unnatural. You either had to adapt or order a special keyboard with CTRL back where it belonged.I've never forgiven them for that change. (Damn you, IBM! :shakes fist:)

3
fractallyte 1 day ago 0 replies      
The WordStar mailing list at YahooGroups is still (occasionally) active, but shrinking, as many members are getting rather elderly...

A recurrent topic is how to keep the software working on the latest versions of Windows. Science fiction author Robert Sawyer recently posted definitive instructions: http://sfwriter.com/ws-vdos.htm

4
RcouF1uZ4gsC 1 day ago 2 replies      
There is a chapter on WordStar in "In Search of Stupidity" by Merrill Chapman.

In it Chapman says that this was due to internal and external confusion and competition between WordStar and Wordstar2000.

The book is a great read by the way.

5
morazow 1 day ago 3 replies      
Offtopic, I know one person who still uses Wordstar heavily. And I hope he keeps using it :)https://www.youtube.com/watch?v=X5REM-3nWHg
6
grimgrin 1 day ago 0 replies      
I'm thinking I don't have all my facts straight with this question, but:

A really successful bookstore in my town has been using a networked DOS POS system since as long as they've had POS software. I asked about it once, liking the look of the old monitors and antiquated user interface, and I thought they called it Wordstar. They gave me some more details like how there is still one support company that will actually help them from time to time when they need something looked into. Mostly saying that they've never moved on because it does (almost) exactly what they need.

But now I'm thinking I either a) misheard them, or b) I'm just confusing myself right now because the wording was similar.

Has Wordstar been used in addition to word processing for custom solutions/software?

edit: giving it some more thought, I'm now wondering if they were using Wordstar or otherwise for inventory purposes only, and using ANOTHER piece of software for the actual sales.

edit: looks like I am very possibly thinking of http://www.wordstock.com/o_view.html

7
colanderman 1 day ago 1 reply      
Note that those looking for a WordStar/TurboC-like experience these days can find solace in JOE: https://en.wikipedia.org/wiki/Joe's_Own_Editor
8
kragen 1 day ago 1 reply      
> Wordstar was the product that invented the what you see is what you get notion later to be dubbed WYSIWYG

TECO Emacs was already as WYSIWYG than WordStar years previously, to say nothing of Bravo and other such PARC work from the early 1970s.

> It invented numerous features including overlays, later to develop into DLLs.

Overlays date from at least the 1960s; Fred Brooks (I think? Unless it was Plauger?) famously lamented that the OS/360 linker had history's best overlay support, at exactly the moment that the advent of paged virtual memory made overlays obsolete.

WordStar's UI certainly was very influential.

9
kpgraham 1 day ago 0 replies      
I remember sitting with John Dvorak at his private party in the Sands at Comdex in Vegas. John was buying and we got very very drunk. The occasion was a wake for the obvious demise of Wordstar, which we both preferred. Word Perfect basically out marketed Wordstar by giving away tens of thousands of demo copies at Comdex and Soft-Sell. Word Perfect had a half-assed WYSIWYG but John called it WISHYWYG (you wish what what you see is what you get). The doomed Wordstar was a better product in almost all respects, but Word Perfect was able to market them out of business.
10
pitchups 1 day ago 3 replies      
>In four months Barnaby wrote 137,000 lines of bullet-proof assembly language code. Rubenstein later checked with some friends from IBM who calculated Barnabys output as 42-man years.*

That effectively makes Barnaby 42*3 or 126 times more productive than a normal programmer!

11
DanBC 1 day ago 0 replies      
One thing this article doesn't mention is "executive word processors".

These were cut down versions of full word processors, because executives were not expected to have spent time learning the full package.

Here's an article from 1988 comparing a bunch of different PC word processors: http://jvlone.com/computerpub/InfoWorld/IW_1988-09-26_x-x_Ex...

12
TorKlingberg 1 day ago 2 replies      
I can imagine adding "undo" to a 137,000 line assembly application was not easy.
13
mih 1 day ago 0 replies      
14
anexprogrammer 1 day ago 0 replies      
"Eventually Word Perfect rose to the top based on its superior support program"

Wordstar 2000 wasn't bad, and was far more accessible than Wordperfect that had a distinctly unfriendly learning curve for non techies. Many never took to Wordperfect as you had to learn its codes and it wasn't even WYSIWYG. Even when it became so it was grudgingly and only sort-of.

Wordstar floundered around for a decade seemingly with no idea where they were going. That they survived at all through that period is surprising and speaks of the success of the earlier product.

15
zubairq 1 day ago 0 replies      
Wordstar 2000 was made by Edward Jong, who has since moved on to create his own graph programming language, Beads:

http://www.e-dejong.com/the-beads-project/

16
mhd 1 day ago 2 replies      
Keyboards got cursor keys and mice, so the Wordstar diamond wasn't as great/needed anymore for the general user.

It has its fanbase, but then again, a lot of dead word processors do. Especially amongst journalists, where Amiga word processors and XyWrite still are being kept alive artificially...

17
drauh 1 day ago 1 reply      
I learnt it in high school on a counterfeit Apple ][ with an 80-column display card. (Maybe it ran CP/M? My memory is hazy.) I had to type up the school's Prize-Giving Day program. When I got to college (late 1980s) and got my first programming job on campus, I copped a grad student's .emacs file which made Emacs behave like WordStar. I used it all the way through to my postdoc, modifying the necessary major modes to not conflict with WS bindings. I ran into the author of that .emacs file, but he had moved on to Jove, by then.

The upside of using WordStar bindings was that, as a de facto sysadmin in various academic departments and labs, I could never answer peoples' questions about Emacs.

A more complete history is here: http://www.wordstar.org/index.php/wordstar-history

18
jacquesm 1 day ago 2 replies      
It was murdered by VC's and Microsoft Word ate the remains.
19
webwanderings 1 day ago 2 replies      
I used to use Wordstar a lot in young age. There was another software I used to use, which would let you create tables by hand, and let you fill data in it. For the life of me, I can't recall the name of that software. Unlike today's HTML tables, things were really WYSIWYG back then, between your monitor and the dot matrix printer.
20
jamespitts 1 day ago 0 replies      
I have a lot to thank the humble and useful Wordstar for.

In college, I worked at a law firm and they were still using wordstar as well as wordperfect. Wordstar had this interesting markup that I quickly learned, and it was faster to use than moving around the screen with a mouse. Later on when I first viewed source on a web page, the markup looked quite familiar.

My new career as a web developer had begun.

21
pjmlp 1 day ago 0 replies      
"It invented numerous features including overlays, later to develop into DLLs. It was the first product with dynamic pagination and even help levels among other new features. All modern word processors owe their existence to Wordstar perhaps one of the greatest single software efforts in the history of computing."

I imagine he never saw the word processors developed at Xerox PARC, nor their OSes dynamic loading capabilities.

22
Keyframe 1 day ago 0 replies      
Wordstar had a sort of vim-like quality to it, where you could type your way with ease and power without using a mouse.
23
chmaynard 1 day ago 0 replies      
In today's startup culture, Rob Barnaby would probably have been a co-founder instead of a technical lead. With an ownership stake in the company, perhaps he would have taken the initiative and re-written Wordstar in a high-level language.
24
sbmassey 1 day ago 0 replies      
It sounds like the underlying problem was that the original, elegant software implementation quickly turned into a ball of mud when other programmers started working on it. Wouldn't happen nowadays, of course.
25
guelo 1 day ago 0 replies      
I was more interested in the Barnaby fellow than the businessman's boring story.
26
elmar 1 day ago 0 replies      
Talk about productivity

"In four months Barnaby wrote 137,000 lines of bullet-proof assembly language code. Rubenstein later checked with some friends from IBM who calculated Barnabys output as 42-man years."

27
js2 1 day ago 0 replies      
My Apple II had a CP/M coprocessor card (manufactured by Microsoft) just so we could run Wordstar.
28
projectramo 1 day ago 1 reply      
Whatever happened to Rob Barnaby?
29
PhantomGremlin 1 day ago 1 reply      
This program was fantastic, given the limitations of the computers at the time.

 WordStar ran well on a 4 MHz Z80 computer with 64K bytes of memory.
Go ahead, read that sentence again!

WordStar would try to update its (sort of WYSIWYG) screen display while you were typing, but it would often lag. That was because many computers were connected not to an internal video display card, but to an external terminal via an RS-232 serial interface running at 19,200 baud (1,920 characters per second). Fortunately the display output was as ASCII characters, not bitmapped, so that lessened the processing requirements quite a bit.

Whenever WordStar detected that its screen rendering was lagging behind the input typing, it would very cleverly "skip ahead" in the rendering. The result was that the computer felt quite responsive at all times.

30
rottyguy 1 day ago 0 replies      
IIRC, Word Perfect was a peer/competitor to Word Star back in the day.
31
zerr 1 day ago 1 reply      
Anybody remember ChiWriter? :)
32
dredmorbius 1 day ago 0 replies      
Dvorak's comment on piracy stands out -- it was a useful marketing model:

And despite complaints by the company and others, people wanted software they could copy and use on more than one machine. During this era piracy sold software and created market share. People would use a bootleg copy of Wordstar and eventually buy a copy. Wordstar may have been the most pirated software in the world, which in many ways accounted for its success. (Software companies dont like to admit to this as a possibility.)

Bill Gates somewhat famously did admit this talking of piracy of Windows especially in East Asia -- he'd much rather a pirated copy of Windows was running than a legal copy of GNU/Linux.

33
Angostura 1 day ago 0 replies      
^KD
34
drivingmenuts 1 day ago 2 replies      
We changed the syntax and now it's called Markdown.
27
D4 Declarative Data-Driven Documents js.org
257 points by joelburget  1 day ago   77 comments top 19
1
krebby 1 day ago 4 replies      
This is a good start but it glosses over the hardest part of making React work with D3, which is animations. Most animations you want to do with D3 don't fall neatly into categories you can use with CSS animation (namely updating paths and entering / updating / exiting animations).

React as an ecosystem is pretty fantastic but the React animation story is still pretty terrible unfortunately. "Portals" / "reparenting" is a majorly hacky way of getting an element from one part of the DOM to another, and even React Motion, which solves a few of the animation gripes, is hard to use, slow, and brittle in my experience. There just isn't a good substitute for the enter-update-exit selections that make up the core of D3.

My usual workflow is to get as much DOM building done in React as I can, with D3 filling out the tricky bits. D3's lifecycles and direct DOM manipulation are much more complicated to reason about than React, on the whole.

2
weego 1 day ago 5 replies      
Sorry, I don't get it. The code in any of the given example comparisons has no less obvious complexity and just removes the abstraction that writes the SVG nodes for you with having to explicitly write them in your JS which I similarly have never understood why we're intent on going back to.

Different for the sake of different is fine, but "preferable" is a bold statement that appears to have no evidence.

3
deno 1 day ago 0 replies      
Hybrid approach:

> let React do all the migraine-inducing heavy lifting to figure out what to enter and exit, and have D3 take care of the updates.

https://medium.com/@sxywu/on-d3-react-and-a-little-bit-of-fl...

4
colinmegill 1 day ago 0 replies      
Love this - awesome work.

Wrote an article on this concept last year: http://formidable.com/blog/2015/05/21/react-d3-layouts/

and spoke about it at Reactive2015: https://www.youtube.com/watch?v=n8TwLWsR40Y

Have been doing data visualizations like this ever since, it works. Yes, the hardest part is animation, which we had to address, as well as the axis components and other higher order functionality that builds on d3 itself:

https://github.com/FormidableLabs/victory-animation

http://formidable.com/open-source/victory/docs/victory-axis/

5
dechov 1 day ago 1 reply      
My team and I have been using and finding success with such a pattern for a couple of years now.

As others are alluding to, transitions are not currently so easy to express with React alone. We wrote and recently open-sourced a React component that aims to encapsulate the power and simplicity of d3 transitions (feedback & contributions welcome): https://github.com/dechov/join-transition/

6
otoburb 1 day ago 1 reply      
>>There are some pieces of d3 that I would love to use but aren't easily portable. For example, d3-drag and d3-zoom smooth over a lot of the quirks you'd have to deal with when implementing dragging and zooming, but they're only designed to work with d3 selections [...] Ideally we could decouple these libraries from selections, but it might also be possible to mock the selection interface they expect while still using React.

According to the changelog for the recently released version 4.0.0 of D3.js by Mike Bostock[1], one of the big shifts is for D3 v4.0.0 to be "composed of many small libraries that"[2] can be used independently. So the author of D4.js can now more easily examine Bostock's refactored code for Zoom and Drag as they please.

[1] https://github.com/d3/d3/blob/master/CHANGES.md

[2] https://github.com/d3/d3/releases/tag/v4.0.0

7
aurelianito 1 day ago 0 replies      
I find that D3 is great, but it needs an abstraction for the general update pattern (https://bl.ocks.org/mbostock/3808218).

I did that! One function that receives actions for update, create, delete, etc. Now my code is easier to read. Check the function gupChildren (AKA: General update pattern for child nodes of a selection) in my code, and feel free to use it: https://bitbucket.org/aurelito/sandro-lib/src/83b81c4b556848...

At last, unless you understand what you are doing, use selectAll and no select.

Cheers!

8
jordache 1 day ago 3 replies      
argh.. I hate the style of mixing JS with markup language....
9
jtwaleson 1 day ago 3 replies      
I'm not sure if this is it, but the world needs a simple way to integrate D3 and React. Looks good at first sight though.
10
pathsjs 1 day ago 0 replies      
Shameless plug: I have been proposing this approach for long. I found that the missing piece is how to generate the actual geometric information starting from data, hence I wrote a library to this effect https://github.com/andreaferretti/paths-js

For the animations, it is often just a matter of continuosly updating the state of some component, something that I did in this mixin https://github.com/andreaferretti/paths-js-react-demo/blob/m...

11
athenot 1 day ago 0 replies      
I've done the opposite: use d3 to provide React-like semantics.

- Basically the data fetches get pushed to a lightweight queue/topic system in the browser (using publish.js).

- Each part of the page listens for messages it cares about and updates the page accordingly.

There is still some of the d3 plumbing (I must specify how updates differ from mere element creations, and handle the exits). But I still get a nice decoupling between the different parts of my app. Coupled with browserify, this ends up with semantics not unlike Erlang (I even named the listener method "receive" and it acts on patterns of messages).

12
b34r 1 day ago 0 replies      
Intelligently-optimized JS animations are generally more performant than CSS ones by an order of magnitude. See Greensock for relevant examples. Side note, your examples don't assign keys to your array-mapped paths, which will emit warnings.
13
koopatrol 1 day ago 1 reply      
I'm excited to play with this. I wonder what kind of overhead it has. I imagine it would be fairly low.
14
FoeNyx 1 day ago 0 replies      
@joelburget, btw the "demos" link leads to a 404.

( https://github.com/joelburget/d4/demo )

15
guylepage3 1 day ago 2 replies      
Very cool.. My only question is.. Is this "D4.js" a full feature replacement for D3.js? Could be a bit misleading. Might want to have a new name for it? Thoughts?
16
adamwong246 1 day ago 0 replies      
I've been working with a similar approach for a while and I give it my vote of confidence. Much more elegant than d3's awkward data-binding.
17
nthitz 1 day ago 1 reply      
d3.transition works on element attributes not just CSS. Wish React had something like that!
18
choward 1 day ago 0 replies      
What is this? I can't even scroll.
19
colemannerd 1 day ago 1 reply      
I need something exactly like this.
28
Ask HN: Insider history of the demise of Kodak?
333 points by lawpoop  2 days ago   225 comments top 43
1
intrasight 2 days ago 5 replies      
This is something I'd posted previously on HN:

I worked at Kodak as a summer intern in '85. Was the era of the disk camera. Was also my first programming job. Lotus 1-2-3.

Most people today can't comprehend the scale of American manufacturing as it still was at that time. The Elmgrove plant where I worked (one of a dozen facilities in the Rochester area) has over 14 thousand employees. Our start and end times were staggered in 7 minute increments to manage traffic flow.

That none of that would exist 20 years later was inconceivable at the time. The word "disruption" wasn't in business vocabulary. Nor was the phrase "made in China". Some senior technical managers saw the "digital" writing on the wall. But what could they do? What could anyone do? There was no way to turn that aircraft carrier on a dime.

At the end of my summer internship, I attended a presentation that our small team gave to more senior managers at the top of Kodak Tower in the conference room adjacent to President Chandler's office. One of the managers took me to the window and pointed out to me different plants and facilities of the vast Kodak empire spread out across the Rochester region. I assumed like many that Kodak had a bright future ahead because they had a world-renowned brand and excellent scientists and engineers. What many at the time didn't yet recognize was that there was no business model in digital cameras that would employ 100 thousand engineers, managers, factory workers, technicians, and staff. There were certainly no senior managers willing or able to sacrifice the golden goose of film to pursue something entirely different.

2
Syzygies 2 days ago 3 replies      
My dad devised the "Bayer filter" used in digital cameras, in the 1970's in the Kodak Park Research Labs. It is hard to convey now exactly how remote and speculative the idea of a digital camera was then. The HP-35 calculator was the cutting edge, very expensive consumer electronics of the day; the idea of an iPhone was science fiction. Simply put, my dad was playing.

This was the decade that the Hunt brothers were cornering the silver market. Kodak's practical interest in digital methods was to use less silver while keeping customers happy. The idea was for Kodak to insert a digital step before printing enlargements, to reduce the inevitable grain that came with using less silver. Black and white digital prints were scattered about our home, often involving the challenging textural details of bathing beauties on rugs.

3
thisjustinm 2 days ago 3 replies      
I worked for Kodak from 2002-2009 in their Windsor, CO plant, in the Thermal Media Manufacturing division and also wrote the occasional post for the corporate blog (thermal media == Dye sublimation printer media used in Kodak Picture Kiosks mainly). AMA.

It was the same thing every year - digital is cannibalizing film faster than we thought, we need to close down X or lay off N people. A year or two of that sure but over and over again and it became clear the executives were just not getting it. Looking back I wonder if they just decided to slowly ride the ship down, extracting nice salaries along the way. I still can't understand how someone - activist shareholders, a board member with half a brain, an executive willing to speak out - didn't make a bigger stink and try to get fresh leadership.

I remember one year at an all division meeting they showed the latest corporate "motivational" video - "Winds of Change" (http://m.youtube.com/watch?v=JYW49bsiP4k). We thought finally, they get it and are admitting we've been stagnating and now we're gonna turn it all around. Everyone was super pumped up for weeks.

Then we realized the only thing that had changed was our ad agency who had produced the video.

4
afian 2 days ago 2 replies      
From the warren buffet autobiography ...

What about Kodak? asked Bill Ruane. He looked back at Gates to see what he would say.Kodak is toast, said Gates.8Nobody else in the Buffett Group knew that digital technology would make film cameras toast. In 1991, even Kodak didnt know that it was toast.9Bill probably thinks all the television networks are going to get killed, said Larry Tisch, whose company, Loews Corp., owned a stake in the CBS network.No, its not that simple, said Gates. The way networks create and expose shows is different than camera film, and nothing is going to come in and fundamentally change that.

Youll see some falloff as people move toward variety, but the networks own the content and they can repurpose it. The networks face an interesting challenge as we move the transport of TV onto the Internet. But its not like photography, where you get rid of film so knowing how to make film becomes absolutely irrelevant."

Check out this book on the iBooks Store: https://itun.es/us/hj_bz.l

Excerpt From: Schroeder, Alice. The Snowball. Bantam Books, 2009. iBooks.

5
themgt 2 days ago 3 replies      
I lived in Rochester, NY and got a 20 hr/week job interning at Kodak in the slide film research department during HS in 1999-2000.

I remember on one occasion my boss, a mid-level executive (head of new slide film research? something like that) asked me what I thought about digital cameras, I think both because I was young and seen as "the computer guy". I didn't own one but I'd read about them a decent amount. I told him I understood they were expensive/low-quality at the moment but the advantage of ditching film and using ever-improving digital tech still seemed huge. I don't remember his exact words, but he couldn't really see the appeal or promise.

The 7-story building I worked in is just a mound of grass last I looked. I have thought from time to time just about the institutional inertia that fights against seeing what's going on and adapting. There were just tens of thousands of people with very highly specialized skills around film and chemicals and processing and dark rooms and paper and so many things most of which simply aren't relevant to a digital photography world.

Now, granted, other film/pre-digital camera companies did a far better job making the jump. I'd argue that's maybe in part because Kodak saw itself as emotionally wed to film, while other companies saw themselves more as camera/photo companies. That Fuji has been able to survive is more surprising to me than Canon/Nikon/Olympus/etc.

6
bendykstra 2 days ago 2 replies      
My high school physics teacher worked at Kodak's research labs on CCDs, including some that were for use in "specialized applications," which I assume to mean spy satellites. He said that he had quit when Kodak decided to stop shooting themselves in the foot and shot themselves in the head instead. That would have been the early 90s, I think. I don't have any specific mismanagement stories, but I'll send you his email address (the most recent that I could find.) I always wished that he would talk about it in more detail and now that it is some years later, maybe he will.

He told us one fun story. One of his research papers had been cleared by the censors for publication and accepted by a scientific journal. However, just before the journal went to print, the censors changed their minds. It was too late to fix the layout, so the edition was released with a sheaf of blank pages in the middle. He said that it was the proudest anyone had ever been of some blank paper.

Edit: Apparently, there is no way to private message an HN user. If you have a Reddit account or similar, I can pm you there.

7
brudgers 2 days ago 0 replies      
Ten years ago, "the mobile web" might have been a reference to WAP. Nine years ago, when the iPhone arrived, mobile web was Edge: where you could get it and when you could afford it.

Companies like RIM and Nokia had smartphones. The people running them were smart. Their engineering was good. It had to be because wireless access to the internet wasn't ubiquitous. Suddenly the companies faced the first mover disadvantage.

It's difficult to imagine how revolutionary the technology of Kodachrome was. It utterly disrupted consumer photography and photojournalism and professional photography. The fact that Kodak was experimenting with digital photography in the 1970's shows how out front they were and they were right to treat it as a technology that wouldn't be viable for more than two decades...and one that nobody has figured out how to monetize except via the sale of hardware.

There's probably no plausible alternate universe where Kodak managed to produce sustainable profit from processing and storing digital images or selling media or anything related to their core business. Digital photography moved image production out of retail channels. Today I can text an image to ten of my relatives without a trip to Walgreens, and even one-hour processing with $0.08 3x5 prints, available in the mid 1990's, was a pretty amazing innovation versus the four or five days and significantly higher prices that were typical in the 70's and 80's.

Kodak wasn't a company standing still. It just didn't have a good way to make money from digital imagery: HP had them beat in the printer ink as liquid gold market and the camera manufacturers weren't going anywhere: optics are still bounded by the physics of optics.

Circling back, I think it wasn't so much people sitting around and thinking "we're Kodak" as it was the fact that Kodak wasn't Nokia and hence didn't have a history of selling off its mainline business and moving into a new industry. Not that as a publicly traded company in the US that would have ever really been a viable option. Quarter by quarter, Kodak was obligated to maximize stockholder value for the short rather than long term.

Companies become obsolescent. Consider Sun.

8
aaronbrethorst 2 days ago 3 replies      
There are quite a few articles online that go into some detail about it. Just search for "Kodak Clay Christensen", or something similar. Here's one: http://www.economist.com/node/21542796

 Another reason why Kodak was slow to change was that its executives suffered from a mentality of perfect products, rather than the high-tech mindset of make it, launch it, fix it... Bad luck played a role, too. Kodak thought that the thousands of chemicals its researchers had created for use in film might instead be turned into drugs. But its pharmaceutical operations fizzled, and were sold in the 1990s.
On a related note, I just purchased a 10 pack of Kodak Portra 400 4x5" sheet film last night. I normally shoot Kodak's Tri-X and 5222 cine film, or Fuji's Acros 100, but thought it'd be fun to get into large format color photography on occasion.

9
YZF 2 days ago 2 replies      
I worked for Creo when it was acquired by Kodak in 2005 for $1B and then later for Kodak for quite a few years. At the time Kodak still had cash but film was obviously in decline. This acquisition was part of a $3B acquisition spree.

Kodak's management proceeded to run Creo into the ground through a series of layoffs, remote micromanagement, shuffling things around etc. At the time of the acquisition Creo was profitable (though definitely with some challenges) and had a few growth initiatives that looked promising (all cancelled). Very capable management, good people, and well run. There were a lot of opportunities to create some long term value in different areas but the only Kodak strategy was to keep cost cutting and milk all the businesses to their death.

What was amazing to me is that the CEO kept his job even after Kodak's market cap went below $1B. I forget what that market cap was at the time of the acquisition but probably in the $10-$20B range. Gotta be one of the top ten value destroying CEOs of all time.

For many many years it didn't matter what management did, film kept printing money for the company. Only when things changed could you tell that management was actually incompetent. Before that you didn't need to be competent to keep making money. Kind of like Warren Buffett says, only when the tide goes out do you find who is swimming without their bathing suits...

I just saw something on Bloomberg the other day about Kodak finally releasing some anti-counterfeiting technology that AFAIK is the same one developed at Creo over 10 years ago (Traceless).

EDIT: Another personal anecdote is that the first "real" digital camera I ever used, I'm guessing around 1996, was a Kodak. It was pretty decent. I think the price tag was quite high. At that point in time Kodak had a good reputation in digital cameras. The problem is digital cameras would never replace film as a business even if Kodak went 100% into digital. They needed to diversify.

10
WalterBright 2 days ago 3 replies      
I (and I've noticed this in others my age) still have a lingering thought that photography is expensive, and to conserve 'film'. Back in the day, even with slides, the finished cost per slide was about a buck each. It's just hard to dispel the habitual thought "is this shot worth taking?"

I'm still getting used to the ability to snap a pic and casually text it to a friend instead of trying to describe it.

I just downloaded a phone app that turns it into a scanner, complete with OCR. It's not as good as my flatbed, but it's incredibly convenient.

I feel like I'm living in the future.

11
lordnacho 1 day ago 1 reply      
I don't see it as so much of a failure of management.

To grab a large slice of some niche, you need to be invested in exploiting it. In Kodak's case, that meant having all these huge facilities for film technology. There's no other way to be a big player in this niche. The upside is nobody will ever avoid thinking of your brand when they are shopping for film.

The downside is you can't just change direction. If film becomes obsolete, the market in the replacement is not going to be big enough at the stage when you can see it. And its size is in itself what you will be using to judge whether it will happen, so it's quite hard to decide to dump all the old tech in favour of a tiny industry that might run into trouble. I'm sure there were many naysayers pointing out various flaws and potential roadblocks with digital.

If you do change direction, you inevitably step on the toes of someone else in the value chain. You make your own camera, you annoy the camera manufacturers. You want to be a chemicals business? It's full here, too. Remember you're a huge operation, and there's only so many things to do.

We should not mourn the death of large corporations though. Just like in nature, it frees up resources for new players doing new things.

12
QueueUnderflow 2 days ago 0 replies      
Steven Sasson, the man who invented the digital camera, gave a talk to my program (CE at Rochester Institute of Tech) about the process and downfall of his invention at Kodak. It all started as a backroom research project with no funding, to see if they could do something with the new technology of CCD cells. The camera ended up at .01 megapixels and stored images on a tape (with about a 20+ second write time). What most people did not realize is he also made a device for displaying the image on a TV (what good is a digital image if there is no way to display it). IIRC the first "public" demo of this technology was during the Tiananmen Square Massacre, to smuggle/transmit the video out of the country. If you watch one of the original broadcasts, the only sign of it was a little Kodak logo at the bottom of the screen.

Now there is something you need to know about film: it was probably one of the most profitable products ever. Also, Kodak was not really a pure technology company; it was more of a consumer tech/chemical company. While they did have a lot of engineers (many of my professors all worked there), some were more focused on the manufacturing process rather than an end product. A lot of their internal structure was focused on making and processing chemical film. When he showed this product to the upper level (though never the CEO), the demo went something like: walk into the room with the camera, take a picture of the attendees, show the picture on the TV. While this was totally unheard of at the time, mostly they all just laughed at the quality of the image and brushed it off. They also saw no appeal in viewing pictures on a TV screen vs an actual print. A patent was filed and that was mostly the end of it.

A lot of people laugh at Kodak for that mistake, but really it's just another example of the innovator's dilemma. At that time it made perfect sense for Kodak to make that decision, because they were just making so much money. One could say the technology was ahead of its time.

Well, then digital really started to take off. They ended up turning a lot of their focus to making personal photo printers. In 2004 Kodak realized that almost every company that made a digital camera infringed on their patents and started to sue them all. They ended up winning a lot of money, but they could never get their foot back into the market.

TLDR: The camera was ahead of its time and threatened to undo the whole company, whose main focus was producing and processing chemical film.
13
earthserver 2 days ago 0 replies      
It's worth noting that Eastman Chemical, which was spun off from Kodak in 1994, is doing just fine as a Fortune 500 company with about 15,000 employees and a market cap of $10B USD.

Its stock is up some 200%+ since the spin-off from Kodak in '94 (compared to 400%+ for the S&P 500).

If you had kept your Eastman Chemical stock after the spin-off from Kodak in '94, sold your Kodak stock, and reinvested the proceeds into Eastman Chemical, you would have more than doubled your capital, not including dividends.

14
rando444 2 days ago 1 reply      
Digital cameras in the 70s and 80s were impractical, low quality devices that were mostly useless without a computer, which is not something that the majority of people had in their homes.

Kodak did follow up with digital though. They had a digital SLR by 1994 (before Canon or Nikon) and digital cameras for $299 by 1996.

The advantage that companies like Nikon and Canon had was their lenses, and the huge investments photographers had made in those lenses are what locked them into a particular brand.

This is why a lot of traditional camera companies (Konica, Minolta, Fuji, Polaroid) were unable to keep up.

15
hemancuso 2 days ago 0 replies      
Grew up in Rochester. Both my father and Grandfather did about 30 years at Kodak at a fairly high level.

How can you transition from the highest margin CPG product of all time to a new regime where there are almost no marginal costs per use? Kodak had well over 100k employees at its peak and was a global brand just like coke. They even saw their demise decades before it occurred.

But what is management supposed to do in a quarterly environment where CEOs are getting ousted other than double down on their success and cut costs? There was never any real money to be made in digital photography and if anyone was going to figure it out, they certainly weren't living in Rochester.

16
DoubleGlazing 1 day ago 0 replies      
I worked for Kodak UK for a few months in 2002. I was a contractor working on their ERP systems, specifically reporting and BI tools. I wasn't privy to anything strategic but I could see some odd things.

The oddest thing was that they were still trying hard to push the APS format. It was a pretty poor format when it first came out, and it was now looking even worse when compared to early mass market digital cameras.

There was a certain arrogance amongst higher up managers. A sort of "We're Kodak, we're the best, we dictate the market!" There was a noticeable staff churn problem as a result. I was on a team of 20 of which 14 were contractors.

I can see some interesting parallels between Kodak and Nokia. Two giants who dominated their sectors and just didn't anticipate changes in consumer demand and then couldn't/wouldn't adapt.

17
mathattack 1 day ago 0 replies      
Here is the irony.... They were not too late, they were too early. [0] Their CEO pushed hard in digital from the early 90s, but they couldn't handle the losses. This is an example of a company's executives seeing the right thing, but not being able to survive the losses until the market caught up.

[0] http://www.wsj.com/articles/SB869781042786228000

18
mxwll 2 days ago 0 replies      
Recent design grad from RIT. I learned from / worked with many former Kodak employees from the late 90's into early 00's. The majority of them worked on the consumer product side of the business.

It is my understanding that Kodak's efforts were too little too late. The organization was driven by the business people and was hemorrhaging money, only staying afloat through licensing and selling off their patent portfolio and the medical imaging division. They watched their consumer product profits from film evaporate and failed to transition into the digital age.

These designers and engineers that I have learned from and worked with are certainly brilliant individuals. It seems that the organizational culture did not provide them with the creative freedom needed to envision and develop products competitive with those coming from smaller, nimbler companies.

19
miesman 2 days ago 1 reply      
I worked for a subsidiary of Kodak around '91 and can offer a small view. My impression was that they were very aware of what was coming. They came out with the PhotoCD during this time. If you look at it, it was really a way to lock people into film for a longer time, since you still needed to take your negatives in to be transferred.

I also remember taking a photography class in 95 and having the teacher say right at the beginning that in 10 years everything is going to be digital. My impression of the time is that everyone knew what was coming.

20
drited 13 hours ago 0 replies      
They didn't entirely have their heads in the sand.

-- They invented the world's first digital camera. Source: http://petapixel.com/2010/08/05/the-worlds-first-digital-cam...

-- They developed a massive patent portfolio including many digital photography patents that had an estimated value of $1.8 to $4.5 billion at the time of their Chapter 11 filing (read the article if you're interested for why it got sold for far less - it's fascinating) Source: http://spectrum.ieee.org/at-work/innovation/the-lowballing-o...

21
vibrolax 2 days ago 1 reply      
Rochester NY resident since 1982, Kodak's peak employment year. Wrote embedded software for Xerox for a couple years around 2000. My belief is that companies like Kodak, Xerox, HP, IBM, etc. have business structures that cannot survive in a low margin business. These companies drove and thrived with technological change for decades. The leadership could not accept the change of their key lines of business into low margin commodities.
22
bksteele 1 day ago 0 replies      
Interesting and sad too. I worked there from 1987 to 1996 mostly in the film business. However, the lab I worked in used digital imaging to analyze and measure silver halide grains used to make film emulsion! So we were well aware of progress being made in digital imaging.

When I got to photocd, the management's focus was on replicating the quality and detail of film in the digital space, and they did not notice/understand that the lower level of quality of digital would still be useful. Their marketing was driven too much by the voices of professional photography and the ad industry's need for the quality capability of enlargement.

My colleagues and I actually started work on a business plan to use lower resolution digital imaging for applications in real estate and other industries that could use snapshots of lower resolution. We had really just gotten started on this, when management found out via a personal dispute of one of my colleagues and hauled us in for discipline. We stated our case and said we thought Kodak should get involved, but that fell on deaf ears. Our group fell apart when Kodak did not renew one of our contracts, and he went to work for Sony writing their digital image storage software, which became one of the top tools at that time.

The impression I got was that upper management was well aware of the digital advance but completely oblivious to the speed of its progress. They thought in terms of the huge length of time it took to develop new film products and their associated factories. In 1992, they actually thought that digital would not be a serious threat until about 2005. Oops!

23
timbarrett 2 days ago 0 replies      
Spent 3 weeks integrating the Kodak robotic microfilm system with a VMS-based VAX in a joint bid with Kodak and Digital in this time frame. This system was supposed to save Kodak. The Kodak VP of Government Systems from Washington flew in to "supervise" and royally screwed everything up. Guy was the biggest nut bag in the world.
24
TheCondor 1 day ago 0 replies      
I hate these topics. After the fact it is always easy to make better choices and every big company has made more than a few mistakes, some of which really stand out in hindsight...

Kodak was a darling of the entire business world, for like a century. They printed money, they provided great jobs to hundreds of thousands of people. They were disruptive and put photography into everyone's hands. I think companies have a fixed life and theirs came and went.

Trust me, if they had a big pivot that was blown off, it was in to industrial chemicals or energy chemical manufacturing or something, it wasn't simply a matter of repositioning as an imaging company from a film company.

Tell me, how should Pony Express have pivoted when the telegraph came on line? Kodak, the brand and most of the company, was in a similar position.

25
_ph_ 2 days ago 1 reply      
I have no inside information whatsoever, but I am still today using a digital camera with a Kodak sensor almost daily (an M9). I am still baffled how a company of this size and revenue can fall apart like this - probably only paralleled by the demise of Nokia. Till the early 2000s, Kodak was not only a leader in film but also in digital photography. Their CCD sensors are still made today, their sensor division now owned by OnSemi.

All the first Nikon and Canon DSLRs had Kodak technology in them. Not sure how things happened in detail, but eventually Nikon and Canon had their own digital technology, mostly based on CMOS sensors. Kodak had one more CMOS camera of their own - the 14n based on a Nikon body. But while being the highest resolution body at that time, it got mixed reviews. Still at that time Kodak probably would have had enough cash to buy Nikon.

Yet they withdrew from professional digital cameras, the last life sign was providing sensors for the Leica M9 and S. And with no other leg to stand on, the company mostly vanished with the collapse of the film market.

Ironically, the price you pay per roll of film is at an all-time high, so in theory the business of making film should be more profitable than it was, but the scale just is not there any more.

26
syats 1 day ago 0 replies      
Just thought it might make the thread more interesting to mention that Kodak used a non-standard calendar up until 1989.

https://en.wikipedia.org/wiki/International_Fixed_Calendar

27
Broken_Hippo 2 days ago 0 replies      
This isn't exactly insider, but offhand experience. I worked at a pharmacy from 2005-2013 that had Kodak film processing and digital printing machines. Early on, there was still a great deal of actual film. Digital cameras were still expensive or not rivaling the quality of film. Many machines didn't offer negative-to-cd, but offered a floppy disk. This slow updating of their equipment was a theme in the company. As film sales slowed, Kodak grew worse. They outsourced tech support for the machines, and the support staff were graded on whether or not they had to send a technician out to the store. That was switched to another system, where a tech called back. Most times this was recorded as done, but wasn't. This sort of thing was very common, and didn't improve as they sold off and outsourced bits of the company. They did finally start updating machines, having fewer locations that processed film quickly and phasing out send-away processing - truly focusing on digital products. By then, their advantage had slipped. While they relied on quality film before, their prints seemed no better in quality than competitors'.

Part of the problem was obviously that they didn't take digital seriously soon enough. This, in combination with what seemed like a poorly run company, meant software always lagged behind, sometimes by years. For example, it took some time before folks could put video from their camera onto a DVD, and the CDs themselves could only hold 120 pictures while SD cards were holding thousands. It was a software issue. In addition, some of the retail outfits they contracted through seemed overpriced.

I think the official story was filled with lament over not taking digital technology seriously early on. While I read some stuff about them, it has been some years and I don't know how much was reserved for the workplace environment.

28
guscost 2 days ago 0 replies      
Here's a memoir from Brad Paxton, a family friend. Might be worth checking out (disclaimer: I'm related to the editor) since he was a VP who had a lot to do with high-tech projects during this time:

https://www.amazon.com/Pictures-Pop-Bottles-Pills-Electronic...

29
bkyan 1 day ago 0 replies      
I was a summer intern working in OLED research at Kodak in the early 90's. While not that many people associate Kodak with OLED, Kodak was considered one of the world leaders in OLED research at that time. There is a short paragraph about the folks I was working for on the OLED page on Wikipedia ( https://en.wikipedia.org/wiki/OLED ). While Kodak was able to license the associated technology/patents, it didn't seem like they came out with any consumer OLED products of their own in the years that followed my short stay there.
30
ernestbro 2 days ago 3 replies      
Thought experiment..even if you could time travel, how would you have saved Kodak?
31
shas3 2 days ago 3 replies      
Big ships are slow to turn. One systemic cause, based on my experience in old, big companies, is that if profit margins on some products are high, mid-level and some high-level managers optimize for that product (e.g. roll-to-roll manufactured films at Kodak). With such optimization, you end up at a "local maximum". This continues until you have an "iceberg right ahead" moment when profits start to spiral downward. Depending on the pace of growth of the technology/sales of your competitor(s), this descent is rapid and irreversible.
32
power 1 day ago 0 replies      
It's tangential, but I'm reading Dracula atm and Jonathan Harker mentions taking pictures with a Kodak. It was written in 1897.
33
markbnj 2 days ago 0 replies      
We lived in upstate NY during the 1960's and I visited the Kodak campus in Rochester, I think as a cub scout. It was an amazing place. I remember feeling awe as we were walked through this long dark place where they proofed new rolls of film. The memory of the feeling, at least, is still quite clear after all these years. Great to read all the anecdotes about the company here.
34
nl 2 days ago 1 reply      
There are a few insider details in a NYTimes profile of Steven Sasson[1], who invented the digital camera while at Kodak.

[1] http://lens.blogs.nytimes.com/2015/08/12/kodaks-first-digita...

35
redtexture 2 days ago 3 replies      
The story of staggering institutional inertia and failure is amazingly similar to Xerox's, and similar to hundreds of other US corporate manufacturing giants declining into bankruptcy over the last 40 years by failing to be concerned with, or collectively act on (board, executive, staff), risks on a several-decade time scale. This list of inertially failing organizations includes General Motors, IBM, HP, AT&T, and others.

[Can any public company plan on a multi-decade time scale? This is an anthropological, sociological, cultural and business question of interest in the academic literature.]

Others, along with Harvard Business School professor Clayton M. Christensen, have written extensively on the general cultural difficulty of spending money now on an uncertain future while the present business is still making money from a multi-decade investment.

Xerox was originally under the thumb of Kodak, in Rochester, New York, as the maker of photographic paper, chemicals and related equipment, known as "The Haloid Photographic Company" (halides -- compounds of bromine and related elements with silver -- being the key light-sensitive chemicals of photography), founded in 1906.

The owner-leadership of the Haloid company in the 1930s and 1940s knew the company was doomed in the long run with its then-present capability. Later on, it turns out, neither the Xerox of 1995 nor the Kodak of 1995 understood the necessity of a keen paranoia about their technology's changing future and their unreliable future income.

Xerox's, Kodak's, and other corporate stories are studied in graduate-level business and engineering programs looking at the risks of failing to plan for change, the end of patent monopoly and market domination, and the cultural and financial necessity of understanding, committing to, and planning on the demise of the present technology currently sustaining an organization.

The Haloid company was not crushed by Kodak, in part, because of antitrust laws, but it was always in danger of failing in a competitive market dominated by Kodak, and from the 1930s onward had committed money to find and own a technology that was sustainable and patentable to survive: the Haloid company invented the term "xerography" (dry image transfer).

After the growing success of xerography in the late 1950s and 1960s, following near-death experiences bringing the technology to market, the culture of developing, and ultimately surviving on, markets for new-but-different technology was lost amid the profits of xerography in the minds of the then-present executive leadership, addicted to the amazing profits of xerographic patents: thus the results of new research and technology failed to get the support needed to make it to market.

Yet the Xerox company's creations, developments and discoveries are the backbone of the present era of computing: Ethernet, laser printers, personal computers, graphical user interfaces, Smalltalk (see also object-oriented programming and virtual machines, and the like). Look up the history of Xerox's Palo Alto Research Center (PARC). The Smalltalk VM was the foundation of Self, and of the later Java VM, with the addition of a couple decades of further work.

See "Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer" by Douglas K. Smith and Robert C. Alexander for that story, published in 1988, nearly three decades ago. The cultural failures and non-response parallels are remarkable.

Doubtless there are dozens of similar reports on the Kodak cultural inertia and failure to understand change, in academic articles, and books. Google is your friend.

Kodak leadership thought that they could not profit from digital cameras, and early and expensive technology could only be used by large institutions (NASA). As costs dropped, Kodak thought that their market was commodity consumables, so they licensed the technology they created. The consumables chemistry and photographic films and papers market ended with the rise of digital photography (no negatives), and digital, color ink-jet and xerographic reproduction of images.

EDIT: halide elemental chemical family

36
jwillis85 1 day ago 0 replies      
In some sort of alternate history, setting an Apple acquisition aside, Kodak would have acquired Kinkos and branched out into online on-demand photo proofing for digital images. Then gone totally Android and come out with something more like an iPad than a phone, perhaps leaning in the direction of GeoTagging and environmental sensing. Phone tech might take a second seat to messaging and something more akin to video calls on demand.

Yeah, Shoulda Coulda Woulda; it just seems there were few barriers to surviving and thriving other than egos and business-as-usual attitudes.

37
tmaly 1 day ago 0 replies      
I was at RIT during the later half of the 90s. Most of my professors had done work at Kodak, and the mantra was that if you had a job there, you were set for life.

I think the digital camera and the market crash from the dotcom bust worked in tandem to take down Kodak.

They had some really smart people working there, and they invented some really important tech that was used beyond cameras. Think image processing for satellites etc.

38
devin 1 day ago 0 replies      
I know of a story where a company selling software that banked on digital was told quite seriously by a member of the board that digital would never be a thing. This was around 97-98.
39
rmason 2 days ago 2 replies      
What I've always been curious about is whether Apple's engineers reached out to Kodak when designing the iPhone's camera.

It would seem logical that Kodak could have potentially been a possible supplier.

40
cm2187 1 day ago 0 replies      
What I don't get is that my very first digital camera was a Kodak (must have been in 2000). It wasn't very good (but no cameras were then) but it was one of the cheapest. So it's not like they were absent from the digital market. But still somehow they missed the boat.
41
estrabd 2 days ago 0 replies      
I heard a similar story about Polaroid having their own digital camera. Not sure about the details.
42
DanBC 2 days ago 1 reply      
I'm also interested in the more specialist products - holography, xray, and movie film.
43
MicroBerto 2 days ago 1 reply      
Please don't do this here.
29
Robert Fano has died mit.edu
200 points by tjr  4 days ago   26 comments top 9
1
hcs 4 days ago 3 replies      
I've never been able to find an electronic copy of his 1949 report "The Transmission of Information", it's missing from DSpace@MIT, so here's a scan:

https://archive.org/details/fano-tr65.7z

This was an original conception of information theory, somewhat beaten to the press by Shannon, though the two had talked about it (and indeed Shannon cites this in "A Mathematical Theory of Communication").

I have a copy of TR 149, Transmission of Information II, as well, but no scanner anymore.

2
cvwright 4 days ago 2 replies      
Among other things, he was known for Fano's Inequality, which gives a limit for how well you can predict the value of a random variable based on knowledge of a related variable. Apparently this is quite useful in signal processing for communications over a noisy channel, where the signal that you receive is not quite the same as what was sent.

https://en.wikipedia.org/wiki/Fano%27s_inequality
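
For reference, a standard statement of the inequality (my paraphrase, not from the comment above): with H denoting entropy, \hat{X} any estimator of X computed from Y, P_e = \Pr(\hat{X} \ne X) the probability of a wrong guess, and |\mathcal{X}| the number of values X can take,

    H(X \mid Y) \le H_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr)

where H_b is the binary entropy function. In words: if X remains very uncertain even after observing Y (large conditional entropy), no estimator can achieve a small error probability, which is exactly the prediction limit described above.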

3
ChuckMcM 4 days ago 0 replies      
I did not realize he had developed CTSS. Folks in the AI lab developed ITS which was the "Incompatible Timesharing System" that had a user interface which was a REPL to the debugger (DDT at the time). They gave out 'tourist' accounts for anyone who wanted one and I used mine and met several people that way.

It sounds like he was way ahead of his peers in understanding the privacy aspects of future computers.

4
dorianm 4 days ago 0 replies      

 In those days [batch processing] programmers never even documented their programs, because it was assumed that nobody else would ever use them. Now, however, time-sharing had made exchanging software trivial: you just stored one copy in the public repository and thereby effectively gave it to the world. Immediately people began to document their programs and to think of them as being usable by others. They started to build on each other's work.
Robert Fano Scientist

http://www.inspiringquotes.us/quotes/pCID_em9jKLpk

5
j2kun 4 days ago 2 replies      
Also notable as the son of Gino Fano: https://en.wikipedia.org/wiki/Gino_Fano
6
jpmattia 4 days ago 0 replies      
In addition to the other contributions listed, we should add the Fano/Adler/Chu E&M series, which was one of the touchstones for an earlier gen of students. RIP
7
jmspring 4 days ago 1 reply      
Another passing in my college lineage. Studied under Glen Langdon (passed a few years back) and Huffman (a few years prior to that); I recall Shannon passing.

Glen Langdon and David Huffman I knew from UC Santa Cruz. Shannon and Fano, I only heard stories about. Jorma Rissanen is still alive at the age of 98.

Pillars in compression and information theory.

8
chronic81 4 days ago 1 reply      
I just wanted to chime in and hopefully prevent selection bias (for posting) and say:

Who?

9
cia48621793 4 days ago 1 reply      
Rest in peace, the father of mainframes.
30
Misfortune tesla.com
223 points by dwaxe  2 days ago   169 comments top 22
1
cjensen 2 days ago 11 replies      
From Tesla's statement: "contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all"

I think this is a foolish statement: fatal accidents in the US are 1.3 per 100M miles driven. Tesla reports autopilot has driven 100M miles. That's not enough data collected to draw this kind of conclusion.

There's a parallel here with software testing. I've seen bugs that happen 25% of the time, for example, and take 10 minutes to run a test. Our test guys have great intentions, but if they test 5 times and see no bug, they think the bug is gone. There is no instinctive understanding that they have insufficient data to draw a conclusion.
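
To make the sample-size point concrete, here is a rough back-of-the-envelope sketch (my own arithmetic; the 1.3-per-100M-mile baseline and the 25%/5-run bug example come from the comment above, everything else is an assumption):

    import math

    # 1) A bug that reproduces 25% of the time, tested 5 times:
    #    the chance of five clean runs is 0.75^5, so "no repro" is weak evidence.
    p_miss = 0.75 ** 5
    print(f"Chance of 5 clean runs despite a 25% bug: {p_miss:.1%}")  # ~23.7%

    # 2) If Autopilot were exactly as dangerous as the US baseline
    #    (1.3 fatal accidents per 100M miles), the fatality count over
    #    100M miles is roughly Poisson with mean 1.3:
    lam = 1.3
    p0 = math.exp(-lam)          # P(0 fatalities)
    p1 = lam * math.exp(-lam)    # P(exactly 1 fatality)
    print(f"P(0)={p0:.2f}  P(1)={p1:.2f}  P(<=1)={p0 + p1:.2f}")  # ~0.27, 0.35, 0.63

So observing one fatal crash in roughly 100M Autopilot miles is entirely consistent with the system being no safer than the human baseline; the data simply can't distinguish the two yet.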

2
Dwolb 2 days ago 10 replies      
I didn't really have a problem with Tesla or Autopilot's latest issues until I re-read this sentence:

>Autopilot was not operating as designed and as described to users: specifically, as a driver assistance system that maintains a vehicle's position in lane and adjusts the vehicle's speed to match surrounding traffic.

My problem is with Autopilot's branding - it's called AUTOPILOT.

The name isn't "Maintain Lane Position" or "Cruise Distance" or something boring that describes it better - it has AUTO in the name.

Typical drivers aren't airline pilots who complete thousands of hours in flight training and have heavily regulated schedules. We're just people who are busy and looking for solutions to our problems.

If Tesla doesn't want people to think Autopilot functions as crash avoidance/smart vehicle control/better than humans in all situations or blame Tesla for accidents (whether human or machine is at fault) it should come up with a less sexy name.

3
Animats 2 days ago 4 replies      
I haven't found US figures, but for the UK, motorway driving has a far lower fatality rate than non-motorway driving. "Although motorways carry around 21 per cent of traffic, they only account for 6 per cent of fatalities and 5 per cent of injured casualties. In 2015, the number of fatalities on motorways rose from 96 deaths to 110."[1] Since Tesla's "autopilot" is only rated for motorway (freeway) driving, it should be compared against motorway fatality rates, which are about a third of general driving rates. So a realistic estimate of the fatal accident rate for human freeway driving is maybe 0.3 per 100 million miles driven. Tesla is doing much worse than that.

To rate an automatic driving system, you want to look at accident rates, not fatality rates. Accident rates reflect how well the control system works. Fatality rates reflect crash survivability. Tesla needs to publish their crash data. That's going to be disclosed, because the NHTSA ordered Tesla to submit that information.

[1] http://www.racfoundation.org/motoring-faqs/safety#a5
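
For what it's worth, a quick sketch of that ratio (my own arithmetic, assuming the UK motorway figures transfer to US freeway driving, which is a big assumption):

    # Share of fatalities relative to share of traffic on motorways (UK figures from [1]):
    motorway_traffic_share = 0.21
    motorway_fatality_share = 0.06
    relative_rate = motorway_fatality_share / motorway_traffic_share
    print(f"Motorway fatality rate vs all-road average: {relative_rate:.2f}x")  # ~0.29x

    # Applying that ratio to the US all-road rate of 1.3 fatalities per 100M miles:
    us_all_road_rate = 1.3
    print(f"Implied freeway baseline: {us_all_road_rate * relative_rate:.2f} per 100M miles")  # ~0.37

which is roughly consistent with the "maybe 0.3 per 100 million miles" estimate above.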

4
nl 2 days ago 2 replies      
I don't get why people are so eager to defend Tesla autopilot. We've had Andrew Ng call it irresponsible[1] and Li Fei Fei say she wouldn't let it drive with her family in the car[2]. These aren't anti-tech luddites, but people with a very good understanding of the current state of the art.

I love Tesla, but they are SO weak at taking criticism or realising when they make a mistake.

[1] http://electrek.co/2016/05/30/google-deep-learning-andrew-ng...

[2] http://a16z.com/2016/06/29/feifei-li-a16z-professor-in-resid... (you'll need to listen to the podcast though)

5
_sentient 2 days ago 1 reply      
This is from July 6th BTW. There has since been a fair amount of back-and-forth on this between Musk and Stephen from Fortune. This episode has also granted us this particularly delightful AMA on reddit, wherein Stephen roundly ignores comments calling out the questionable links between recent Fortune coverage and the Kochs' ongoing crusade against renewable energy: https://www.reddit.com/r/IAmA/comments/4rqa6q/hey_i_am_steph...
6
cubaia 2 days ago 0 replies      
"That Tesla Autopilot had been safely used in over 100 million miles [...] That contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all."

That is such a weak statistical claim that it borders on the disingenuous.

Previous discussion: https://news.ycombinator.com/item?id=12082893

7
ghughes 2 days ago 0 replies      
Belligerent as usual. I wonder if Musk writes these himself.

edit: I'm being downvoted for this, but I wasn't using "belligerent" negatively here; I was wondering aloud whether Tesla's characteristically aggressive approach to damage control is the result of direct involvement from Musk. Doesn't seem that crazy to imagine that it is.

8
alfredxing 2 days ago 0 replies      
9
smegel 2 days ago 5 replies      
I don't get Tesla. What is so special about them? Why does everyone want one? If the demand for electric vehicles is so high, why haven't all the big makers already started offering electric versions of their little hot hatches? That way you get an electric vehicle AND a build quality based on decades and decades of quality control engineering. Autopilot seems to be a curio rather than a drawcard, but again, surely the big makers, with access to billions of metric readouts from existing cars, would be best placed to develop AI to control them. Is it the Elon Musk factor? Seems to be a strange reason to buy a car. The fan factor might be justified for an Apple car...but I just don't get the buzz and hype around Tesla.
10
taneq 2 days ago 0 replies      
I think their points here are valid, but I must admit that it's starting to shake my confidence the way that, every time something bad happens, they instantly respond with such strident defensiveness.
11
aerovistae 2 days ago 1 reply      
This is from 10 days ago. Why am I finding it at the top of HN now ?
12
euske 2 days ago 1 reply      
Is there such a thing as a universally acknowledged definition of "car safety"? When you start combining the words "statistically" and "safe", I feel that the statement loses its scientific rigor. To me "safe" is a very vague term and it's mostly used in a subjective context. It makes me wonder if their assumption was really valid when they were testing this feature.
13
abalone 2 days ago 0 replies      
Labeling this a "statistical inevitability" obscures the issue. It seems clear from numerous demos posted to YouTube that some autopilot users are very comfortable taking their hands off the wheel. That's an outright violation of the TOS. Yet, lots of people do it. Some erroneously refer to it as "hands free" mode.[1] Some even observe the fact that they're not supposed to do it while doing it.[2]

The human factors here are tough. But safe design needs to account for human factors. The enthusiast community seems especially prone to over-trusting the autopilot, and that's something Tesla should be examining in their safeguards.

[1] https://www.youtube.com/watch?v=2geQ4hvvkNA

[2] https://www.youtube.com/watch?v=8H1qUhpjE5M

14
free2rhyme214 2 days ago 0 replies      
Once again this confirms Aaron Swartz was right about the news - http://www.aaronsw.com/weblog/hatethenews
15
0xmohit 2 days ago 0 replies      
> there is no evidence to suggest that Autopilot was not operating as designed

Obviously, a dead man wouldn't be available to testify.

> That Tesla Autopilot had been safely used in over 100 million miles of driving by tens of thousands of customers worldwide, with zero confirmed fatalities and a wealth of internal data demonstrating safer, more predictable vehicle control performance when the system is properly used.

https://en.wikipedia.org/wiki/Lies,_damned_lies,_and_statist...

16
Theodores 2 days ago 3 replies      
> We self-insure against the risk of product liability claims, meaning that any product liability claims will have to be paid from company funds, not by insurance.

Why do they do this? I can understand it when government property is not insured, e.g. UK Civil Service, as the enterprise is so vast and general taxation can fill the gaps. I can also understand that some things can't be insured, e.g. nuclear power plants, but why does Tesla "vertically integrate" insurance, particularly given that the product is statistically likely to kill someone in due course?

17
18
pastagawins 2 days ago 0 replies      
It could end up in historic litigation.

You know the story of the microwave with the cat inside: the owner just wanted a quick way to dry a beloved pet.

It's the same thing with the guy driving his Tesla with Autopilot on. He just believed the marketing campaign.

19
XJOKOLAT 2 days ago 0 replies      
Of course, I sympathize with any casualties. However, this seems to be a question of responsibility in my eyes.

If you're not aware that potential dangers still exist when you step into a car, you shouldn't be driving the car (which is a shame as it's a fantastic car).

Sorry. Tesla is not at fault here - however much people want to see it that way.

The Model S is not some magical car designed by aliens. It's a machine. Problems may occur. We are not at the autonomous-vehicle stage yet. However, Autopilot is a damned comfortable upgrade compared to the old cruise control.

I can't believe people are blaming Mr Musk or the marketing department for people not taking responsibility or being careful when they get into a car. As they should in any car. Especially any car with autopilot like capabilities.

20
Alexey_Nigin 2 days ago 0 replies      
While I agree that Tesla's article is not perfectly logical and its marketing campaign is not impeccable, I would like to demonstrate that people at Tesla Motors have a point.

1. "STATISTICALLY SAFER" CUSTOMERS. Yes, this statement makes no sense. One fatal crash is not a large enough sample size to make such conclusion. However, this article was aimed not at Hacker News readers, but at average buyers. Most of them do not have a firm grasp of high school math, so for them "statistically safer" means just "don't worry." And indeed there are reasons for them to worry, given that independent news agencies continually publish hysterical things (It is a a wake-up call! Reassess self-driving cars! The crash is raising safety concerns for everyone in Florida! [1]). Tesla's response was nothing but a necessary defence. Or did you expect them to say, "You know, there are not enough data yet, so let's wait until 10 or so more people die, and then we will draw conclusions." This is much more logical, but I feel that customers wouldn't like it.

2. WHY IT IS CALLED "AUTOPILOT." This is just marketing. They couldn't sell it under the name "The Beta Version Of The System That Keeps Your Vehicle In Lane As Long As You Keep Your Hands On The Steering Wheel And Are Ready To Regain Control At Any Moment." And honestly, I do not think that even relatively stupid customers will just press the button and hope for the best without reading what the Autopilot is all about in advance.

In my opinion, it is now a difficult time for Tesla, and we should not criticise it for trying to stay afloat.

[1] http://www.vanityfair.com/news/2016/07/how-the-media-screwed...

EDIT: You might think that the phrase "trying to stay afloat" is unnecessary pathos, since a single crash, even coupled with a bunch of nonsense news articles, cannot lead to anything serious. However, history shows it can. In 2000, Concorde crashed during takeoff, killing everyone on board [2]. The event was caused by metal debris on the runway, not by some problem with the plane itself. Nevertheless, Concorde lost its reputation as one of the safest planes in the world. The passenger numbers plummeted, and Concorde retired three years later. That crash is the number one reason why it now takes 12 hours to get from Europe to America.

[2] https://en.wikipedia.org/wiki/Air_France_Flight_4590

21
calgaryeng 2 days ago 0 replies      
Sue them for libel then?
22
dayaz36 2 days ago 0 replies      
This is two weeks old and is on front page of HN for the first time on a sat night...a little late to the discussion
       cached 20 July 2016 04:11:02 GMT