Hacker News with inline top comments - 27 May 2016
1
Jury in Oracle v. Google finds in Google's favour twitter.com
1040 points by LukeB_UK  4 hours ago   246 comments top 34
1
grellas 3 hours ago 5 replies      
Law evolves and the law of copyright in particular is ripe for "disruption" - and I say this not as one who opposes the idea of copyright but, on the contrary, as one who strongly supports it.

It is right that the author of a creative work get protection for having conceived that work and reduced it to tangible form. Developers do this all the time with their code. So too do many, many others. Many today disagree with this because they grew up in a digital age where copyright was seen as simply an unnecessary impediment to the otherwise limitless and basically cost-free capacity we all have to reproduce digital products in our modern world and hence an impediment to the social good that would come from widespread sharing of such products for free. Yet, as much as people believe that information ought to be free, it is a fact that simply letting any casual passer-by copy and distribute any creative work with impunity would certainly work to rob those who may have spent countless hours developing such works of the commercial value of their efforts. I will grant that this is a social policy judgment on which the law could come down on either side. I stand with the idea of copyright protection.

Even granting the correctness of copyright as a body of law that protects certain property interests, there are still many abuses in the way it is implemented and enforced. Copyright terms have been extended to the point of absurdity, and certainly well beyond what is needed to give the original author an opportunity to gain the fruits of his or her labor. Enforcement statutes are heavy-handed and potentially abusive, especially as they apply to relatively minor acts of infringement by end-users. And the list goes on.

The point is that many people are fed up with copyright law as currently implemented and, when there is widespread discontent in society over the effects of a law, the time is ripe for a change.

I believe this is where copyright law is today.

The Bono law may have slipped through Congress with nary a dissent in its day but this will not happen again, whatever the lobbying power of Disney and others. And the same is true for the scope of copyright law as it applies to APIs.

Ours is a world of digital interoperability. People see and like its benefits. Society benefits hugely from it. Those who are creatively working to change the world - developers - loathe having artificial barriers that block those benefits and that may subject them to potential legal liabilities to boot. Therefore, the idea that an API is copyrightable is loathsome to them. And it is becoming increasingly so to society as a whole.

The copyright law around APIs had developed in fits and starts throughout the 1980s and 1990s, primarily in the Ninth Circuit where Silicon Valley is located. When Oracle sued Google in this case, that law was basically a mess. Yet Judge Alsup, the judge assigned to this case, did a brilliant synthesis in coming up with a coherent and logically defensible legal justification for why APIs in the abstract should not be protected by copyright. He did this by going back to the purpose of copyright, by examining in detail what it is that APIs do, and by applying the law in light of its original purpose. The result was simple and compelling (though the judicial skill it took to get there was pretty amazing).

Legal decisions are binding or not depending on the authority of the court making them and on whether a particular dispute is under the authority of one court or another when it is heard.

The decision by Judge Alsup is that of a trial judge and hence not legally binding as precedent on any other judge. It could be hugely persuasive or influential but no court is bound to follow it in a subsequent case.

The Federal Circuit decision that reversed Judge Alsup and held APIs to be copyrightable is not that of a trial judge and has much more precedential effect. Yet it too has limited authority. The Federal Circuit Court does not even have copyright as its area of jurisdiction. It is a specialty court set up to hear patent appeals. The only reason it heard this case was that the original set of claims brought by Oracle included patent claims, and this became a technical ground by which the Federal Circuit Court gained jurisdiction to hear the appeal. But there are many other federal appellate circuits in the U.S., and the Federal Circuit Court's decision concerning copyrights is not binding on them. There is also the U.S. Supreme Court. It has the final authority and its decisions are binding on all lower federal courts as concerns copyright law.

The point is that the battle over this issue is not over. It is true that the Federal Circuit decision was a large setback for those who believe APIs should not be subject to copyright. Yet there remains that whole issue of social resistance and that is huge. It will undoubtedly take some time but the law can and does change in ways that tend to reflect what people actually think and want, at least in important areas. No one has a stake in seeing that Oracle be awarded $9 billion in damages just because it bought Sun Microsystems and found an opportunity through its lawyers to make a big money grab against Google. But a lot of people have a stake in keeping software interoperability open and free and many, many people in society benefit from this. Nor is this simply an issue of unsophisticated people fighting the shark lawyers and the big corporations. Many prominent organizations such as EFF are in the mix and are strongly advocating for the needed changes. Thus, this fight over APIs will continue and I believe the law will eventually change for the better.

In this immediate case, I believe the jury likely applied common sense in concluding unanimously that, notwithstanding Oracle's technical arguments, the use here was in fact benign given the ultimate purposes of copyright law. I leave the technical analysis to others but, to me, this seems to be a microcosm of the pattern I describe above: when something repels, and you have a legitimate chance to reject it, you do. Here, the idea of fair use gave the jury a big, fat opening and the jury took it.

2
rayiner 4 hours ago 15 replies      
These are the statutory fair use factors the jury was required to consider (17 U.S.C. 107):

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

It's a somewhat surprising result, because two of the factors weigh heavily against Google (it's a commercial work, and was important to Android gaining developer market-share). Oracle's strategy going forward, both in post-trial motions and in any subsequent appeal, will be based on arguing that no rational jury could have applied these factors to the undisputed facts of the case and concluded that the fair use test was met.

It's also not a particularly satisfying result for anybody. If APIs are copyrightable, then I can't think of a better case for protecting them than this one, where Google created a commercial product for profit and there was no research or scientific motivation. It wasn't even really a case (like, say, Samba) where copying was necessary to interoperate with a closed, proprietary system. Dalvik isn't drop-in compatible with the JVM anyway.

That makes Oracle's win on the subject matter issue basically a pyrrhic victory for anyone looking to protect their APIs. They're protectable, but can't be protected in any realistic scenario.

And if you're in the camp that believes APIs should not be protected, this precedent--if it stands--means that you'll have to shoulder the expense of going to trial on the fair use issue before winning on the merits.

3
nabaraz 4 hours ago 2 replies      
My favourite part of the trial was when the judge told Oracle that a high schooler could write rangeCheck[1].

[1] https://developers.slashdot.org/story/12/05/16/1612228/judge...

4
phasmantistes 4 hours ago 2 replies      
I'd also just like to give huge props to Sarah Jeong for keeping up such a high-quality live stream of tweets over the course of the entire trial. That's reporting done right.
5
mythz 4 hours ago 2 replies      
Whew, Oracle's lawyers and blind greed don't get to destroy interoperability for the entire tech industry.

But the fact that Oracle could get this close, spinning deceit to a non-technical jury to decide whether using API declarations from an OSS code-base would in some universe entitle them to a $9B payday, is frightening.

6
davis 3 hours ago 1 reply      
If you found Sarah's coverage of the trial useful, she is accepting payments on PayPal since she was doing it with her own money: https://twitter.com/sarahjeong/status/731687243916529665
7
koolba 4 hours ago 1 reply      
Wow. I suddenly have a lot more faith in the courts and juries to land sane verdicts in technology trials. Still sad that it takes a billion dollar company to be able to stand up to this (as anybody smaller would be crushed by the trial expense) but let's celebrate it none the less.

Any lawyers around? I wonder if Google can claim legal expenses back from Oracle.

8
jhh 1 hour ago 1 reply      
What I don't understand about this: Why didn't Google/Android use Java under the open-source license under which it has been provided? Wouldn't that have saved all the trouble?
9
tostitos1979 3 hours ago 4 replies      
Despite the win, I think it would have been far better for the computer industry if Google had bought Sun. Unlike other companies with crap (IMHO ... Nokia, Motorola), Sun actually had stuff of value. This is a lesson that geeks get but I'm not sure MBAs do or will ever get.
10
Cyph0n 4 hours ago 0 replies      
Great news! This is a win for us software devs :)

I'd like to note that Ars Technica's coverage of the trial has been excellent throughout.

11
chatmasta 2 hours ago 1 reply      
What impact does this have on reverse engineering private APIs and reimplementing them? And selling those reimplementations?

Can I reverse engineer the private API of a mobile app, then implement my own client to talk to its servers?

What if I create my own "bridge" API to talk to the private API? Can I then sell access to the bridge API, allowing developers to use the private API of the app through my service?

And how does this relate to, e.g. running private world of warcraft servers with modded code that allows purchasing in-game items? (See http://www.themarysue.com/blizzard-private-server-lawsuit/)

12
shmerl 4 hours ago 0 replies      
Congratulations! It's a pity that previous decision declared APIs copyrightable. This never should have happened. But at least fair use worked out.

I wonder though how universal that ruling would be. Is any reimplementation of APIs going to be fair use, and if not, what are the criteria?

13
BinaryIdiot 4 hours ago 0 replies      
Great news to a degree. It still means APIs can be copyrighted, which is a bit unfortunate in my opinion. But they won on fair use, which is still a victory.

Anyways I wonder how long this is going to keep going on for as I'm assuming Oracle will appeal.

14
grizzles 4 hours ago 0 replies      
APIs are still copyrightable according to the Federal Circuit Court of Appeals. That's not great, and I hope Congress does something about it for the other languages (e.g. C#/.NET) that haven't yet been whitelisted as fair to use by the judicial system.
15
musesum 3 hours ago 0 replies      
> "For me, declaring code is not code," Page said.

Unless, of course, the declaring code is declaring declaring code, as in Prolog and its ilk.

16
Analemma_ 4 hours ago 1 reply      
Now we have to hope this doesn't get overturned by a Circuit judge like it did before. Still, this is excellent news.
17
blacktulip 4 hours ago 2 replies      
Excuse me here but I have to ask. Is this final? Because I've read that Oracle won the case some time ago.
18
zerocrates 4 hours ago 4 replies      
See, as ever, Florian Mueller for a... different perspective: http://www.fosspatents.com/2016/05/oracle-v-google-jury-find...
19
bitmapbrother 3 hours ago 0 replies      
Oracle will likely appeal, but they'll lose again. Overturning a unanimous jury verdict is very difficult.
20
mark_l_watson 3 hours ago 0 replies      
I remember that the afternoon desserts served in Google restaurants when I worked there were very tasty - I hope everyone is celebrating with a good snack :-)

Seriously, I think this is a good verdict. I think that Oracle has been doing a good job shepherding Java, but this lawsuit really seemed to me to be too much of a money grab.

21
yeukhon 24 minutes ago 0 replies      
I for one would like to have a public digital recording of the actual trial available...
22
pavpanchekha 4 hours ago 7 replies      
This is possibly my best-case scenario. APIs are copyrightable (so says the Federal Circuit, with the Supreme Court declining to review), and this seems reasonable, since some APIs really are very good and treating them like an artistic work has benefits. But implementing them is fair use, preserving the utility of APIs for compatibility. Great news!
23
brotherjerky 4 hours ago 0 replies      
This is fantastic news!
24
cm3 4 hours ago 0 replies      
So, APIs are still considered copyrightable, and that was a different trial, right?

And was Google now ruled okay to use that single, small function, or what was this about?

A little more info would be nice for those who aren't following this closely.

25
jcdavis 4 hours ago 2 replies      
What is the room for appeals here?

This is a massive ruling, here's hoping it stands.

26
satysin 3 hours ago 0 replies      
So is this really over for good now? Can Oracle appeal and drag this on for another decade?
27
spelunker 4 hours ago 1 reply      
Is this legit? If so, thank goodness.
28
steffenfrost 2 hours ago 0 replies      
Would swizzling methods violate copyright?
29
EddieRingle 4 hours ago 1 reply      
Hopefully soon we can stop focusing on legalities and get back to building cool stuff.
30
crispyambulance 4 hours ago 0 replies      
Perhaps the jury was not as clueless as some here were assuming?
31
ShaneBonich 4 hours ago 0 replies      
That was expected. Happy for Google.
32
jrochkind1 2 hours ago 0 replies      
thank god.
33
VeejayRampay 4 hours ago 0 replies      
Nelson Muntz would rejoice at this verdict. Oracle's claim was laughable.
34
suyash 4 hours ago 3 replies      
Today is a sad day for Silicon Valley. Our legal process has demonstrated how incompetent it is when it comes to Technology IP protection.
2
Twilio S-1 sec.gov
303 points by kressaty  8 hours ago   177 comments top 22
1
Animats 7 hours ago 11 replies      
There's a lot there not to like:

 Revenue:  $166,919,000
 Net Loss: $38,896,000
So they're still not profitable. This is surprising, since they don't have any big capital investments. They're not doing anything that takes a lot of R&D. The thing runs on Amazon AWS. They've been operating for years and should be profitable by now. Yes, they're growing fast, but the costs don't rise in advance of the growth. You don't have to prepay Amazon for AWS.

"Each share of Class A common stock is entitled to one vote. Each share of Class B common stock is entitled to 10 votes and is convertible at any time into one share of Class A common stock."

So the public stockholders have no power. The insiders can't be fired. Google and Facebook did that, but they were big successes before the IPO. It's unusual to try to pull that off when you're unprofitable. The NYSE, on which they want to list, didn't allow multiple classes of stock until 1986.

WhatsApp is only 15% of their revenue, so that's not a big problem.

Twilio's big thing is telephony integration. They have a SS7 gateway and can integrate Internet and telephony. If Amazon or Google offered that, Twilio would have a big problem. Google has Google Voice and Google Hangouts, but doesn't offer telephony integration via a usable API. Yet.

This IPO is an exit for their VCs. They were all the way up to a series E round, and since they grew fast by losing money, the early investors had to pour in a lot of cash.

2
markolschesky 7 hours ago 4 replies      
Had no idea that WhatsApp was even a Twilio customer let alone one of its largest.

>We currently generate significant revenue from WhatsApp and the loss of WhatsApp could harm our business, results of operations and financial condition.

>In 2013, 2014 and 2015 and the three months ended March 31, 2016, WhatsApp accounted for 11%, 13%, 17% and 15% of our revenue, respectively. WhatsApp uses our Programmable Voice products and Programmable Messaging products in its applications to verify new and existing users on its service. We have seen year-over-year growth in WhatsApp's use of our products since 2013 as its service has expanded and as it has increased the use of our products within its applications.

>Our Variable Customer Accounts, including WhatsApp, do not have long-term contracts with us and may reduce or fully terminate their usage of our products at any time without penalty or termination charges. In addition, the usage of our products by WhatsApp and other Variable Customer Accounts may change significantly between periods.

3
calcsam 6 hours ago 2 replies      
Revenue is great; sales & marketing costs are fine, competitive environment is great; the main concern here is the cost of ongoing service.

Revenue: Twilio made $166M in 2015. From Q1 2015 to Q1 2016, they grew 80% -- so we can project a 2016 revenue of around $300M. At that pace, they'll hit ~$1B in 2018 or 2019.
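Spelled out, that projection is just compound growth with the commenter's 80% rate held constant (a rough sketch, nothing more):

  // Rough compounding sketch; the 2015 base and 80% growth rate come from the
  // comment above, and holding the rate flat every year is an assumption.
  let revenue = 166; // $M, 2015
  for (let year = 2016; year <= 2019; year++) {
    revenue *= 1.8;
    console.log(year, Math.round(revenue)); // 2016: ~299, 2017: ~538, 2018: ~968, 2019: ~1743
  }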

Landscape: They have very few competitors, in contrast to other high-profile enterprise startups like Box.

Cost of revenue: Their cost of revenue -- servers, telecom bandwidth, customer support -- is ~45% of revenue. Typical SaaS startups run around 20-30%. I suppose this is the danger of being in the telecom space -- you do have high data costs.

Sales & marketing: Coming in at ~$50M, or ~30% of revenue is quite reasonable. Box raised concerns a couple of years back when S&M were 125% of revenue; they were able to get it down to 65% or so and then they IPO-ed. 30% is fine.

4
breaker05 7 hours ago 19 replies      
Good for them! I love their service and use it on GoatAttack.com
5
stanmancan 7 hours ago 8 replies      
This is the first time I've ever looked through an S-1 before, but in the Risks section they say:

 We have a history of losses and we are uncertain about our future profitability
Is it normal to go public when being uncertain if you'll ever be profitable?

6
rottencupcakes 7 hours ago 0 replies      
Looking at their escalating losses, I have to wonder if this IPO is a desperation play after failing to raise private money at an acceptable valuation in the current climate.
7
andyfleming 5 hours ago 1 reply      
I'm surprised all of these comments are so negative. Yeah, maybe there are some things that don't quite add up yet, but they have great developer engagement, solid services, and are actually innovating in the space. Twilio is a winner. When or how much is another question.
8
liquidise 7 hours ago 6 replies      
I continue to be surprised by the trend of companies that have fallen short of profitability filing for IPOs. So often I find myself asking "is this the best for the company and its employees, or is it best for the investors who want a faster return, at the expense of the company and its employees?"
9
karangoeluw 7 hours ago 1 reply      
I love Twilio, but I'm a little surprised they lost $35M on $166M rev in 2015. I thought they were very close to profitability.
10
nickbaum 3 hours ago 1 reply      
It's hard to overstate how much easier the Twilio API has made it for developers to interact with SMS and phones.

When I first built StoryWorth, it only worked over email because I thought voice recording would be too complex (both to use and to implement).

However, users kept asking for it so I finally bit the bullet... and it was way easier than I expected. Using the Twilio API, I had voice recordings over the phone working within days.

The team is also super friendly. Less than a month after our launch, someone reached out to me and they wrote about our company on their blog:

https://www.twilio.com/blog/2013/05/old-stories-new-tech-sto...

Really glad to see their continued success!

11
anotherhacker 5 hours ago 0 replies      
"Disrupt" appears 18 times in this filing. Is that code for "one day we'll be profitable, I promise!"
12
tqi 5 hours ago 0 replies      
Does anyone have recommendations for resources/guides to S-1s (ie what to look for, which sections are usually boilerplate, what is normal/abnormal, etc)?
13
hamana 7 hours ago 6 replies      

 > Jeff Lawson(1) --- 8,623,617 --- 11.9%
How pathetic is this? Around 90% of the company is taken by the vulture capitalists and you, as a founder, only get to keep 12%. Bill Gates at the time of Microsoft's IPO had around 50% of the company.

14
tschellenbach 7 hours ago 0 replies      
We use Twilio for our phone support over at getstream.io; it was really easy to set up. Makes it fun to build this type of stuff :) Congrats to their team!
15
patrickg_zill 5 hours ago 0 replies      
I am skeptical, a bit, of their long-term ability to fight off margin compression.

What do people use them for?

If minutes of calling, inbound or outbound, that is trivial in terms of switching costs.

If SMS, there are plenty of competitors, including teli.net (disclaimer: I know people who work there); I haven't directly compared pricing, however.

The real question to ask is how much lockin they have managed to generate. Without lockin they will eventually suffer margin compression.

16
shaqbert 6 hours ago 0 replies      
Interesting that they are "selling" base revenues. After all total revenues in 2015 were $167m, and base revenues only $137m.

The definition of the $30m "missing" revenues seems to indicate that this piece of the business might churn at any moment, or is just a brief "burst" of revenues w/o the transactional nature of SaaS.

Depending on the lumpiness of these bursts, that is a smart decision to "ring-fence" in the reporting of an otherwise sound recurring business. Guess this is a good CFO here...

17
joshhart 7 hours ago 1 reply      
Not even cash flow positive. Stay away.
18
josefdlange 7 hours ago 2 replies      
As a naive nincompoop, is there any way for me to guess the initial price of a share when they become available?
19
cmcginty 3 hours ago 0 replies      
Did they actually set the expected value of the stock to $.001/share?
20
cissou 7 hours ago 2 replies      
If someone with better knowledge of S-1s could shed light on this: where can we see the option pool / option grants awarded to employees? Must be somewhere in there, no?
21
krschultz 7 hours ago 0 replies      
Stop spamming for Paysa. 100% of your comments are blatant advertisements.
22
dang 4 hours ago 3 replies      
We detached this subthread from https://news.ycombinator.com/item?id=11780119 and marked it off-topic.
3
Symantec Issues Intermediate CA Certificate for Blue Coat Public Services crt.sh
125 points by AaronFriel  3 hours ago   43 comments top 13
1
AaronFriel 3 hours ago 1 reply      
I'm posting this because within the past year, Symantec has gotten in hot water for issuing rogue certificates[1]. While Symantec has agreed to certificate transparency, Blue Coat is a known operator of MITM services they sell to nation-states, and this certificate would allow Blue Coat to issue arbitrary MITM certificates.

It's not clear to me why Blue Coat would need to be a trusted CA by all systems and browsers, but given their own checkered history[2] I don't think it would be unreasonable to suggest they're going to use this for MITM purposes.

[1] - https://security.googleblog.com/2015/09/improved-digital-cer...

[2] - https://en.wikipedia.org/w/index.php?title=Blue_Coat_Systems...

2
Aoreias 1 hour ago 3 replies      
This isn't necessarily as nefarious as it seems - Blue Coat is going to have to comply with Symantec's Certification Practice Statement (CPS), which prohibits the issuance of MitM certificates. In all likelihood it's to allow Blue Coat to roll out a service that allows it to create certificates for clients of its security services. Any deviation from that CPS would necessitate revoking this intermediate certificate.

That said, I'm quite curious whether Google is going to require that Blue Coat submit all issued certificates to Certificate Transparency logs like the rest of Symantec's certificates[0].

[0] https://security.googleblog.com/2015/10/sustaining-digital-c...

3
bigiain 2 hours ago 2 replies      
I wonder just how many certs I'd notice failing if I pulled Symantec's root out of my keystore - and if I'd get any mileage contacting the sites that end up broken and explaining why.

This is exactly the sort of thing for which I'd like to see the "CA death penalty" seriously considered against Symantec - but I fear they're going to be judged "too big to fail". A grass-roots campaign of contacting sites (especially sites I've got paid accounts with) saying "Sorry, I can't use your site anymore because I've had to disable Symantec's root keys (see this link for reasons), can I please cancel my billing." might be the only thing I can do.

(Oh, and joy! https://www.apple.com is secured by a Symantec cert for me right now. How much would you bet against all my Mac OS X and iOS software updates also being "secured" that way?)
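
One rough way to estimate the blast radius before actually pulling the root (a Node.js sketch; the host list and the issuer-name match are assumptions, and it only inspects the chain each server happens to present):

  const tls = require('tls');
  const hosts = ['www.apple.com', 'news.ycombinator.com']; // sites you care about

  for (const host of hosts) {
    const socket = tls.connect(443, host, { servername: host, rejectUnauthorized: false }, () => {
      // Walk the presented chain; getPeerCertificate(true) links each cert to its
      // issuer, and the self-signed root points back at itself.
      let cert = socket.getPeerCertificate(true);
      const issuers = [];
      while (cert && Object.keys(cert).length) {
        issuers.push((cert.issuer && cert.issuer.O) || '');
        if (cert.issuerCertificate === cert) break;
        cert = cert.issuerCertificate;
      }
      const hit = issuers.some(o => /Symantec|VeriSign|thawte|GeoTrust/i.test(o));
      console.log(host, hit ? 'chains to a Symantec-family CA' : 'does not');
      socket.end();
    });
    socket.on('error', err => console.error(host, err.message));
  }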

4
vardump 2 hours ago 0 replies      
What was Symantec thinking? Making decisions that undermine their whole business?

Unless there's something that can explain this in a better light, it's time to untrust Symantec brands and stop purchasing their certificates. Is it really true that VeriSign, one of the pioneers of the CA industry, cannot be trusted anymore?

Remember that also Thawte is their brand.

Incredible.

5
0x0 30 minutes ago 0 replies      
You know, the main job for a web CA is to verify the owner of a domain. What if... domain registrars had that job instead? They definitively know the domain registrant, no need to play games with email verification tokens or http challenges.
6
taylorbuley 2 hours ago 0 replies      
Assad's Syria, among other newsworthy nations, is reportedly a Blue Coat customer.
7
0x0 2 hours ago 0 replies      
It really sucks that there's no way to block this intermediate CA or even the root symantec CA on iOS; the type of roaming device where it's most needed (and most likely to be used with random wifi).
9
supergeek133 2 hours ago 0 replies      
This is very much a case of "the who not the what"... they've been in some shady dealings with countries that explicitly have goals to deny freedom of information.
10
knd775 2 hours ago 0 replies      
Oh hell. This can't be good.
11
StavrosK 2 hours ago 2 replies      
So, how much of my internet will break if I distrust the Symantec cert on my machine?
12
CiPHPerCoder 3 hours ago 2 replies      
https://archive.is/FEZfj just in case this goes down.

How to untrust this certificate: https://blog.filippo.io/untrusting-an-intermediate-ca-on-os-...

13
mtgx 1 hour ago 1 reply      
Isn't the deadline for Google removing Symantec from Chrome supposed to be this June? (if they don't adopt CT for all of their certificates by then, that is)
4
Did the Clinton Email Server Have an Internet-Based Printer? krebsonsecurity.com
78 points by whbk  1 hour ago   86 comments top 13
1
untog 18 minutes ago 3 replies      
Among the more disappointing things in all of this is that there is a rational, important conversation to be had about everyday awareness of security and government inflexibility. But there won't be, because she is Hillary Clinton and it is 2016.

Supposedly she got the server set up because the NSA refused to give a politician who travels frequently a secure smartphone. She (I personally believe) was likely ignorant of many of the security requirements of such a server (even one set up for unclassified e-mail), as was whoever set it up. And no-one on her staff either knew enough or was willing enough to say anything. She is also supposedly not the first Secretary of State to have an arrangement of this nature.

This feels like the very definition of systematic failure and clearly needs to change. But the conversation is almost exclusively based around a) her having nefarious motivations, because she is Hillary Clinton, or b) this all being a Republican plot to derail the Democratic candidate for President.

It's all very depressing.

2
Jerry2 1 hour ago 1 reply      
Here's some more details about the state of security of her private server [0]:

>Outlook Web Access, or OWA, was running on port 80 without SSL (unencrypted)

>Remote Desktop Protocol, port 3389, was exposed through the DMZ (open to anyone on the internet.) This, at the time it was being used, was open to critical vulnerabilities that would allow for remote execution of code.

>VNC Remote Desktop, port 5900, was also exposed through the DMZ.

>SSL VPN used a self-signed certificate. This isn't inherently bad, but left them open for "spearphishing" attacks, which have already been confirmed to be received by Hillary Clinton and her staff

It's also interesting how they responded to attacks on the server [1]:

>Here is the section from page 41 of the report which references an attack:

> On January 9, 2011, the non-Departmental advisor to President Clinton who provided technical support to the Clinton email system notified the Secretary's Deputy Chief of Staff for Operations that he had to shut down the server because he believed "someone was trying to hack us and while they did not get in i didnt [sic] want to let them have the chance to." Later that day, the advisor again wrote to the Deputy Chief of Staff for Operations, "We were attacked again so I shut [the server] down for a few min." On January 10, the Deputy Chief of Staff for Operations emailed the Chief of Staff and the Deputy Chief of Staff for Planning and instructed them not to email the Secretary anything sensitive and stated that she could explain more in person.

[0] https://np.reddit.com/r/politics/comments/4j2r94/judicial_wa...

[1] http://lawnewz.com/high-profile/clinton-tech-says-private-em...

3
zaroth 1 hour ago 3 replies      
The emails themselves sent from Clinton's server were unencrypted for several months, so unencrypted printing is just more of the same.

There's no reasonable question anymore that laws on handling classified data were broken, the only question is will charges actually be brought?

4
slantedview 35 minutes ago 2 replies      
One of the commenters on the Krebs post makes a remarkable point [1]:

"It gets better. Do a dig mx clintonemail.com. Youll see that the machines incoming email was filtered by mxlogic.net, a spam filtering service that works by received all your emails, filtering out the spam, and forwarding you the rest.

This is because the hosting provider, Platte River Network, sold a package along with the hosting. The package included spam filtering and full-disk off-site backup (since then seized by the FBI).

So every email received by Clinton was going through many unsecured places, including a spam filtering queue, a backup appliance and an off-site backup server. Which has already been documented."

[1] http://krebsonsecurity.com/2016/05/did-the-clinton-email-ser...

5
drakefire 1 hour ago 7 replies      
This story just keeps getting better. There is either a grand nefarious plot, or worse, horrific incompetence. I just can't find a third possibility.
6
mindslight 25 minutes ago 1 reply      
I really want to like Clinton for running her own server, respecting the decentralized basis of the Internet. Yet her domain name was clintonemail.com? What a pleb! Political corruption and murder is her family business, yet she can't be bothered to obtain a better online identity even with those capabilities? She may as well have been at hotmail or gmail and highlighted in blue!
7
internaut 4 minutes ago 0 replies      
The US government should give Guccifer the Medal of Honor. This is a farce.
8
dmritard96 43 minutes ago 0 replies      
Also curious about USB - are there any USB logs, and is that something logged by whatever OS her server was running? Seems like it would have been really easy for things to move from email to USB...
9
ghostly_s 1 hour ago 2 replies      
Does this really indicate any private correspondence was printed via the internet? Even if a printer was set up which _was_ writable via this web address, that doesn't mean that emails from the email server itself were printed to that address rather than directly to the device, does it? In fact, presumably the printer and email were hosted on the same server so it doesn't make much sense to me that they would send one to the other via the web address.
10
jrcii 1 hour ago 1 reply      
Any time in the last 10 years that I set up an independent email server, it had horrible deliverability rates. I wonder how they worked around that. Getting your server whitelisted with all the major providers is a major hassle.
11
jaboutboul 42 minutes ago 0 replies      
Bernie 2016?
12
Esau 1 hour ago 1 reply      
Am I the only one who dislikes the domain name itself? Every time I see it, I read it as "Clint One Mail", not "Clinton Email".
13
mergy 1 hour ago 1 reply      
Other very serious concerns:

1. Was it running RAID? If so, what level? Better not be RAID 5. Horrible write speed.

2. Let's REALLY dig into the DNS. What about reverse lookups and CNAMEs.

3. Any idea what the screensaver was? I'll reserve judgement until I have some confirmation.

4. NIC driver version: Hearing that she just ran a generic MS driver for the Intel dual network card. Unbelievable.

5
React Tutorial: Cloning Yelp fullstackreact.com
304 points by jashmenn  9 hours ago   166 comments top 33
1
swanson 7 hours ago 7 replies      
I'm no Javascript Boilerplate Fatigue apologist, but there are many comments in here that are treating this as a "Learn how to use React" tutorial. This is not what is advertised nor the stated reason this article was written.

From the first sentence: "we get a lot of questions about how to build large applications with React and how to integrate external APIs". Large application structure, external integrations... not "Hello World".

Of course there is no need for multiple environments, or a robust testing setup, or an icon font, or 6 different webpack loaders to learn the basics of React. This is not a tutorial on the basics of React -- this is written from the perspective of a large production level application. And that "overhead" and "insanity" are helpful for real projects that are staffed by more than one person and need to last more than a week.

It seems absurd that it might take you 2 hours to setup the environment before you get to hello world. But if you are planning to use this as the foundation of a 6 month long project, 2 hours to get everything setup in a sane way is not an issue.

There is absolutely churn in the JS world and some of it does seem unbelievable because things haven't contracted and settled into a steady-state. But don't just levy a blanket "lol fuck javascript tooling" every time you see a complicated setup process; you are doing yourself a disservice by not critically evaluating the choices the author of this post made and determining for yourself if they will benefit your project and/or learning efforts.

2
eterm 8 hours ago 18 replies      
To me this epitomizes what I feel as I'm trying to explore options for different front-end frameworks. In this article I'm 30 screens down (literally 30 page-down presses) and it hasn't even finished setting up the environment and dependencies.

Sure, this is something that you only do once, so if it then leads to much better development workflow it makes sense to have a solid investment upfront, but it makes it very hard to compare technologies and make informed decisions given how much up-front "glue" is needed to get a demo application up and running.

I'm not blaming React; everything else in the JavaScript space is just as bad right now. There are at least 4 different ways of loading modules (UMD, AMD, CommonJS, ES (Babel) modules), 2 of which (require(), import ... from) are used within this example.
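For anyone who hasn't met the two styles this example mixes, a minimal sketch of each (file names are illustrative):

  // CommonJS, consumed with require() (Node, or webpack without transpiling):
  // math.js
  exports.add = (a, b) => a + b;
  // app.js
  const { add } = require('./math');
  console.log(add(2, 3)); // 5

  // ES2015 modules, the `import ... from` form that Babel transpiles:
  // math.js
  export const add = (a, b) => a + b;
  // app.js
  import { add } from './math';
  console.log(add(2, 3)); // 5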

In fact the whole process is so complex part of the process is "clone this package which has pre-configured parts of the process".

And all of this to glue together a couple of APIs.

3
qudat 1 hour ago 0 replies      
I've built a multitude of websites/applications in my lifetime spanning many different programming languages, build systems, tools, etc., from PHP to Python to Go to Java and JS. It seems painfully obvious to me that the people complaining about this setup/build system do not fully grasp the power and beauty of the JS ecosystem, more specifically React, Redux, time traveling, and hot module reloading. It is, without competition, the best development experience I have ever been a part of. There is no lag time between when I make a change in JS or CSS and when those changes get applied to the application. There's no compiling (even though there is a compile step), no refreshing the page, no stepping through all your previous steps to get to that part of the application; your changes are patched in real time and available to you automatically.

I guess the saying is true, that haters are going to hate, but there really is no competition in terms of development experience once you grok the ecosystem.

4
pjs_ 8 hours ago 3 replies      
This genuinely made me feel ill. The author has done a tremendous service to others - clearly and patiently listing the thousands of steps required to get a basic modern web app up and running. I agree with others that it is often difficult to find all the steps for a process like this in one place. At the same time, this is completely, totally fucking insane.
5
keyle 2 hours ago 1 reply      
I've dealt with many technology stacks. If this is the future, we're F#$$%.

Seriously, how can people blast older technology like Flash and Flex (which was GREAT) out of the water for not using web standards, while this Frankenstein of a "stack" goes so mainstream?

Sure, there is the VM problem, but the language was good and so was the framework. This looks horrendous and scary. Imagine maintaining this for the next 10 years, when "the standards" will have moved on to your "next gen" JavaScript "framework".

My only way to write web apps has been using micro frameworks, jQuery, and small libs that do one thing and one thing only. I can handle serving pages with Go, thanks, and I don't need a routing system that looks like a vintage joke. Sorry if I sound jaded; I've been doing this for 15 years.

6
eagsalazar2 3 hours ago 1 reply      
The criticism here is really baffling in how it blames the JS ecosystem for the complexity of building large production apps.

I wouldn't find fault with a similar tutorial for "how to set up a production, at-scale Rails app with only server-side rendering and zero JavaScript". Between hosting, caching, deployment, worker queues, database provisioning, etc., LOL that tut would be gigantic, and that makes sense!

If people are mad that making production client side applications isn't trivially easy, well that just isn't going to happen and that isn't because the js ecosystem is screwed up.

7
rvanmil 3 hours ago 1 reply      
I've been on the fence for quite a while, but a couple of weeks ago I finally bit the bullet and taught myself webpack, ES6, React, CSS modules, Redux, and everything else that comes along with these tools. It definitely felt like relearning my job (coming from working with Grunt, Backbone and jQuery) and it took a lot of time and effort to really get to understand everything (still learning many new things each day), but man was it worth it. I enjoy working with these tools very, very much and I am able to build apps significantly faster than I could before.
8
Cyph0n 7 hours ago 7 replies      
Great work on the tutorial. I'm sure it took a lot of time to setup, and it seems well written.

However I simply won't believe that setting up a simple React app requires so much overhead. Granted, I have no experience with React, and only marginal experience with frontend web dev.

As I read the tutorial, this is the list of questions I had:

1. Why do we need so many Babel presets? What do they do?

2. Why do we need Webpack exactly? Why not use a traditional build system like Gulp?

3. Why is Webpack so difficult to setup? Are there no pre-configured setups for React?

4. What the hell is postcss? Are Less and Sass out of fashion now?

5. And why all this added complexity to set up CSS? They are only stylesheets for God's sake!

6. Oh, so now we need to configure Webpack to support postcss? The definition of reinventing the wheel. Is there no plugin system for Webpack?

7. Why is it so complicated to set up multiple environments using Node and Webpack?

Phew, looks like we're done -- nope, we're not.

8. So many libraries just to setup a testing environment? I wouldn't be surprised if frontend apps aren't well tested...

9. Ah, we also need a "JSON loader", whatever the hell that is.

10. Great, another CLI tool for testing. And more configuration of course.

11. Webpack once more needs to be configured to support our new testing app.

12. We need a better spec reporter? Why? More configuration...

13. More Webpack configuration.. I'm already sick of it.

So many things to keep in mind, so many dependencies, so very many points of failure. If just one of these libraries is abandoned, or has a breaking change, your entire development environment is dead. Is this the current state of frontend web dev, or are these guys just overdoing it for the sake of the tutorial?

I find this all weird because I have the habit of thinking very carefully about every single dependency when I'm writing software. Do I really need it i.e. can the same task be achieved using the standard library? If not, how active is the development of the library (recent activity, issue response time, number of contributors)? How many libraries does it depend on - the fewer, the better? And even with all this, it's still not guaranteed that things will go smoothly!

9
showerst 9 hours ago 0 replies      
This is a really great tutorial, it's rare to find pieces that step you through the whole process with dev/prod and tests without assuming you already understand the arcane setup.

It also shows what an arcane dependency hell React is... how much boilerplate does it take to get a map up with a valid route? I hope this is something that becomes a bit more standardized/easier as the ecosystem evolves.

10
blueside 8 hours ago 3 replies      
The setup required just to get a production-ready idiomatic `hello world` app in React is downright insane. Without the Facebook name behind it, I don't see how React could have ever made it this far.

Foolish of me to keep underestimating the pains that JS developers willingly tolerate.

11
hammeiam 7 hours ago 1 reply      
I also got frustrated by the setup time for a simple React app, so I made a starter repo that I just clone for all my new projects. It's made to be as simple, light, and understandable as possible while still being useful. Check it out and let me know if you have any questions! https://github.com/hammeiam/react-webpack-starter
12
pacomerh 3 hours ago 2 replies      
Cool tutorial, but first check whether React is what you really need for your next thing. Facebook created React to solve the problem of dealing with large applications with data that changes over time. React became very popular, to the point where the word gets spread that this is the newest and coolest thing to learn. At this point the core idea and key aspects of why React is cool get misunderstood, and we start assuming that React is the best choice for everything, and the Virtual DOM is so cool, and done! So now apps that could be done in 1/3 of the time if you just used a simpler library or even vanilla JavaScript are being written with all these complex flows, dispatching actions and thunking async operations, when all you really needed was to display a list of things with Ajax... I'm not saying React is not cool. I'm just saying: understand why React is relevant today and decide if it's the right thing for your project. Of course these tutorials are gonna be using simple examples that are unrealistic. They're meant to help you understand how things connect with each other, but are you building a project that is worthy of all this setup?
13
zeemonkee3 1 hour ago 0 replies      
The anger in this thread does not bode well for the future of React.

I predict big, complex, arcane React stacks will be a punchline in a few years, much like J2EE/EJB is today.

And yes, I know React itself is a small library - Java servlets were a small, simple API that formed the foundation for a ton of over-engineered abstraction on top.

14
missing_cipher 1 hour ago 0 replies      
I don't understand what people are saying; the basic Hello World app is right here: https://www.fullstackreact.com/articles/react-tutorial-cloni...

Like 1/15th of the guide down.

15
bdcravens 5 hours ago 0 replies      
I know there's a lot of dog-piling on this tutorial, but having gone through fullstack.io's Angular 2 book, if I wanted to learn React, I'd probably (and probably will) go with their title.
16
morgante 7 hours ago 1 reply      
People should stop complaining about how "hard" it is to get started or how "complicated" React is.

React is simple. Its core abstraction is trivial (making views into pure functions).

If you want to, you can get started with React today without installing any software at all. Just include it from the CDN: https://cdnjs.com/libraries/react/
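
For example, with React and ReactDOM pulled in from that CDN as globals, a hello world needs nothing beyond a script tag (a minimal sketch against the React 15-era API; the element id is an assumption):

  // Assumes <script> tags for react and react-dom from the CDN,
  // plus a <div id="root"></div> in the page. No JSX, so no Babel or build step.
  var element = React.createElement('h1', null, 'Hello from React');
  ReactDOM.render(element, document.getElementById('root'));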

The rest is there because things like live-reloading are genuinely helpful. But you don't need to roll them yourself. There are dozens of great boilerplates you can base a new project off of.

Also, I've never had as much difficulty setting up a React environment as the constant struggle it is to get even a basic Java app to build from source.

17
ErikAugust 8 hours ago 1 reply      
We (Stream) are releasing a React/Redux tutorial series - you build a photo sharing app (Instagram clone) - for those who might be interested:

http://blog.getstream.io/react-redux-example-app-tutorials-p...

18
sergiotapia 5 hours ago 1 reply      
Meanwhile here's how you work with React in a Meteor application.

https://github.com/nanote-io/nanote-web

No need to mess around with plumbing because honestly, who cares about that stuff.

19
dandare 3 hours ago 2 replies      
There is something wrong with me, but I cannot/refuse to use the command line. It has no discoverability and no visible system-state information. Every time I try to use it I start to panic because I cannot see my files, I cannot see the state of my task, I cannot see anything. Maybe I have some form of dyslexia. Anybody out there with similar symptoms? Does anybody know a remedy for my problem, please?
20
hathawsh 7 hours ago 0 replies      
The complexity of this tutorial reflects the current state of web app development more than the complexity of React. React by itself is actually rather simple on the surface.

Even though I don't need this tutorial to be productive, I think I'm going to go through it anyway to fill in holes in my knowledge. It looks well written.

21
freyr 7 hours ago 0 replies      
All this effort to create a single page app that likely doesn't need to be a single page app in the first place.

If you're jumping through hoops to build an app that looks and feels like a traditional web site, you're doing it wrong.

22
drumttocs8 5 hours ago 2 replies      
I think most everyone agrees that the amount of work to get all these pieces glued together before you can even start is ridiculous - a problem that Meteor set out to fix years ago. It faltered, of course, by being too opinionated. Now that Meteor fully supports React and npm, though, is there any reason not to use it? Sure does remove some pain points.
23
arenaninja 6 hours ago 0 replies      
I haven't finished reading, but so far this is an excellent walkthrough. This goes far far FAR beyond the 'Hello World' and 'TodoApp' tutorials and demos the amount of tooling you have to dedicate to keep things as seamless as possible.

I recently wrote that in a side project it does not appear to be worth the effort, but that applies to my side project and nothing else. Your next project may well look a LOT like this.

24
k__ 4 hours ago 0 replies      
If you want to look into React, but aren't in the mood for all the setup:

https://github.com/kay-is/react-from-zero

25
JoshGlazebrook 8 hours ago 3 replies      
Am I the only person who just doesn't get the appeal of Webpack over using something like Gulp? It just seems to me like Gulp is so much easier to use and set up.
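Part of the answer is that webpack isn't a task runner at all: it starts from an entry module and follows require()/import statements to build the bundle. A minimal config of the 2016-era shape (paths and loader choice are illustrative, assuming babel-loader is installed):

  // webpack.config.js
  module.exports = {
    entry: './src/index.js',
    output: { path: __dirname + '/dist', filename: 'bundle.js' },
    module: {
      loaders: [
        // Run .js/.jsx files through Babel so ES2015 and JSX work in any browser.
        { test: /\.jsx?$/, exclude: /node_modules/, loader: 'babel-loader' }
      ]
    }
  };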
26
troncheadle 8 hours ago 0 replies      
I have something VERY similar to this in production built with Angular, and let me tell you, I'm looking forward to the day that I get to refactor it in React.

I get that it's frustrating to do a lot of setup. But that's the nature of the game. We're all standing on the shoulders of giants.

React is a pleasure compared to other ways of conceptualizing and practicing building user interfaces on the front end.

27
tonetheman 5 hours ago 0 replies      
Interesting... he needs to include his webpack config from each stage. He linked the full one, but that will not work when you are still in the starting parts.
28
int_handler 4 hours ago 0 replies      
The cover of this book is very a e s t h e t i c.
30
nijiko 3 hours ago 1 reply      
All of that, for that simple demo...
31
ahhsum 8 hours ago 1 reply      
This looks awfully close to Python. Is that true, or am I imagining things?
32
zxcvcxz 5 hours ago 0 replies      
Lots of complaining but no one offers a better solution. Any article detailing how to build a yelp clone is going to be kind of long. It's really not that bad compared to doing something like this with LAMP. Much simpler than Angular too.

With Angular I feel like I have to re-learn web development and do everything the "angular way", and who knows when Angular three is coming out and the "Angular way" completely changes.

And then what are the non-javascript alternatives? It really doesn't matter much because I'll have to interface with javascript anyway if I'm a professional webdev, so why would I add even more clutter to the already cluttered web dev world?

With React, I can learn a small framework that's highly extensible and basically pure javascript. I don't feel like I have to re-learn everything I know about web-dev when using react like I do with Angular.

33
tschellenbach 7 hours ago 1 reply      
More tutorials coming up soon: cabin.getstream.io
6
Blogging cells tell their stories using CRISPR gene editing newscientist.com
5 points by brahmwg  26 minutes ago   1 comment top
1
mmastrac 22 minutes ago 0 replies      
Biotech is "borrowing" a few tricks from embedded development. Trying to get an LED to light up to prove your code is running - similar to what they've done with triggering fluorescing genes. This is sort of like writing sentinel values when interrupt routines get called and reading them out after the fact.

Once you have these tools in place, building the fundamental blocks to bring a system up is much easier.

7
Parallelizing the Naughty Dog Engine Using Fibers [video] gdcvault.com
61 points by Splines  4 hours ago   22 comments top 6
1
corysama 2 hours ago 0 replies      
Similar talk: CppCon 2014: Jeff Preshing "How Ubisoft Develops Games for Multicore - Before and After C++11"

https://www.youtube.com/watch?v=X1T3IQ4N-3g

Ubisoft's talk spends more time getting into the weeds with atomic ops. Naughty Dog's is more of an architecture discussion. If you can only watch one, I'll recommend Naughty Dog's.

2
rawnlq 2 hours ago 1 reply      
An older but similar approach from the Doom III engine: http://fabiensanglard.net/doom3_bfg/threading.php

I wonder if there is a good open-sourced C++11 project for this pattern? (a job/task queue)

Also how does this pattern compare with just using future/promises with a parallel executor?

https://code.facebook.com/posts/1661982097368498/futures-for...

or Grand Central Dispatch from Objective-C?

3
tomlu 50 minutes ago 1 reply      
Super interesting, but the audio level is too quiet for my laptop even with everything set to max. Can it be downloaded from anywhere?
4
dpc_pw 2 hours ago 0 replies      
For fibers (AKA Coroutines) in Rust, see mioco https://github.com/dpc/mioco
5
warmwaffles 2 hours ago 3 replies      
I still don't quite understand what he means by a "fiber". Is this basically 160 blocks of memory allocated on the heap?
6
hyperpallium 2 hours ago 2 replies      
Is there a downloadable version?
8
Person carrying bacteria resistant to antibiotics of last resort found in U.S. washingtonpost.com
107 points by dak1  6 hours ago   73 comments top 13
1
tjohns 3 hours ago 3 replies      
Relevant story: As a kid, one of my friends would frequently get strep throat. So his mom would give him amoxicillin until he appeared better... and then save the rest of the bottle for the next time he'd (invariably) get strep throat.

And that's how antibiotic resistance happens.

2
adrusi 3 hours ago 4 replies      
Look, this is scary and a big problem, but can we please stop talking about the "end of the road" for antibiotics?

The worry here isn't that antibiotics will suddenly become useless and whenever anyone gets a bacterial infection they'll have no hope. The worry is that there will be a number of prevalent bacterial illnesses which can't be treated with antibiotics.

Currently antibiotics work for an overwhelming majority of bacterial illnesses, that's not going to change overnight. What will change is the idea that bacterial illnesses are trifles because they can be cured every time by antibiotics. A few diseases will emerge, more and more over time, that have much worse consequences than we are used to thinking about right now, but the rest will be the same.

I don't mean to underplay the threat, but if we keep pushing this rhetoric, people will discredit the threat when it turns out that 50 years later we're still using antibiotics for most illnesses that people actually get (because antibiotic-resistant strains are effectively quarantined). People will compare it with the "we're going to run out of oil" scare.

3
slg 2 hours ago 4 replies      
The article doesn't touch on it, but the obvious followup question from the laymen is why can't we develop new antibiotics? I was curious and according to Wikipedia we haven't developed a new class of antibiotics in 30+ years. Can someone with knowledge on the subject explain why we seemingly can't discover/develop new forms of antibiotics to combat these resistant bugs?
4
Zelmor 2 hours ago 0 replies      
This is what happens when you raise livestock on antibiotics as the de facto standard. You are what you eat.
5
af16090 2 hours ago 1 reply      
The cover story for this past week's Economist was about antibiotic resistance: http://www.economist.com/news/briefing/21699115-evolution-pa...

And from that story, it talked about Colistin (the drug this patient's E. coli is resistant to): "Some of the antibiotics farmers use are those that doctors hold in reserve for the most difficult cases. Colistin is not much used in people because it can damage their kidneys, but it is a vital last line of defence against Acinetobacter, Pseudomonas aeruginosa, Klebsiella and Enterobacter, two of which are specifically mentioned on the CDC watch list. Last year bacteria with plasmids bearing colistin-resistant genes were discovered, to general horror, in hospital patients in China. Agricultural use of colistin is thought to be the culprit."

Considering the same article says that "In America 70% of [antibiotics] sold end up in beasts and fowl", it seems that an easy thing to do would be to stop giving antibiotics to animals.

6
c3534l 2 hours ago 0 replies      
> Health officials said the case in Pennsylvania, by itself, is not cause for panic. The strain found in the woman is treatable with some other antibiotics.

Thanks for completely ignoring that advice with a headline and three paragraphs of misleading information designed specifically to cause panic.

7
rdtsc 1 hour ago 1 reply      
Wonder if we'll see a resurgence of phage therapy due to this.

Phage therapy is using viruses which will infect and attack the bacteria. Viruses can mutate and adapt just as well as bacteria (while antibiotics, say, are static in a way), so they can keep up with the mutations.

It is a pretty crazy but also ingenious approach.

https://en.wikipedia.org/wiki/Phage_therapy

9
jwatte 2 hours ago 1 reply      
If we can't kill these infections after they happen: Can we develop vaccines against them to prevent occurrence?
10
Practicality 4 hours ago 3 replies      
It might be time to start editing our (DNA) code to fight the bacteria. It seems like the only thing that will be fast enough to keep up with the mutations.
11
searine 3 hours ago 1 reply      
There are three solutions needed here:

1. Stricter regulation of antibiotics, particularly in farming.

2. Better government funding of antibiotic discovery.

3. Stricter regulation of antibiotic use. No solo-drugs, all antibiotics used in stacks of 3 or more. Better monitoring of complete antibiotic use cycles.

Biologic resistance can be managed; HIV is more than enough evidence of that working. We have to get serious about it: the age of reckless antibiotic use needs to end, now.

12
dctoedt 4 hours ago 2 replies      
Time for Congress to authorize a very big monetary prize for the company that comes up with a better solution, with that solution then being licensed for free to all U.S. manufacturers (or something like that to make it politically acceptable to the xenophobic elements in the GOP).
13
ams6110 3 hours ago 1 reply      
Meta: something about washingtonpost.com locks my browser every time.
9
Show HN: Automatic private time tracking for OS X qotoqot.com
315 points by ivm  10 hours ago   190 comments top 61
1
ivm 10 hours ago 17 replies      
I was not happy with the features and UX of other productivity trackers. Most time tracking software is made for controlling employees or for billing clients, and I just wanted automated productivity measurement.

I tried RescueTime before but it was too expensive for its functionality ($72-108/year) and also collected all my tracked data on their servers. There is standalone ManicTime on Windows but OS X standalone trackers lack features and most of them are not automatic.

So I started to play with OS X accessibility and got promising results pretty fast. Then there were about 14 months of writing some code once every week or two, and 3 months of almost full-time polishing and gathering feedback.

Now it's marketing time. Qbserve did well on PH but almost no other sites picked it up from there. This week I pitched about 70 journalists and bloggers who write about Mac or productivity apps, but the results are not clear yet.

I'll be very grateful for advice on how to promote it better, and for overall feedback. Thank you!

2
albertzeyer 7 hours ago 0 replies      
Fwiw, I developed a very minimalist, similar project: https://github.com/albertz/timecapture

So far, it only tracks the time and records which app is in the foreground and what file / URL is currently open in it. It doesn't have any GUI and it won't show you nice statistics like Qbserve does. But it shouldn't be difficult to calculate any statistics you want from the data.

Python, Open Source, easy to add support for other platforms and apps (so far mostly MacOSX support). Patches are welcome. :)

3
deweerdt 8 hours ago 1 reply      
I bought the app, and I'm really happy with it, thanks!

I know it's a long shot, but some sort of shell integration would be awesome. My typical day is > 60% iTerm2. iTerm2 has shell integration: https://iterm2.com/shell_integration.html, and maybe that would be one way Qbserve could fetch info about what's going on in the terminal.

4
Joe8Bit 10 hours ago 2 replies      
Some feedback:

* Make the price more readily apparent on the landing page

* Tracking the '6,400 sites, apps and games' is great, but it would be good if I could find out whether the ones I care about are in that list!

* Make the above-the-fold screenshot bigger; I tried squinting/zooming before I realised I could scroll down

* Can I determine which things are productive/neutral/distractive? I wouldn't want to buy it if that were static

Looks good though!

5
pault 1 hour ago 0 replies      
I've been using Rescuetime for the last 8 years or so, and I will be purchasing this in the next few weeks to see if it will work as a replacement. From what I can see, it shouldn't be a problem. Congratulations on shipping!
6
mrmondo 1 hour ago 1 reply      
Looks interesting! I have to ask: is it OS X native or is it some JavaScript thing?
7
Karunamon 7 hours ago 1 reply      
Minor UI feedback:

The settings UI is extremely hard to read on my screen. The headings are light grey on white, and no amount of messing with my screen's contrast settings makes them easy to read.

The checkboxes also immediately convey "disabled" due to their coloring. Your UI in general is spot on and sanely designed, but please consider taking a cue from the OSX HIG[1]: use the system colors and leave the light grey stuff for actual disablement. It will make your app look a lot more native.

[1] (about halfway down the page): https://developer.apple.com/library/mac/documentation/UserEx...

8
daemonk 5 hours ago 2 replies      
I really like the UI. Is it possible to implement keyboard/mouse movement activity tracking? I don't mean keylogging or anything, but something like key presses per minute while an app is focused, or mouse movement in pixels per minute while an app is focused.
9
ryanmarsh 8 hours ago 1 reply      
If you're a consultant or you work in a consulting firm I have some advice for you.

Get comfortable with fudging the numbers on your time reports. It's ok. Report what's reasonable given:

You aren't being paid for your minutes; you're being paid for the ability to solve a customer's problem in minutes.

I bill my clients 40 no matter what because sometimes I give them 100 hours worth of value in 1 hour. It all balances out. It took a while to realize this wasn't an integrity violation.

You aren't a machine resource. You're a human working in immense complexity. Your productivity is a roller coaster. It's ok. Don't sell minutes. In reality, your customer couldn't handle the unpredictability in billable hours if you were exacting and billed what you're actually worth. Instead we smooth it into 40 (or whatever), and that's ok.

10
aantix 2 hours ago 1 reply      
I love the alerts. I set up an alert for when I have been distracted for more than 30 minutes.

Could you disable those distracting sites after 30 minutes? I'm only half-way kidding..

Still a fantastic app.

11
elevenfist 35 minutes ago 0 replies      
I love the idea of apps like these, but can people really not remember what they do all day? That thought is almost inconceivable.
12
r00fus 20 minutes ago 0 replies      
I like how qotoqot.com shows up as "productive"
13
baby 9 hours ago 1 reply      
Did you test it thoroughly for website tracking? I made a Firefox plugin[1] to track how long I would spend on Facebook, but it never had really accurate results.

Do you track only the current tab? Do you still track it if it's not foreground? Even if Firefox has many windows?

How do you track tabs in the browser from the OS by the way?

[1]: https://github.com/mimoo/FirefoxTimeTracker

14
howlingfantods 10 hours ago 1 reply      
Love the idea! My only suggestion would be to switch "Distractive" to "Unproductive" or "Distracting." I'm sure distractive is technically a word, but this is my first time hearing it. But that's just me. I may just have a limited vocabulary.
15
kasperset 1 hour ago 0 replies      
I like this app as it is. New features would be welcome, but I prefer a lean and mean app. The simpler the better.
16
danielparks 5 hours ago 0 replies      
I've only been using it for 30 minutes, but so far it's great!

Being able to map a domain with all its subdomains to a category would be awesome. I access a whole bunch of hosts in an internal domain, and they're all productive.

17
joshcrowder 10 hours ago 1 reply      
Great! The fact that this is private is a huge +1 for me. Looking forward to trying it out! I saw one of your comments about the data being available at ~/Library/Application\ Support/Qbserve/; it would be good if the schema were documented on the site, maybe in a developers section?
18
imron 25 minutes ago 1 reply      
Any possibility of Mavericks support?
19
zzzmarcus 10 hours ago 0 replies      
I've been using Qbserve for a couple weeks and I'm really happy with it. For me the best feature is just having that little number in the menu bar that shows what percentage of my time has been focused. This, more than any other timer or tracker, has been a simple and effective motivator for me to keep creating.

There are a lot of features I can imagine that would let me slice and dice tracked data better, but for a V1, this is something special.

20
peternicky 6 hours ago 2 replies      
I have used Rescuetime for years and for the most part, am very satisfied with the service. It would be helpful if you added a simple comparison between Rescuetime and your service.
21
ivan_k 2 hours ago 1 reply      
Wonderful tool! I can see myself using it every single day.

Comment on usability: currently, different ports from the same domain name are recorded as different websites. I think it should be sufficient to group all the ports used with `localhost` as "productive".

Thanks for the great work!

22
lancefisher 9 hours ago 0 replies      
This is a cool project. I thought about building something similar a few years ago when I was doing a lot of consulting. The most annoying part of the work was accurately billing clients when some days I'd switch between several projects.

Here's a few things that could make it super useful:

* Track time spent writing email by contact
* Track hangout/skype/etc by contact
* Track time spent on code per project
* Connect phone records to tie in the time on the phone with contacts

Good luck!

23
botreats 9 hours ago 1 reply      
I like the idea of this a lot, but not working with Firefox is a dealbreaker. If only 100% of my time spent in Firefox was actually productive....
24
daemonk 5 hours ago 1 reply      
Just bought it. I really like it. One thing that would make this perfect for me is the ability to show stats for specific time ranges within a day. I use my laptop for both work and home. It would be nice to see a set of stats for just 9am-6pm everyday; or whatever ranges I want.

I guess the time tracking works right now by just tallying up seconds for each category, and it isn't recording time stamps? Would recording time stamps end up taking too much space?

25
Karunamon 7 hours ago 0 replies      
YES!

Finally an excuse to drop Rescuetime and their goofy UI. I've had this running for about an hour or so, and it seems to provide me exactly what they do, for cheaper, while respecting privacy.

Congrats on an awesome app, and I hope you do well selling this!

26
billions 3 hours ago 0 replies      
Would be nice to compare productivity with others. Just purchased the full version.
27
stinos 8 hours ago 2 replies      
Away from the keyboard or watching a movie? Idle time is detected intelligently.

The problem is that what really counts as idling depends heavily on the person. Ideally you should be able to read the mind to see if there's any work-related activity :] I'm still using manual time tracking mainly because of this (even despite the obvious disadvantage of forgetting to turn it on or off): there are all kinds of solutions, from detecting mouse/keyboard idling to fancier ones like detecting if your phone is near your PC and stuff like that, but at least for me none of these are as correct as just manually saying 'now I'm working, now I'm not': they can't detect things like me sitting outside with pen & paper.

28
mcoppola 8 hours ago 1 reply      
Early impressions are excellent. I can easily see some billable/reporting functionality added as premium features. I'll be adding this to my daily routine and seeing how it works for the trial period - but you likely have a paid user in me already. Thank you!
29
knowtheory 9 hours ago 0 replies      
Just downloaded it and fired it up, and immediate first impressions is that there's a lot to like in the app so far.

I'll be curious to see if I can build gentle nudges back on task if I'm off in the woods, or how I can better categorize different types of app usage. Coupling to my todo lists might be helpful.

30
weinzierl 3 hours ago 0 replies      
I installed the trial version and it looks awesome. Unfortunately the lack of Firefox support is a deal breaker for me, as I spend a lot of time in Firefox. This would be my top feature request.
31
maknz 1 hour ago 0 replies      
Damn this is good. I'll be buying.
32
Jonovono 8 hours ago 3 replies      
Looks beautiful. Any plans to add a 'Focus' mode like RescueTime, basically just the ability to block distracting websites? I'd probably switch over from RescueTime if that's added :)
33
pwelch 10 hours ago 0 replies      
+1 for privacy and storing locally
34
xufi 4 hours ago 0 replies      
This is pretty cool. I'd love to use it to keep track of time, since I tend to get distracted by looking at other tabs, and I've been looking for a way to keep track of where I'm wasting most of my time.
35
graeme 8 hours ago 1 reply      
Does the license allow use on multiple computers? I have a computer for heavy work, and another where I do email and social media. I'd like to track both.
36
manish_gill 7 hours ago 1 reply      
Hi. Trying it out and would happily buy after using it for the next few days.

One query I have: Is there any way I can hide the app icon from the cmd+tab list? I want it to stay out of the way and work quietly behind the scenes, since I have too many applications running at the same time. Maybe a "hide icon" option or something? Thanks.

Looks like a fantastic product on first look. :)

37
asadhaider 10 hours ago 2 replies      
This looks like a simple way to keep track of time I spend on projects.

It would be perfect if it could also log more details, such as what filename/project is open in Sublime; that way I know what I'm working on.

38
Nemant 3 hours ago 0 replies      
Only 7MB! Good job! Product looks awesome. If it works well for the next 10 days I'm definitely buying it :)
39
tharshan09 9 hours ago 1 reply      
Just curious, what is the tech stack?
40
spoinkaroo 8 hours ago 0 replies      
This looks like exactly what I need, I'm going to try the free trial and then let you know what I think and probably buy it.
41
muhammadusman 4 hours ago 0 replies      
The UI is so much nicer than RescueTime, I love it!
42
welder 9 hours ago 2 replies      
This is the new RescueTime! Now you just need to market it to all of RescueTime's users:

https://twitter.com/rescuetime/followers

Small nitpick: How can you guarantee data is kept locally without open-sourcing the app?

This is similar to WakaTime, but only for OS X and with less granular data, because one is for programmers and the other for more general users.

43
zombieprocess 8 hours ago 1 reply      
In terms of distribution - Could you create a dmg with the drag & drop to the Applications directory as is standard for OSX?
44
kentt 3 hours ago 0 replies      
Congratulations on shipping!
45
fintler 9 hours ago 0 replies      
On a 13" MacBook, I need to scroll down to click "Download" or "Buy Now". You might want to move those buttons up a bit.
46
r0m4n0 4 hours ago 0 replies      
What my employer thinks I'm doing: 8 hours on stackoverflow = 8 hours of research

What I'm actually doing: earning reputation to improve my resume

47
spark3k 3 hours ago 0 replies      
Timecamp has been at this for a while now, with syncing to project management apps.
48
rememberlenny 9 hours ago 1 reply      
This looks like a great tool. I'm testing it out now.

I regularly use multiple computers for personal/work. Can there be a way to cross sync data across systems using an external host? I'd like to use Dropbox or some similar solution to keep the data files up to date.

Would that be possible?

49
Zirro 9 hours ago 1 reply      
This looks like it would be very useful to me. A pet peeve of mine is when an application does not use a monochrome icon in the menu bar. I don't suppose you could offer a monochrome option for the percentage, turning it the same colour as the icon?
50
Yhippa 10 hours ago 0 replies      
I really like this idea. Unfortunately I'm a multi-device user including things like using a Chromebook which doesn't have native apps. Would love to see this aggregate data across different types of devices.
51
imdsm 9 hours ago 1 reply      
How long is the -25% available? If I try it for ten days, can I still get the -25%?
52
jbverschoor 8 hours ago 1 reply      
OK tried it, but it's not for me.

I need to be able to track activity per project.

Projects can be determined from the open window path or url.

Timings does this, but it's just one big mess

53
sd8f9iu 9 hours ago 0 replies      
Looks great! The interface picture halfway down should be the top one; it's too hard to tell what the app does from the first one. Might give it a try.
54
cdnsteve 9 hours ago 1 reply      
So was this developed using Swift? Curious, since I see a SQLite backend.
55
thuruv 9 hours ago 1 reply      
Dead link. :(
56
gruffj 8 hours ago 0 replies      
Great app, really enjoyed using it so far. I've found the percentage of productive time shown in the toolbar to be really useful.
57
ghostbrainalpha 8 hours ago 0 replies      
I'm very happy with the icon in the dock!
58
jbverschoor 8 hours ago 0 replies      
I'm currently testing Timings, but its support for activities and projects isn't done properly.

Checking out this one

59
pibefision 9 hours ago 2 replies      
Why, for this kind of product, is there never a single person visible behind the marketing site? What's the reason to stay hidden?
60
jrcii 10 hours ago 2 replies      
This looks fantastic! Great work. I have some feedback too: Right now it groups all of my CLI programs into iTerm2. I would be very interested in tracking the actual programs. Vim time means I'm coding; irssi (IRC), newsbeuter, cmus (music), or sl probably means I'm goofing off.
61
zmarouf 8 hours ago 2 replies      
Am I right in assuming that Qbserve only tracks active windows? To elaborate: If I have a monitor set to a fullscreen OSX Desktop with either Spotify or VLC while actually coding, the time spent listening/watching won't count unless the application is active, correct?
10
Feds spend billions to run museum-ready computer systems ap.org
70 points by twakefield  7 hours ago   63 comments top 11
1
Johnny555 3 hours ago 5 replies      
Using floppy disks and old hardware and software doesn't sound like a problem if it still runs and does what it's supposed to do. I'm skeptical that building a modern system would really save money since the temptation for feature creep is too great.
2
nickpsecurity 5 hours ago 2 replies      
The real problem is that there are huge legacy systems tied to these platforms that they don't understand and that are too risky to port/re-engineer. Think of our military systems or payroll going down because software was ported wrong or relied on an underdocumented assembler or compiler feature.

There's some hope on reducing costs at least. Look up NuVAX for an example of emulators being designed to work exactly like old hardware for a fraction of the price, space, energy, and so on. I haven't heard of attempts, but the next step might be instrumenting them to trace program code/data for porting. Or binary translation to modern architectures. I know DEC did the latter for the VAX-to-Alpha port.

3
panic 33 minutes ago 0 replies      
"Feds spend billions to maintain museum-ready buildings"

"Feds spend billions to enforce museum-ready laws"

These computer systems aren't even that old compared to many things the government spends money on.

4
Animats 5 hours ago 5 replies      
On the other hand, they're getting decades out of the software. Use the bleeding-edge stuff, and it's obsolete in two or three years now. Use a "cloud" service, and the service probably goes away within five years. The new stuff has too much churn. Where will Rails be in ten years? Python? Java will still be around; it's the successor to COBOL.
5
paavokoya 7 hours ago 6 replies      
To be fair, my last employer (aerospace manuf.) ran an incredibly dated and ancient OS with pretty decent results. It was simple, to the point, ugly as hell but got the job done without needing constant updates etc etc. Also we never had a problem with malware (because who writes malware for a 30 year old OS?)

I understand this article mentions many different sectors and functions for antiquated systems but sometimes an update simply isn't needed.

6
protomyth 2 hours ago 3 replies      
I wonder, what would be your answer if someone came up to you and said "We need a computer that we can maintain and keep in service for 50 years or more. What should we do?".
7
gherkin0 5 hours ago 1 reply      
Honestly, I don't see a good solution to this until the rate of technological change really slows down. It seems like your options are to either pay to periodically re-engineer every system or pay to maintain obsolete hardware.
8
syngrog66 1 hour ago 0 replies      
On one hand it seems inefficient and perhaps dangerous to be reliant on such old systems. On the other hand, the idea of a new software project to replace it also sounds at risk of being extremely expensive and overly complicated. Because of all the government contracting anti-patterns.

In theory there's a middle ground that avoids both these extremes. In reality, with government software... I'm skeptical it will happen.

9
DanielBMarkham 3 hours ago 0 replies      
Most federal systems that are anywhere over ten years old (and that's most of them) are complete mysteries to the people who both use and maintain them.

A long time ago, I was responsible for such a system. I didn't ask for the job; I simply was the smartest person in the room for too long.

I vividly remember one day we had a problem with folks in a remote location entering things and those things getting mangled and/or lost on the way to the system-of-record.

For one system, with maybe five thousand users and perhaps a few gigabytes of traffic a month, I was on a call with 30 people spanning most of the Earth. I learned that there were at least a dozen separate systems at that location between the person entering the data and the data being sent to HQ. Each system was old. Each system had a separate vendor which claimed to be the only vendor to understand that system (Sometimes this was true. Many times they were just bluffing.)

And -- and this was the kicker -- for each of our dozens of locations, each location manager, because of their friendship with politicians, made their own decisions about how machines were configured and which programs were installed. They were complaining to us because things were bad, but they did not feel like they answered to us.

I was responsible for fixing it.

At the end of that call, I was reminded of Arthur C. Clarke's quip: Any sufficiently advanced technology is indistinguishable from magic.

But I doubt I thought of it in the way he meant it.

10
tn13 49 minutes ago 0 replies      
Despite all the iPads and whatnot, I have found a pen and notebook to be better note-taking equipment than anything else.

If it gets the job done cheaply and efficiently, as required, and better than the alternatives, it is the best technology to use.

11
Shivetya 2 hours ago 0 replies      
Anecdotal: in the early to mid eighties I was in the US Air Force. The machine I was first assigned to watch over was in the secure comm center. This Burroughs machine was the first non-tube computer made by Burroughs. It could boot from paper tape or card and was replete with blinking lights.

Later I moved up in tech to a Sperry/Unisys system. All our personnel data and such was loaded via cards, physical cards in multiple boxes, till near '88.

So honestly I don't doubt they still do something similar. I was just so glad we got out of boxes of cards, because having to fix runs each night got old, and all for a bent card.

It got me into programming, Turbo Pascal at the time. When we moved off physical cards, it was then onto 360K floppies. The problem was, the upload/download programs provided could take half an hour or more to transfer to the 1100/70. The Turbo Pascal program did it in five minutes or less per disk without issue.

11
Dumb-jump: an Emacs jump to definition package github.com
96 points by jacktasia  6 hours ago   35 comments top 11
1
qwertyuiop924 2 hours ago 1 reply      
Ah yes, yet another proof that the Wrong Thing isn't necessarily the wrong thing. Sure it's objectively worse than [ce]tags, but haven't you heard? Worse is Better. :-D
2
kozikow 4 hours ago 1 reply      
I was planning to write this as a comment here, but it ended up growing, so I published it as a blog post: https://kozikow.wordpress.com/2016/05/21/nice-new-emacs-pack...

BTW, did you know that you can just use a README.org and GitHub parses it? I noticed that you have an org file checked in, but the README is in md. For example, see https://github.com/kozikow/keyremaplinux .

3
Myrmornis 1 hour ago 0 replies      
This is cool. I've also been writing a similar Emacs package: it uses `git grep` and `git ls-files` to quickly find patterns and file names in the current git repo, and has a simple interface for filtering the search results to narrow down to what you were looking for (and it also uses a regexp heuristic for jumping to definitions).

https://github.com/dandavison/emacs-search-files

4
t1amat 3 hours ago 0 replies      
This is neat, and useful as a general purpose jump-to feature. It's probably worth mentioning that some languages have packages with better functionality for that language, such as tern-mode for JavaScript.
5
drewg123 4 hours ago 2 replies      
I must be missing something, but what is the advantage of this over something like Emacs Tags (https://www.emacswiki.org/emacs/EmacsTags) ?
6
webaholic 5 hours ago 7 replies      
What do people use to get something like this for C/C++? Also is there any package for auto-completion in C/C++?
7
soamv 6 hours ago 1 reply      
Nice! Just tried it out and it seems to work well.

Why does dumb-jump-go want me to save my files before jumping?

8
dilap 3 hours ago 0 replies      
this looks neat.

i notice one of the supported languages is go -- if you're doing much go, i recommend installing godef and using go-mode.el. no tags files or anything like that, and works perfectly well for jumping to definition.

(i also highly recommend gorename and guru.)

9
iKlsR 6 hours ago 6 replies      
Still-learning Vim user here: do we have something like this?
10
sdegutis 1 hour ago 0 replies      
I really love how well this works with Cider though, for Clojure projects. It even lets you jump to the definition of Java or Clojure files that live outside your current project, e.g. in third party libraries or even the Java standard library. Why, just today I jumped to the definition of java.time.Month because I had never actually seen a real live Java enum before. (Sure enough, it's defined with the keyword `enum`. Neat!) Cider has become essential to my Clojure workflow at work.
11
mordocai 5 hours ago 2 replies      
Nice package! Unfortunately it is completely falling on its face for my Ruby on Rails project, but I'm sure it can be improved.
12
Announcing Rust 1.9 rust-lang.org
263 points by steveklabnik  7 hours ago   40 comments top 10
1
spion 1 hour ago 0 replies      
I don't know how this unexpected vs expected errors philosophy gets propagated, but to me it always looked suspicious. Take array bounds for example: what if you have an API that lets users send a list of image transformations, and the user requests a face-detect crop, followed by a fixed crop with a given x, y, w, h.

Clearly your code can get out of (2D) array bounds with the fixed crop (if the image is such that the face-detect crop ends up small enough). Suddenly the thing that was "unexpected" at the array level becomes very much expected at the higher API level.

So the API provider can't decide whether an error is expected or not. Only the API consumer can do that. Applying this further, a mid-level consumer cannot (always) make the decision either. Which is why exceptions work the way they do: bubble until a consumer makes a decision, otherwise consider the whole error unexpected. Mid-level consumers should use finally (or even better, defer!) to clean up any potentially bad state.

I think Swift got this right. What you care about isn't exactly what was thrown (checked, typed exceptions); the critical thing is whether a call throws or not. This informs us whether to use finally/defer to clean up. The rest of error handling is easy: we handle errors we expect, we fix errors we forgot to expect, but either way we don't need to crash just to clean up, because finally/defer/destructors take care of that.

2
asp2insp 6 hours ago 1 reply      
The time complexity of comparing variables for equivalence during type unification is reduced from O(n!) to O(n). As a result, some programming patterns compile much, much more quickly.

I love this. Not just my code compiling more quickly, but the underlying implementation is super interesting.

3
MichaelGG 4 hours ago 1 reply      
I don't understand the announcement on panics. Hasn't it always been the case that thread boundaries (via spawn) could contain panics?

It also used to have the incorrect default that such panics were silently ignored. .NET made this same mistake: background threads could silently die. They reversed it and made a breaking change so any uncaught exception kills the process. I'd imagine Rust will do so by encouraging a different API if they haven't already. (I opened an RFC on this last year, but I didn't understand enough Rust and made a mess of it. It was still a great experience, though, speaking to the very kind and professional community. In particular, several people were patient, but firm, in explaining my misunderstandings.)
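For anyone wondering what "containing" a panic at a thread boundary looks like, here is a minimal sketch (my own illustration, not from the release notes) using only std: join() hands the panic back as an Err instead of silently dropping it or taking down the whole process.

    use std::thread;

    fn main() {
        let handle = thread::spawn(|| {
            panic!("worker blew up");
        });
        // join() surfaces the child's panic as an Err value, so the
        // parent thread can decide what to do instead of dying too.
        match handle.join() {
            Ok(_) => println!("worker finished cleanly"),
            Err(_) => println!("worker panicked, but we're still running"),
        }
    }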

4
shmerl 5 hours ago 0 replies      
Progress on specialization is good.

> Altogether, this relatively small extension to the trait system yields benefits for performance and code reuse, and it lays the groundwork for an "efficient inheritance" scheme that is largely based on the trait system

5
bryanray 7 hours ago 0 replies      
Looks like a great release. Controlled unwinding looks very interesting. #GreatJobRustTeam
6
JoshTriplett 5 hours ago 1 reply      
I'm really looking forward to the next stable version after this, which will hopefully stabilize the new '?' syntax for 'try!'.
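For readers who haven't followed the RFC, here is a rough before/after sketch (my own example; the file name is made up, and `?` is not stable yet in 1.9, only the try! version compiles today):

    use std::fs::File;
    use std::io::{self, Read};

    // Today: early-return on Err via the try! macro.
    fn read_config_with_try() -> io::Result<String> {
        let mut f = try!(File::open("config.toml"));
        let mut s = String::new();
        try!(f.read_to_string(&mut s));
        Ok(s)
    }

    // With the proposed `?` operator: same early-return semantics, less noise.
    fn read_config_with_question() -> io::Result<String> {
        let mut f = File::open("config.toml")?;
        let mut s = String::new();
        f.read_to_string(&mut s)?;
        Ok(s)
    }

    fn main() {
        println!("{:?}", read_config_with_try().is_ok());
        println!("{:?}", read_config_with_question().is_ok());
    }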
7
jeffdavis 2 hours ago 0 replies      
Excited about unwinding becoming stable. I am hacking on postgres-extension.rs, which allows writing Postgres extensions in Rust. This will mean that Postgres could call into Rust code, then Rust could call back into Postgres code, and that Postgres code could throw an error, and Rust could safely unwind and re-throw. Cool!!
8
wyldfire 6 hours ago 0 replies      
Unexpected problems are bugs: they arise due to a contract or assertion being violated.

Speaking of which, DBC [1] would be an awesome feature for consideration. It's one of relatively few areas where D is superior to Rust IMO.

[1] https://github.com/rust-lang/rfcs/issues/1077

9
Animats 5 hours ago 5 replies      
catch_unwind

Exceptions, at last! Not very good exceptions, though. About at the level of Go's "recover()". If this is done right, so that locks unlock, destructors run, and reference counts are updated, it's all the complexity of exceptions for some of the benefits.

I'd rather have real exceptions than sort-of exceptions.
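For reference, a minimal sketch of the newly stabilized API (my own example, not from the announcement): the closure's panic is caught and surfaced as a Result, and destructors inside the closure have already run during unwinding.

    use std::panic;

    fn main() {
        let result = panic::catch_unwind(|| {
            let v: Vec<i32> = Vec::new();
            v[0] // indexing an empty Vec panics
        });
        // The panic is converted into an Err instead of aborting the process;
        // whether or not that counts as "real" exceptions, cleanup has happened.
        assert!(result.is_err());
        println!("caught a panic: {}", result.is_err());
    }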

10
mtgx 7 hours ago 2 replies      
Wow, almost 2.0 already. Any major features reserved for 2.0 or will it be just another typical release like 1.8 or 1.9, more or less?
13
Ggplot2 docs remade in D3.js plot.ly
49 points by michaelsbradley  6 hours ago   3 comments top 2
1
th0ma5 2 hours ago 0 replies      
Plot.ly by default will tie in their social services and copy all of your data to other people's computers.

I raised this issue with the project maintainers and they stated that the wish of the parent company is for this to remain the default.

So, just a warning to enterprise developers: you have to fiddle with this to turn that off. Without a clear policy statement or a reasonable fork of the project that addresses the privacy and security issues, I've been advocating against the use of Plot.ly.

2
minimaxir 3 hours ago 1 reply      
I was recently looking into ggiraph (https://cran.r-project.org/web/packages/ggiraph/vignettes/gg...) for converting ggplot2 to d3 since it keeps most of the same syntax as normal ggplot2 charts.

However, looking at the output of these plot.ly charts may make me reconsider since they have good performance and the conversions appear extremely accurate (and the software is open-source). Well played.

Unfortunately, the charts hit the same issue as every other JS library in that they are nigh unusable on mobile devices for nontrivial visualizations. Which is beginning to get annoying.

14
Snapchat raised $1.8B in a Series F round techcrunch.com
130 points by zhuxuefeng1994  10 hours ago   261 comments top 25
1
argonaut 9 hours ago 8 replies      
People need to stop focusing their criticism on how stupid the app is or how they're going to lose all their users. It's almost a meme by now, how people say this every time Snapchat is brought up. At this point it requires active ignorance, and not really caring to keep an open mind, to continue to hold onto that belief in the face of tremendous evidence that 1) Snapchat is very popular, 2) Snapchat has been around for 5 years now and hasn't gone anywhere, and 3) users of Snapchat use it very actively.

Personally, I think one good way to see why people use the product is to 1) be in a relationship, and 2) regularly send pictures of what you're doing / funny things you've encountered / your face, back and forth with your partner.

Where I feel like there is legitimate criticism of Snapchat is their ability to monetize. Unlike Google, there is no purchasing intent. Unlike Twitter or Facebook, there is little ability to target users beyond age/gender/location (and no amount of computer vision in the short term will change that). Their Discover / Live products (which allow you to see snippets from brands / news channels / publishers) are promising but unlike the core product, I think there is legitimate reason to be doubtful about the appeal of these products because they have nothing to do with chatting with friends.

2
gjolund 4 hours ago 14 replies      
The only real value of Snapchat is that now I don't have to scroll past the inane content Snapchat is used for on other social networks.

I'm not going to say Snapchat is or isn't worth $20B. I am going to say that I am sad that so much energy and money is being thrown at such a trivial application.

I'm a software engineer, and I like solving real problems. I can't imagine working at Snapchat and trying to motivate myself to be excited about its useless app.

3
macandcheese 3 hours ago 1 reply      
Snapchat is the medium that most closely represents a real life conversation - between individuals, between friend groups, between performers & fans, etc. That's why it's popular and will continue to be so.

The beauty of Snapchat is that you don't need to think about what you are posting: It's ephemeral, it's kept close to the vest (you choose explicitly who sees it), and above all, it's without reservation. I don't consider the appropriateness of my content because I choose exactly who receives it, just like how in real life you probably speak differently around your friends than you do your co-workers, or your family, without really thinking about it.

Snapchat provides a medium for users to share content without the baggage of social validation that other networks like Twitter and Facebook are beholden to (intentionally and unintentionally). Nobody can see how popular your content is but yourself, and you control the reach down to the individual. No other content network is like this.

Plus, you can draw bunny ears on people.

4
cassieramen 5 hours ago 2 replies      
How are Snapchat employees making out here? It seems like share dilution, and even more investors getting exit preference, has got to be eroding value like crazy.

Perhaps someone with some experience can shed some light here: if you joined Snapchat 2 or 3 years ago, are you seeing all of your supposed value evaporating away?

5
heavenlyhash 4 hours ago 4 replies      
1.8B.

1,800 Million, right? Stop me when the math goes wrong.

Divided into bundles of 200k: 9000x.

Let's say each member of a team costs 200k/yr. High for some, low for others, whatever.

1.8B dollars is enough money for a team of OVER NINE THOUSAND people.

What... exactly is taking that much effort, here? What is this money fueling? I honestly boggle to imagine.

6
wtvanhest 9 hours ago 3 replies      
Every single person here seems negative about Snapchat.

I am 30+, use it occasionally, and see more and more of my older friends signing up. Especially those friends far away from the echo chamber. This fundraising feels smart for investors who want a great return and for the company to use to get to an IPO.

If you don't use Snapchat, or haven't used it in a few years, it is probably time you take a second look.

7
fitzwatermellow 1 hour ago 0 replies      
Currently developing tools to do market research and user analytics on Instagram. Experience so far with the Instagram API is quite positive. Take a look at the endpoints on their dev page and it's immediately evident: Instagram exposes a lot of precious data. Consequently, it is the platform of choice for users who accept sponsorship for posts. Also, when reaching out to the Instagram team about improving tag discovery and search, they were very amenable :)

If anyone from Snapchat is monitoring this convo, I would implore you guys to prioritize your API rollout. Provide the tools that allow your users to make data-driven decisions. Building an ecosystem of creators who can figure out how to monetize their Stories better than anyone else can is "the key".

8
the_watcher 32 minutes ago 0 replies      
I've been impressed with Snapchat, and generally think they are moving in the right direction. That said, can someone please explain what they are spending all this money on? They've raised over $2.5B. That's simply an astounding amount of money.
9
vgt 5 hours ago 4 replies      
Shameless Google Cloud plug:

Snapchat runs entirely on Google Cloud Platform, and they have famously declared that they have 0 Ops people because everything they use is fully managed and no-ops. This is one of the reasons they are able to deliver features at such a rapid pace - they aren't worried about scale or reliability the way anyone at that scale is on-premise or on AWS/Azure.

10
CameronBanga 9 hours ago 3 replies      
I know that I don't "get" venture capital, but is Snapchat even profitable (or close to it) yet? If not, and it's going to take nearly 2 billion more to get there, do they really have a product that will ultimately return value?

It's the cynic in me, but by a Series F, and needing 2 billion, this feels like the sort of funding where early investors are propping up the valuation of Snapchat in order to sell off to someone like Yahoo for a crazy amount of money, where the product will go and eventually die. All in the name of advertising. I guess I just don't get it.

11
nooron 5 hours ago 0 replies      
I think Snapchat will own media curation for wearables/AR displays in a few years. They can be the Comcast of your face: you'll be able to flip through a number of channels, from "X NAME OF HOLIDAY" to "THING HAPPENING IN CITY" to Vice to National Geographic. I think their goal is to serve as a demand-generation tool, like TV used to be.
12
mikeryan 9 hours ago 5 replies      
There's been a bunch of apps out there that have taken off and completely surprised me. But I can't think of another success that makes me feel more out of touch than a 20B valuation for what launched as "the app that sends disappearing images". Kudos to the team though. Evan and co. seem to have made a lot of very smart decisions in growing this business.
13
bqe 9 hours ago 13 replies      
Can someone who uses Snapchat routinely tell me why you use it routinely?

I've been using it for a few weeks to send a picture or two to the few people I know who use it, but it seems pretty limited. I follow a few "famous" people on it, and they basically just post random images of them walking around NYC or them traveling. I don't yet see the benefit.

14
connoredel 4 hours ago 0 replies      
I wonder if there was any secondary equity sold. Spiegel famously took out $10M back in 2013 and bought a Ferrari. As an investor/BOD member, I think you need to balance between giving founders a taste of something so they don't sell at the first offer or push to IPO too early and potentially giving someone generational wealth when there is still so much work to do.

Remember Secret? Founders got $3M while investors with preferred stock couldn't even recoup their loss.

I can't help but think employees with common stock are always the losers here. Founders' "common stock" is really given preferential treatment -- it's obviously more liquid. Your average employee didn't get to sell his shares directly to some poor VC before the company went under.

15
ljk 5 hours ago 1 reply      
For people who read the "How Technology Hijacks People's Minds" article, https://news.ycombinator.com/item?id=11737232, does Snapchat implement any of the "traps" the article mentions?
16
rm_-rf_slash 9 hours ago 0 replies      
Nobody wants to be the last person to invest in Snapchat.
17
cpach 2 hours ago 0 replies      
Wow. I had no idea investment rounds were in the billions these days. Perhaps it's not so surprising, though, that investors want larger yields than what the public stock markets can offer.
18
georgiecasey 9 hours ago 0 replies      
Forget the valuation: what can a social media app possibly spend nearly $2B on? It's not like it's Uber, where they need money to enter new markets. Maybe Google appspot bumped up the bills, lol.

The only logical thing is the founders taking a load of money off the table.

19
slackstation 7 hours ago 0 replies      
Snapchat and the like are low-hanging fruit. Someone should start a DAO where users can either see ads or pay a small annual fee for the service, and also invest in their own security and sanity.

There are programmers who are interested; there is a public that could be convinced. All we really need is to solve the human problem of aligning money with incentives.

There should be a high quality social network where you use personal filters rather than relying on central intelligence to do the heavy lifting.

Rather than be beholden to people who are willing to invest billions in a free product so that they can traffic ads and/or data mine.

20
aznpwnzor 6 hours ago 0 replies      
I completely agree with the implied valuation, but had slight pause as to what this money could be used for.

I think Snapchat has had mild success with monetizing Discover and Stories. However, I think the correct course of action is moving from incentivizing content distribution to incentivizing content creation, for example by creating partnerships. What you have right now are middling, ready-to-explode, fame-driven celebrities, like YouTube had before partnership incentives. Snapchat is exactly there in their timeline.

21
xhrpost 7 hours ago 0 replies      
I've been denying the "bubble" hype for a few years now, but seeing numbers like this makes me start to wonder. However, perhaps this is not the effect of a bubble, but simply the effect of companies looking more to raise IPO-level money from the private VC sector. http://www.vox.com/2014/6/26/5837638/the-ipo-is-dying-marc-a...
22
sarreph 9 hours ago 0 replies      
Anyone know where we can download the leaked pitch deck?
23
intrasight 9 hours ago 0 replies      
Limited audience. Limited monetization potential. It's a simple utility app, so there is limited IP. I think this valuation will be as fleeting as the images they send. But still, kudos to the founders for making hay from something so simple.
24
traviswingo 9 hours ago 3 replies      
This is total bs. $54 million in revenue with an $18 billion valuation? I understand investing in the "future value" of companies, but this shit is getting out of hand, lol.
25
jbob2000 9 hours ago 0 replies      
Wow, this is kind of scary. Snapchat is so clearly a fad, I don't know what to say to these investors. Most of their users are youth, who will drop Snapchat in a second once it isn't cool. As soon as moms and marketers get on it, the service is dead.

Having your messages disappear was kinda cool, but they're starting to ease up on that a little (you can re-view some snaps, and it's always possible to screenshot the app to save the picture). The whole dog-face, barfing-rainbows thing is getting tired now; everyone knows how these work, it's nothing special any more. They have some sponsored content, but the user is under no obligation to view it.

15
Comparing Git Workflows atlassian.com
176 points by AJAlabs  12 hours ago   72 comments top 9
1
drewg123 10 hours ago 13 replies      
One of the things I hate about the traditional git workflows described there is that there is no squashing and the history is basically unusable. We have developers where I work that use our repo as a backup, so when things are merged to master, the history is littered with utter garbage commits like the following:

"commit this before I get on the plane"
"whoops, make this compile"
"WTF?"

These add no benefit to history, and actually provide an impediment to bisecting (since a lot of these intermediate revisions will not even compile).

At my previous job, we used gerrit. The nice thing about gerrit from my perspective is that it kind of "hid" all of the intermediary stages in the gerrit review. So you could push all you wanted to your gerrit review fake-branch thing, and when you finally pushed to master, there would just be a nice, clean atomic change for your feature. If you needed more detailed history of the steps during review, it was there in gerrit. But master was clean, and bisectable.

Is there any other git tool or workflow which both allows people to back things up to the central git repo AND allows squashing changes down to meaningful bits, AND which does not lose the history of review iterations?

2
useryMcUserface 10 hours ago 2 replies      
This article has actually been around for a while. It explains things really well. But one piece of advice from me: try to choose only what is sufficient for your project and team. There's no benefit in being overequipped for a simple job.
3
zmmmmm 1 hour ago 3 replies      
It amazes me how the entire software industry seems to be adapting its workflows around the necessity of making Git usable. While there are certainly other positive attributes about some of these workflows, the main reason people use them in my experience is because "if you don't use workflow X you get undesirable problem Y with Git". Most of these problems simply didn't exist or were not nearly as severe with previous revision control systems, so we never needed these elaborate workflows. Now suddenly Git is considered a defacto tool, and downstream effects of using it are transforming the entire software development process.
4
Bromskloss 27 minutes ago 0 replies      
What was the workflow in mind when Git was designed?
5
crispyambulance 4 hours ago 1 reply      
Kudos to atlassian for bringing some much needed clarity to a confusing topic. So many people that claim mastery of git only know particular workflows and, when attempting to mentor others, just mansplain whatever they know without consideration that there are alternative valid ways of doing things.

Without a firm grasp of one's intent (workflow), learning git commands is pointless and leads to people desperately flailing at commands.

6
zamalek 8 hours ago 1 reply      
You can also evolve, basically, to each model in the order that they appear in the article.

As an example: I've been working on a new spike for the past 2 weeks with one other developer. Maybe 10 times a day we'll need something that the other person has committed, so we work against one branch (master). The workflow suits this extremely rapid iteration.

One repo has now matured to the point where developer branches make sense. We created "develop" on it as well as our own branches off that. We're not close to a v0.1 yet - but we'll be evolving to git flow the minute we want to ship.

Eventually as more devs join, we'll need the full-blown PR workflow, that also naturally stems from its predecessor.

There's a "meta-workflow" here, which implies which workflow to use.

7
sytse 9 hours ago 0 replies      
I think the ideal workflow depends on the complexity you need. I've tried to write about what kind of requirements cause what kind of workflow in http://docs.gitlab.com/ee/workflow/gitlab_flow.html

What do you think?

8
jupp0r 8 hours ago 0 replies      
Those are general development models and not specific to git.
9
kevinSuttle 9 hours ago 1 reply      
Was this updated recently? This has been up for awhile.
16
Systemd v230 kills background processes after user logs out, breaks screen, tmux debian.org
73 points by polemic  2 hours ago   50 comments top 13
1
kazinator 54 minutes ago 1 reply      
Several years ago I was developing a "robust serial console over USB" feature for Linux. This feature allows you to have a console on, say, /dev/ttyUSB0. The device doesn't have to be plugged in. When you plug in the USB-serial dongle, you have a console. You can unplug your serial dongle right in the middle of a session, plug in one that uses a different chip, and your session is intact! Just "Ctrl-L" to refresh your Vim window where you were editing and there you are. The /dev/ttyUSB0 did actually disappear when unplugged and did re-appear on re-insertion, but the TTY session was isolated from that, held in a suspended state.

Chopping apart the USB-serial and TTY code in the kernel was, relatively speaking, a piece of cake; but systemd threw up a wall, ironically! I got good help from the mailing list:

https://lists.freedesktop.org/archives/systemd-devel/2013-Ma...

At first I was stumped: everything should have worked, but what the heck was killing my login sessions when I unplugged the device? I went over the code to make absolutely sure nothing would be getting a hangup signal from the TTY or the like. Through kernel printk messages, I traced it to systemd, slapping my forehead. Basically, even though the tty session was being kept intact, it was the fact that the USB device went away that made systemd kill the session.

Once I solved the systemd issues, everything worked great.

systemd: it just likes to kill things, at least in its default configuration.

2
madmax96 1 hour ago 2 replies      
This violates the rule of least surprise. Honestly, it looks like the systemd maintainers operated without regard to the community at all. There's probably going to be some commenter who says "it's only surprising to you", but as evidenced by the email, it's surprising to a lot of users.

Does anyone know if there's any precedent for a UNIX init system behaving like this? To my knowledge there isn't.

3
dijit 1 hour ago 4 replies      
This came up about a week ago; I questioned why it was a useful feature, and people were quick to bark "security, if you don't like it you should be writing service/unit files for your programs."

To me, I can't think of a single other operating system that works this way, even Windows. But I'm sure the pro-systemd supporters will "correct" my thinking.

Additional: "You can just disable it, don't worry about the defaults" will be another argument that comes up; it seems to be a nice way of stopping discussion... it's the same thing people espoused when binary logging/journalctl was enforced.

4
TheGuyWhoCodes 1 hour ago 1 reply      
WOW. That will break/surprise a lot of people unless the distros disable it at compile time. I use tmux regularly to start long running jobs.

On the upside you can use loginctl with lingering to keep the processes after the user logs out.

5
gdamjan1 21 minutes ago 0 replies      
This is a good feature, and a good default. Processes in the session-xx.scope should be stopped when the session ends.

There's an easy solution for processes that don't need to be stopped, too.

6
mcguire 42 minutes ago 0 replies      
The response to the bug report:

Hello, You should quote the full changelog and not just the part that is 'bad' in your mind.

> systemd-logind will now by default terminate user processes that are part of the user session scope unit (session-XX.scope) when the user logs out. This behavior is controlled by the KillUserProcesses= setting in logind.conf, and the previous default of "no" is now changed to "yes".

For Debian it would be enough to set this to "no" again with the --without-kill-user-processes option to "configure".

> This means that user sessions will be properly cleaned up after, but additional steps are necessary to allow intentionally long-running processes to survive logout.

Here comes the important part. Seems like the systemd devs are working on a way to allow intentionally long-running processes in a specific user scope.

And here is another way for allowing these long-running processes:

> While the user is logged in at least once, user@.service is running, and any service that should survive the end of any individual login session can be started at a user service or scope using systemd-run. systemd-run(1) man page has been extended with an example which shows how to run screen in a scope unit underneath user@.service. The same command works for tmux.

And another way for allowing long-running processes.

> After the user logs out of all sessions, user@.service will be terminated too, by default, unless the user has "lingering" enabled. To effectively allow users to run long-term tasks even if they are logged out, lingering must be enabled for them. See loginctl(1) for details. The default polkit policy was modified to allow users to set lingering for themselves without authentication.
>
> Previous defaults can be restored at compile time by the --without-kill-user-processes option to "configure"

You see? No reason to complain about.

Best regards

Christian Rebischke.

tl;dr: You can configure it not to, or you can accept that Linux is about as much a "Unix" as AIX, Irix, or HP-UX and join them in the wonderful land of the future, which suspiciously resembles the time before all the non-standard vendors died.

I'll stop now, before I make Linus on a backwards compatibility rant look like a choir boy in church.

7
tbyehl 20 minutes ago 0 replies      
So if I set KillUserProcesses=no in /etc/systemd/logind.conf, the current behavior will be preserved whenever this change finds its way downstream?
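That's my reading of the quoted changelog as well. A minimal sketch of the override, assuming the stock logind.conf layout (the changelog's other escape hatches being lingering via loginctl, or launching screen/tmux under systemd-run in the user scope):

    # /etc/systemd/logind.conf
    [Login]
    KillUserProcesses=no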
8
jeanpralo 1 hour ago 5 replies      
I still can't find a good reason why you should use systemd; this thing grows out of control and is way beyond its initial scope.

Upstart was a reasonable replacement for System V, I reckon, ...

9
vxxzy 48 minutes ago 0 replies      
What about user-specific CRON jobs? Does this prevent them from running?
10
kazinator 1 hour ago 1 reply      
Even "nohup" programs?
11
d3ckard 1 hour ago 0 replies      
Well, I guess that is a useful feature for an init system.
12
mrmondo 1 hour ago 0 replies      
To be fair, Debian Jessie is not exactly the most stunning of Debian releases over the past decade either. Between the missing and ancient packages we ended up having to move to CentOS 7 w/ epel+elrepo.
13
Scarbutt 1 hour ago 1 reply      
I wonder if the Debian folks feel regret about settling on systemd.
18
The Fusioneers, who build nuclear reactors in their back yards washingtonpost.com
147 points by Fjolsvith  12 hours ago   44 comments top 8
1
paulsutter 11 hours ago 5 replies      
These are Farnsworth Fusors[1], first developed by Philo T. Farnsworth, one of the inventors of television.

The devices use about 100,000x more energy than they produce FROM FUSION (edit, thanks), but some fusion does occur. An individual ion can be heated by 11,000 kelvins with a single electron volt, so 15,000 eV is enough to reach fusion temperature. The statistical challenge is getting ions to collide - overcoming the (repelling) Coulomb force - and fuse.
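
For anyone wanting to check that conversion: temperature and particle energy are related by T = E / k_B, with the Boltzmann constant k_B ≈ 8.617 × 10⁻⁵ eV/K, so roughly:

    T(1\,\mathrm{eV}) = \frac{1\,\mathrm{eV}}{8.617 \times 10^{-5}\,\mathrm{eV/K}} \approx 1.16 \times 10^{4}\,\mathrm{K}

    T(15\,\mathrm{keV}) \approx 1.7 \times 10^{8}\,\mathrm{K}

which matches the "roughly 11,000 kelvins per electron volt" figure above.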

There are discussion groups online for this topic [2] and there's even a high school in the Seattle area that has a fusor [3].

[1] https://en.wikipedia.org/wiki/Fusor

[2] http://www.fusor.net/board/

[3] http://www.industrytap.com/overview-polywell/31940

2
snarfy 11 hours ago 4 replies      
I have a hack project like this going in my garage.

I'm trying to build a resonant-tuned polywell device. The tl;dr is polywell + Tesla coil power source + microwave oven = fusion? I have no idea if it will work, but it's something I wanted to try.

The original polywell is a steady-state device, whereas this is meant to create a dynamic system in tune with the power source.

https://en.wikipedia.org/wiki/Polywell

http://2.bp.blogspot.com/-3BPtWiGxd3c/Vj7ecWEoXuI/AAAAAAAAGc...

3
Aelinsaar 11 hours ago 2 replies      
Impressive people, but I don't feel that I got a sense of what they were actually doing. I understand that it's a challenge for a writer who probably doesn't understand the subject well to talk to the average reader who doesn't understand it at all. Still, this feels like a wasted opportunity to teach, and to inform others with more knowledge about what these interesting people are doing.
4
andrey_utkin 10 hours ago 3 replies      
This list wouldn't be complete without David Hahn, aka the "Nuclear Boy Scout". https://en.wikipedia.org/wiki/David_Hahn
5
Animats 7 hours ago 0 replies      
Lockheed-Martin's Skunk Works is trying to build a useful fusion reactor using a somewhat similar approach.[1] They announced this a few years ago, and last fall, their people gave a talk at the Princeton Plasma Physics Laboratory. (There's video, but it won't play in Firefox on Linux.[2])

It's striking that a very capable organization is working on this.

[1] http://www.pppl.gov/events/colloquium-lockheed-martin-compac...
[2] https://mediacentral.princeton.edu/media/Thursday+Colloquium...

6
nickpsecurity 1 hour ago 0 replies      
Let's not forget Doug Coulter:

http://sploid.gizmodo.com/this-chain-smoking-gun-loving-guy-...

He has a forum dedicated to all kinds of stuff. He occasionally stopped by Schneier's blog to deliver insightful posts. I particularly liked his calling out the Anarchist Cookbook's many ways of killing people... that try to do what it says. ;)

7
riprowan 11 hours ago 2 replies      
Nice bios. Would rather read about what they're building.
8
cm3 7 hours ago 1 reply      
While we're at it, are safety considerations the only problem that prevented nuclear powered automotive engines? I've always wondered why.
19
PDE-Based Image Compression uni-saarland.de
49 points by ingve  6 hours ago   4 comments top 2
1
defen 2 hours ago 0 replies      
Is this technique related to compressed sensing?
2
street 4 hours ago 2 replies      
Comparing to JPEG seems somewhat dishonest (or thoughtless), considering the large single-color areas in some of the images. PNG and other algorithms not designed for detailed pictures would probably do better.
20
Could a neuroscientist understand a microprocessor? biorxiv.org
84 points by bigdatabigdata  6 hours ago   18 comments top 7
1
gumby 3 hours ago 3 replies      
This title sounds like an homage to Lazebnik's famous essay "Can a biologist fix a radio?" (http://www.ncbi.nlm.nih.gov/pubmed/12242150)
2
hyperion2010 33 minutes ago 0 replies      
One of the major outstanding questions I have as a neuroscientist is whether the classic experimental approaches used here can ever get us to the 'understanding' that is needed to build something that looks like a brain. I have been leaning toward the thought that we are likely to make faster progress by letting the synthetic biologists have a shot at it, even if it means we will be stumbling around in the dark if we stray even slightly from the steps they learn to take.
3
dclowd9901 2 hours ago 5 replies      
I assume there are some neuroscientists here. I was trying to imagine how the brain thinks, and how it comes to conclusions and taps into a breadth of information so quickly.

One way I tried to conceive of it was that when the neurons in your brain fire, they compose patterns. These patterns -- the order and timing of neurons firing -- might be likened to something like a hash table, wherein you represent data as a serialized pattern.

For instance, when I think of a dog, my brain fires some base neurons that are associated with the size of a normal dog, and some of its most basic attributes: 4 legs, fur. These could also be the same regions of the brain that would fire when I think of a cat, or a raccoon.

Does this in any way represent how the brain actually works?

4
return0 3 hours ago 0 replies      
What a fun approach. It's true that traditional neuroscience methods are too crude to give insight into brain function and results are often over-interpreted. Neuroscientists know that, however [1], and that's why attempts to simulate large circuits like the Blue Brain project or the simulated cat model produced nothing more than "oscillation statistics". Despite their crudeness, there are cases where circuits (like the amygdala) were causally linked to behavior by early electrophysiology.

There is hope, however, as there are now optical and molecular methods that make it possible to observe, activate, and inactivate individual neurons, which allows making causal inferences [2].

1: http://compneuro.uwaterloo.ca/files/publications/eliasmith.2...

2: https://www.sciencedaily.com/releases/2015/05/150528142815.h...

5
simonster 29 minutes ago 0 replies      
This is an interesting idea, and the paper is pretty well thought out. But I think one source of information not sufficiently explored is anatomy, which would help a great deal with the microprocessor, although it seems to help less with the brain. If you have the connectivity of the entire microprocessor (as the authors have determined using microscopy), then you can probably determine that there are recurring motifs. If you can figure out how a single transistor works, then you can figure out the truth tables for the recurring motifs. That takes care of most of Fig. 2. The only question that remains is if you could figure out the larger scale organization.

Anatomy also helps with the brain, but not nearly as much. People are still trying to figure out how the nematode C. elegans does its relatively simple thing even though the full connectome of its 302 neurons was published 30 years ago. But in larger brains, the fact that neurons are clustered according to what they do and projections are clustered according to organization in the places that they project from provides at least some level of knowledge. We are not merely applying Granger causality willy-nilly to random neural activity. We know what's connected to what (at a gross level) and in 6-layer cortex we even have an idea of the hierarchy based on the layers in which projections terminate (which is how Felleman and Van Essen got Fig. 13b in the paper).

OTOH, I think our failure to understand many neural networks at a conceptual level is quite disturbing, and perhaps a sign that the kind of conceptual understanding we seek will be forever beyond our reach. The authors mention this toward the end of the paper, although I think they overstate our understanding of image classification networks; I've never seen a satisfying high-level conceptual description of how ImageNet classification networks actually see. One possibility is that we simply don't have the right concepts or tools to form this kind of high-level description. Another possibility is that there simply is no satisfying high-level way to describe how these networks work; there are only the unit activation functions, connectivity, weights, training data, and learning rule. We can find some insights, e.g., we can map units' receptive fields and determine the degree to which different layers of the network are invariant to various transformations, but something with the explanatory power of the CPU processor architecture diagram in Fig. 2a may very well not exist.

I hope that the brain's anatomical constraints provide some escape from this fate. Unlike most convolutional neural networks, the brain has no true fully connected layers, and this may serve to enforce more structure. We know that there is meaningful organization at many different scales well beyond early visual areas. At the highest levels of the visual system, we know that patches of cortex can be individuated by their preferences for different types of objects, and similar organization seems to be present even into the "cognitive" areas in the frontal lobe. It remains to be seen whether it's possible to put together a coherent description of the function of these different modules and how they work together to produce behavior, or whether these modules don't turn out to be modules at all.

6
internaut 3 hours ago 0 replies      
This really is a great question.

My guess would be that they might make progress in the direction of physics but not in the direction of higher order abstractions further up the stack. Those would just be interpreted as rules of the universe or background noise.

7
lettergram 3 hours ago 1 reply      
My wife (a neuroscientist) does...
21
A Two Month Debugging Story inburke.com
69 points by craigkerstiens  8 hours ago   38 comments top 15
1
rraval 7 hours ago 1 reply      
> To ensure each test has a clean slate, we clear the database between each test.

The proposed solution to manually nuke the database state seems crazy to me. Some alternatives (a rough sketch of the first one follows the links below):

1. Run the entire test in a transaction, do flushes and assert as normal. At the end, ROLLBACK instead of COMMIT and now you have a pristine database again. [1]

2. Set up pristine DB state once and then use `CREATE DATABASE ... WITH TEMPLATE ...` to create a temporary database. Not sure what the perf hit is, but it's probably worth trying. [2]

[1] http://alextechrants.blogspot.ca/2013/08/unit-testing-sqlalc...

[2] https://www.postgresql.org/docs/9.4/static/manage-ag-templat... - "CREATE DATABASE actually works by copying an existing database. By default, it copies the standard system database named template1."
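
A rough sketch of what option 1 could look like using node-postgres (the article doesn't say which stack the tests use, so the `runTest` helper and the table names here are made up purely for illustration):

    import { Client } from "pg";

    // Wrap a test body in a transaction and always roll it back, so every
    // test starts from a pristine database. A real suite would reuse a pooled
    // connection and bind its ORM session to the same transaction.
    async function runTest(body: (db: Client) => Promise<void>): Promise<void> {
      const db = new Client(); // connection settings come from PG* env vars
      await db.connect();
      try {
        await db.query("BEGIN");
        await body(db); // inserts, flushes, and asserts as normal
      } finally {
        await db.query("ROLLBACK"); // discard everything the test wrote
        await db.end();
      }
    }

    // Usage:
    // await runTest(async (db) => {
    //   await db.query("INSERT INTO users (name) VALUES ($1)", ["alice"]);
    //   const { rows } = await db.query("SELECT count(*) FROM users");
    //   // assert on rows[0].count here
    // });

Option 2 is a single statement per test run, e.g. `CREATE DATABASE test_run TEMPLATE test_pristine;`, at the cost of a file-level copy of the template database.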

2
joshribakoff 4 hours ago 1 reply      
I'm not saying that e2e testing is without value - it can be useful... but it's just not worth it, I've found. I write unit tests only, which don't touch the database. My litmus test is: am I testing an "algorithm" -- something whose outputs are based solely on its inputs?

If you insist on tests that touch the DB, the speed can be improved by chaining tests instead of resetting the DB every time (this has its trade-offs).

If I need to test a piece of code that generates a dynamic SQL statement, I would simply assert that the correct SQL is generated. I would not need to actually execute the SQL to see that it is correct. The point is to test that my logic generated the correct SQL, not to test that my database vendor implements SQL correctly. The latter would just be caught in manual testing. I like the BDD school of thought that you are writing specs, not tests.
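
A minimal sketch of that style of test - the `buildSelect` function and the SQL it emits are invented here purely to illustrate asserting on generated SQL without ever executing it:

    import * as assert from "assert";

    // Hypothetical query builder under test: turns a filter object into SQL.
    function buildSelect(table: string, where: Record<string, unknown>): string {
      const clauses = Object.keys(where).map((col, i) => `${col} = $${i + 1}`);
      return `SELECT * FROM ${table} WHERE ${clauses.join(" AND ")}`;
    }

    // Unit test: assert on the generated text, never touch a database.
    assert.strictEqual(
      buildSelect("users", { email: "a@b.com", active: true }),
      "SELECT * FROM users WHERE email = $1 AND active = $2"
    );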

3
mattbee 8 hours ago 2 replies      
Oof - seems like you're running on a managed platform that can't be debugged in a reasonable level of detail?

If it were me I'd want to take the problem somewhere I _could_ get root access and debug it properly. So I'd be interested to know what value this (unnamed) CI platform provides to make it worth a wild goose chase.

4
overgard 5 hours ago 1 reply      
At risk of invoking the wrath of test fanatics -- aren't tests supposed to save time? If your tests are both unreliable and so hard to debug that you haven't been able to fix them for two months, I'd think you'd be better off just turning them off and figuring out a better testing strategy.
5
voiper1 8 hours ago 1 reply      
First thought that came to mind was to just blow away the database and create a new one, see how long that takes.

I found Waterline's load time involved a LOT of things I hadn't expected, and it took a long time... I'm just using knex now. Every time I try for magical solutions, I keep regretting it and end up using less magic.

6
jwatte 5 hours ago 1 reply      
What we do when we detect a failure is freeze the test runner instance, and allocate the same failed test to another runner. If the second runner succeeds, we okay the build, but we put the test and the frozen runner in a queue for investigation, and some engineer will be responsible for diagnosing and fixing this intermittent test. This queue is worked every day on a rotating schedule.

We run our own CI/CD infrastructure, on top of virtualized infrastructure; if you use a third party that doesn't give you that, you might want to look for alternatives.

7
loftsy 7 hours ago 1 reply      
I think best practice here is to run the whole test in a transaction and then roll back the transaction at the end. This is the approach Django uses.
8
switchbak 7 hours ago 0 replies      
I usually try to minimize the number of tests that rely on this (shared) DB state. So I'll focus more on unit and integration tests. Functional tests still do have value though. See: http://xunitpatterns.com/Testing%20With%20Databases.html

Where possible, I like to have data that can exist independent of the other data on the system. I can make a separate 'tenant' for that test - and just ensure it's wiped before I proceed. Sort of a multi-tenant approach. Works great. I don't bother with a 'teardown', but do any cleanup before the test runs. I also ensure the tests are written to not make assumptions about global state.

Instead of dropping all constraints as the article suggested (that sounds hacky), I use ON DELETE CASCADE constraints. If I miss some, the tests fail. Seems easy enough to maintain.

With the above approach, DB testing is approachable and still pretty quick.

9
karmakaze 7 hours ago 1 reply      
> We could draw a dependency graph between our 60 tables and issue the DELETEs in the correct order

If the above works, automate it.
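
A sketch of what automating it might look like - derive a child-first delete order from the foreign-key edges and emit the DELETEs in that order (the `deps` map and table names are invented; a real version would read the FK graph from information_schema, and genuinely circular references would still need deferred constraints):

    // deps maps each table to the tables it references via foreign keys.
    const deps: Record<string, string[]> = {
      order_items: ["orders", "products"],
      orders: ["users"],
      users: [],
      products: [],
    };

    function deleteOrder(graph: Record<string, string[]>): string[] {
      const ordered: string[] = [];
      const visited = new Set<string>();

      // Post-order DFS over the "references" edges pushes parents before the
      // children that point at them; reversing gives a child-first order.
      function visit(table: string): void {
        if (visited.has(table)) return;
        visited.add(table);
        for (const parent of graph[table] ?? []) visit(parent);
        ordered.push(table);
      }

      Object.keys(graph).forEach(visit);
      return ordered.reverse();
    }

    const statements = deleteOrder(deps).map((t) => `DELETE FROM ${t};`);
    // DELETE FROM order_items; DELETE FROM products; DELETE FROM orders; DELETE FROM users;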

10
utternerd 8 hours ago 1 reply      
disabled autovacuum by default

I realize this wasn't their solution, but it's worth noting that, generally speaking, disabling autovacuum isn't recommended. Even if you do, vacuum jobs will still be forced to run to prevent transaction ID wraparound.

* https://www.postgresql.org/docs/9.5/static/routine-vacuuming...

11
markbnj 8 hours ago 0 replies      
I always like to read stories of other teams' debugging adventures. One thing that occurred to me as I got toward the end of the OP's (ongoing) narrative was that perhaps it's time to fall back and consider other ways of setting up the data environment for each test. If the data that each test is dependent on is a small enough subset, perhaps it makes sense to create a separate database for each test or class of tests, which can just be dropped and recreated before the test run?
12
gerbilly 6 hours ago 1 reply      
With Hibernate we used to connect it to an in-memory DB just for the tests and to a real DB for production.

It was easy and quick to drop schemas.

13
meshko 4 hours ago 0 replies      
You can just do all the deletes in the same transaction and then the constraints wouldn't need to be disabled.
14
polskibus 6 hours ago 1 reply      
Have you thought about starting up multiple replicas of your database in parallel and then pointing each test suite to a different one? This way you may be able to parallelize them very well.
15
fapjacks 5 hours ago 0 replies      
They are using CircleCI, btw. I really like Circle, but they could sure be more helpful with the SSH access, as you mention. Also a personal nitpick, they (still) have no way to test a circle.yml file except for pushing a change and seeing if it works. Anybody that's used the service will know exactly what I'm talking about. You end up pushing four times just to get the syntax right. They know about that (iirc there was a support forum post requesting the feature some years ago), too. But besides that, I heartily recommend Circle.
22
A Rock-Sorting Robot wired.com
36 points by electic  7 hours ago   17 comments top 5
1
rm_-rf_slash 5 hours ago 3 replies      
I wonder how useful this technology could be to zero-sort recycling. As much of a hippie as I am, I don't think recycling can be solved by green bins and awareness campaigns, but instead by the simplicity of throwing everything into one bin that gets sorted when it reaches the processing plant.

It would also be helpful if this could sort organic material for use in compost and sell it. Maybe even brand it. How much do you think people would pay to use compost from Beverly Hills celebrities in their garden?

2
TheSpiceIsLife 57 minutes ago 0 replies      
I need this to help me deal with this problem at work https://imgur.com/HM8euP8

The laser cutter is capable of more product output than I have time to sort and stack onto pallets. There's another 400 parts behind me waiting to be broken free from their retaining tabs.

Each item is etched with a part ID. Some sort of computer vision + robot should be able to do the job for me. I have the tools to put something like that together.

Any leads?

4
greut 4 hours ago 0 replies      
It looks like (some parts of) the source code of this project is there: https://github.com/allesblinkt/riverbed-vision
5
cwkoss 3 hours ago 1 reply      
Wired sucks... adblock wall
23
Show HN: WebGL Minecraft-like scripting environment for teaching programming webblocks.uk
10 points by cjdell  3 hours ago   3 comments top
1
w-ll 2 hours ago 1 reply      
Why no mouse input?
24
Faster DOM annevankesteren.nl
104 points by plurby  12 hours ago   43 comments top 12
1
c-smile 6 hours ago 1 reply      
We all know that Element.innerHTML = "..." can be significantly faster than a series of individual DOM-mutating calls. Element.innerHTML = ... gets executed as a single transaction - relayout happens only once, at the end of it. And there is no overhead from JS-native bridging or from function calls in JS in general.

But the need for "transactioned" DOM updates is not just about speed. There are other cases when you need this. In particular:

1. Massive DOM updates (that's your case) can be done with something like Element.update( function mutator(ctx) ). While executing the mutator callback, Element.update() locks/delays any screen/layout updates. (A rough userland approximation of this idea is sketched below.)

2. That Element.update( function mutator(ctx) ) can be used with enhanced transitions. Consider something like transition: blend linear 400ms; in CSS. While executing the update, the browser takes snapshots of the initial and final DOM states and blends those two states captured as bitmaps.

3. Transactioned updates of contenteditable. In this case Element.update() groups all changes made by the mutator into a single undoable operation. This is actually a big deal in WYSIWYG online editors.

There are other cases of course, but these three functionalities are what I've implemented in my Sciter Engine ( http://sciter.com ) already - proven to be useful.
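
Not the Sciter API - just a userland approximation of the batching idea in point 1, assuming only standard DOM calls: mutate a detached clone so intermediate changes never trigger layout, then swap the finished subtree in with one operation (note this drops event listeners attached to the old subtree):

    function update(el: Element, mutator: (scratch: Element) => void): void {
      const scratch = el.cloneNode(true) as Element; // work off-document
      mutator(scratch);                              // any number of mutations
      el.parentNode!.replaceChild(scratch, el);      // one live-DOM change
    }

    // Usage: a thousand appends, but the live document is touched only once.
    update(document.getElementById("list")!, (scratch) => {
      for (let i = 0; i < 1000; i++) {
        const li = document.createElement("li");
        li.textContent = "item " + i;
        scratch.appendChild(li);
      }
    });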

2
CharlesW 7 hours ago 5 replies      
> That way, you only do the IDL-dance once and the browser then manipulates the tree in C++ with the many operations you fed it.

As a beginning web developer, I remember being surprised that there was no way for me to group DOM manipulations into SQL-like transactions, or even an equivalent to HyperCard's lockScreen.

I'm finding it a little tough to tell whether the author is talking about putting the browser or the web developer in control of grouping DOM updates.

3
tixzdk 8 hours ago 2 replies      
I'll leave this module here for those of you who haven't heard of it: https://github.com/patrick-steele-idem/morphdom

It diffs the real DOM instead of a VDOM.

4
m1el 5 hours ago 2 replies      
I was playing with the idea of using JS objects as a template engine: http://m1el.github.io/jsonht.htm, and I'm pretty happy with it.

I think DOM is pretty damn fast in modern browsers. `innerHTML` can be slower than creating elements from a JS object!

Here's the script I'm using for timing: https://gist.github.com/m1el/b28625b3b9261f0fab819e866133e49...

My results in Chrome are: dom: 2663.580ms, innerHTML: 10999.449ms
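
A sketch of the general idea (the node shape below is my own invention, not m1el's jsonht format): build the subtree from a plain JS object with createElement/appendChild, skipping the HTML parser, and attach it to the document once:

    interface NodeSpec {
      tag: string;
      attrs?: Record<string, string>;
      children?: (NodeSpec | string)[];
    }

    function build(spec: NodeSpec | string): Node {
      if (typeof spec === "string") return document.createTextNode(spec);
      const el = document.createElement(spec.tag);
      for (const [name, value] of Object.entries(spec.attrs ?? {})) {
        el.setAttribute(name, value);
      }
      for (const child of spec.children ?? []) el.appendChild(build(child));
      return el;
    }

    // One appendChild of a fully built subtree.
    document.body.appendChild(build({
      tag: "ul",
      attrs: { class: "items" },
      children: [{ tag: "li", children: ["first"] }, { tag: "li", children: ["second"] }],
    }));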

5
maxbrunsfeld 7 hours ago 2 replies      
I'm curious about the source of the slowness of the IDL operations. Is it specific to the DOM, or is he referring to overhead that exists for any bindings between JS and C++?
6
bzbarsky 7 hours ago 0 replies      
I wrote up a response to this but haven't found a good place to put it; might as well post it here.

1) I think the idea about such an API encouraging good practice is very much correct, in the sense that it makes it harder to interleave DOM modification and layout flushes. That might make it worth doing on its own.

2) DOM APIs are not necessarily _that_ cheap in and of themselves (though they are compared to layout flushes). Setting the various dirty flags can actually take a while, because in practice just marking everything dirty on DOM mutation is too expensive, so in practice UAs limit the dirty marking to varying degrees. This trades off time inside the DOM API call (figuring out what needs to be dirtied) for a faster relayout later. There is some unpredictability across browser engines in terms of which APIs mark what dirty and how long it takes them to figure that out.

3) The IDL cost is pretty small in modern browsers, if we mean the fixed overhead of going from JS to C++ and verifying things like the "this" value being of the right type. When I say "pretty small", I mean order of 10-40 machine instructions at the most. On a desktop machine, that means the overhead of a thousand DOM API calls is no more than 13 microseconds. On mobile, it's going to be somewhat slower (lower clocks, if nothing else, smaller caches, etc). Measurement would obviously be useful.

There is the non-fixed overhead of dealing with the arguments (e.g. going from whatever string representation your JS impl uses to whatever representation your layout engine uses, interning strings, copying strings, ensuring that provided objects, if any, are the right type, etc). This may not change much between the two approaches, obviously, since all the strings that cross the boundary still need to cross it in the end. There will be a bit less work in terms of object arguments, sort of. But....

4) The cost of dealing with a JS object and getting properties off it is _huge_. Last I checked in Gecko a JS_GetProperty equivalent will take 3x as long as a call from JS into C++. And it's much worse than that in Chrome. Most of the tricks JITs use to optimize stuff go out the window with this sort of access and you end up taking the slowest slow paths in the JS engine. What this means in practice is that foo.bar("x", "y", "z") will generally be much faster than foo.bar({arg1: "x", arg2: "y", arg3: "z"}). This means that the obvious encoding of the new DOM as some sort of JS object graph that then gets reified is actually likely to be much slower than a bunch of DOM calls right now. Now it's possible that we could have a translation layer, written in JS such that JITs can do their magic, that desugars such an object graph into DOM API calls. But then that can be done as a library just as well as it can be done in browsers. Of course if we come up with some other way of communicating the information then we can possibly make this more efficient at the expense of it being super-ugly and totally unnatural to JS developers. Or UAs could do some sort of heroics to make calling from C++ to JS faster or something....

5) If we do provide a single big blob of instructions to the engine, it would be ideal if processing of that blob were side-effect free. Or at least if it were guaranteed that processing it cannot mutate the blob. That would simplify both specification and implementation (in the sense of not having to spell out a specific processing algorithm, allowing parallelized implementation, etc). Instructions as object graph obviously fail hard here.

Summary: I think this API could help with the silly cases by preventing layout interleaving. I don't think this API would make the non-silly cases any faster and might well make them slower or uglier or both. I think that libraries can already experiment with offering this sort of API, desugaring it to existing DOM manipulation, and see what the performance looks like and whether there is uptake.

7
amelius 2 hours ago 1 reply      
Is there anything in a browser that ensures that simple updates with local effect have O(1) layout cost?

Or do developers have to live with the uncertainty that simple updates may have a higher computational complexity?

8
mmastrac 8 hours ago 0 replies      
I think this is a natural evolution of where DOM has been headed. Batching requests is entirely within the spirit of the original DOM.
9
lucio 7 hours ago 0 replies      
Something like suspendLayout(), resumeLayout(). I guess this GUI technique is like... 50 years old?
10
beders 2 hours ago 0 replies      
Build faster browsers. Don't be afraid of dogma.

There were times before there was a DOM (you might not have been born yet) and there can be times after DOM :)

11
EGreg 1 hour ago 0 replies      
Yeah, that's why in our framework we go the FastDOM / GreenSock route and encourage writing mutation code explicitly for when the state changes.

However to be fair the point of React, vdom, mithril etc. is to allow the developers to write a mapping from state to view and let the mutations be calculated automatically. They claim it's easier to just write the logic of the mapping instead of all possible transitions. I don't buy it, but it has grown to a huge community.

12
krallja 6 hours ago 2 replies      
> backed by C++ (ne Rust)

"ne" means "originally called" (usually someone's maiden name). I don't think C++ used to be called Rust.

25
The traditional FPGA market is abandoned xess.com
52 points by jamesbowman  9 hours ago   22 comments top 6
1
lvh 8 hours ago 1 reply      
I'm not sure I agree with the idea that Intel is going to make these folks work on something else because FPGAs don't grow that much. That makes little sense; Intel isn't really hurting for more employees, and the FPGA engineers can't necessarily be made to be productive on the higher-growth sections cited.

Instead, it makes a lot more sense that Intel is going to try to make that market more lucrative and larger. As a major chip manufacturer, they're in a great position to ship FPGAs to tons of new customers. Chicken-and-egg problems (you don't FPGA until you need to because it's not available) have made FPGAs a niche element. However, OpenPOWER/CAPI have demonstrated that you can get huge benefits from slapping FPGAs onto existing general-purpose compute.

So, TL;DR: I don't think it makes sense to assume Intel is just going to let that market stay what it is. Instead, they think they can make that market better, and do more than Altera/Xilinx can do individually. In that light, a purchase makes perfect sense.

(Disclaimer: I work for a company involved in OpenPOWER/OpenCompute and has shipped hardware that does this.)

2
dmytroi 3 hours ago 0 replies      
Ehm, FPGAs/CPLDs are everywhere. It's hard to say, but my educated guess is that the hardware market consumes similar amounts of SoC-like and FPGA/CPLD chips - look, even the Pebble smartwatch uses a Lattice iCE40 FPGA, and MacBooks also usually have a few FPGAs here and there. FPGAs/CPLDs are essential for implementing "glue logic", which does what it says - glues different pieces of hardware together. Of course you can do the "glue" in software, but then your customers will enjoy poor battery life and random glitches here and there. And nobody is going to spend big money (it actually depends - sometimes it's $1M+, sometimes you can get away with $30-50k) running their own ASICs when you can just buy jellybean FPGAs.

Indeed, the phone/desktop market might move to more of a one-chip-for-everything solution, but even then we need glue-ish logic to control things like screen-backlight DC-DC converters, charging ICs, etc., which is much more easily done with FPGA/CPLD-like devices. On the other hand, FPGAs/CPLDs are essential in some classes of devices, for example test instruments: modern oscilloscopes usually have 3+ FPGAs in them, and companies like Keysight usually only run custom ASICs when they hit the limits of current silicon tech - their N2802A 25 GHz active probe (starting from $23,500) uses an amplifier IC made with an indium phosphide (InP) process, which is pretty far from whatever current consumer product companies are doing. You can check out the teardown of this beast here: https://www.youtube.com/watch?v=jnFZR7UsIPE

So in my opinion the FPGA/CPLD market will live long and strong; players might change, but demand is enormous. The only problem, in my opinion, is that the whole market is more B2B-like (FPGAs are usually just humble ICs inside end customers' products - you don't see "Altera inside" stickers or anything on the products themselves), so it's kinda hard to get a grasp of what's going on.

3
PaulHoule 3 hours ago 2 replies      
People are still scratching their head over this acquisition but here is my take.

Intel has failed at phones not because "x86 sux" but because Intel makes a small number of SKUs that are often different bins of the same design -- these are heavily tuned in every way. Phone chips are not so optimized in terms of process and design, but instead they are true "systems on chip" where a number of IP blocks are integrated to produce something which is optimizable by adding functional blocks.

Something FPGA based, whether it ends up in the datacenter or elsewhere, could have a highly optimized FPGA block which is field configurable, so this gives Intel something to compete with SoC vendors on their own terms.

One detail is the nature of the memory and I/O systems. FPGAs can move a huge volume of data, so unless you upgrade the paths in and out of the CPU/FPGA area, the FPGA would be starved for bandwidth.

It would take one of two things for the FPGA market to expand based on these developments.

First, if there was some "killer app" that sold a huge number of units, that would help. The trouble here is that if you sell a huge number of units, you might be better off making an ASIC functional block and integrating it into a true SoC.

The other one is that (I think) FPGA development today is constrained by the awful state of tooling. Pretty soon you will be able to pop a FPGA hybrid chip into a high-end Xeon workstation and have one hell of a development kit in terms of hardware, but without a big improvement in the tooling, very few people will be able to really make use of it.

4
kqr2 6 hours ago 1 reply      
In case the link is down:

http://webcache.googleusercontent.com/search?q=cache:http://...

Also, I believe that cheap microcontrollers have been able to replace FPGAs in some cases.

5
petra 5 hours ago 3 replies      
Or there's another alternative. The main reason it was hard to make money from the industrial etc. segment was low volume combined with a lot of support costs.

But what if

a. Xilinx built C-like tools that enabled embedded software engineers to develop easily?

b. they released those freely to some segment of the market?

c. they built an external support and IP ecosystem, either open or closed or both?

Those actions could increase margins for Xilinx, and they seem to be doing a and b.

As for the hardware, maybe the article is right. Also, recent industrial chips are using 28nm, and going beyond that is extremely expensive and might not fit the industrial scenario anyhow - maybe there's not a lot of innovation left in the industrial FPGA market.

6
payne92 8 hours ago 3 replies      
I think it gets squeezed even more with general purpose GPU computing.
26
Show HN: Chatbots made easy Build, train, and deploy bots motion.ai
13 points by bestkao  3 hours ago   2 comments top 2
1
retrodict 7 minutes ago 0 replies      
Sounds like a good addition to http://wtfismybot.tech/
2
ne0phyte 12 minutes ago 0 replies      
Would be nice if there were a couple of example bots that you could try talking to/see some info about before creating an account.
27
Goldman Sachs Dumps Numerical-Ranking System for Employees wsj.com
9 points by edtrudeau  3 hours ago   2 comments top
1
bpchaps 11 minutes ago 1 reply      
Paywall :(

I haaate these numerical ranking systems. Bank of America had one when I worked there. Managers would have meetings with each other to match their workers with ranks. Really, really long meetings. So long that my boss apologized to me after the meeting to discuss my ranking... because he left before my name came up and there was no follow-up meeting. I ended up being ranked towards the bottom, I didn't get a raise, and the bonus was meh.

When I left, they told me that my performance was great and that they wanted to give me a substantial raise, since apparently my pay was far below what it should have been. They then had to replace me on my weekend shift with four others for a while. Most of the team quit over the next year or so.

28
Diversity and Inclusion at GitHub github.com
82 points by ahhrrr  8 hours ago   142 comments top 25
1
guessmyname 5 hours ago 17 replies      
This blog post reads to me like the Pokemon motto "Gotta Catch 'Em All".

Being non-American (I'm from a South American country), it is hard for me to understand the importance given to these things associated with racism and sexism. The United States is the only place I have seen where people invest a considerable amount of time and resources just to put people from minority groups or specific genders on pedestals just to please other people. You will (probably) never see such a thing in Latin America because no one cares what color your skin is or what gender you identify with in order to get a specific job.

Why do they care if there are no "Black/African-American" in their company? Or that 10% are black women? Or that 35% of women are in a leadership role? Like who cares? Are their roles assigned based on their gender and/or skin color?

Jesus, even my European co-workers make fun of Americans for these things. Let people build their careers based on how they work and not their ethnicity or gender; otherwise you will be incentivizing more racism and more sexism.

2
johnnyg 3 hours ago 1 reply      
"There are no Black/African-American GitHubbers in management positions, which is unacceptable."

It is only unacceptable if people with the skills to be in management positions didn't get a promotion because of a non-relevant personal attribute.

Otherwise, it is acceptable and in the best long term interests of the company.

I acknowledge that America has equality-of-opportunity problems. There are strong and just historical reasons why the government has carried the torch on this issue. However, I think it's time to let the market work. Companies that place talent above non-relevant personal attributes are at a competitive advantage, as they are drawing from a larger talent pool. Letting that market run on a long enough timeline means that companies who discriminate will grow weaker, as well they should.

Github has come out and said "we're going to make sure that we meet target quotas for people of X, Y and Z non-relevant personal attributes." I think that's bad business and not the equality we need to be going for.

If you have the skill set, drive and integrity to be a strong contributor to a team, I want to work with you and make wealth with you. If you don't, I don't want to work with you - go get educated, build skills and try again or if I'm wrong about you, prove it by out competing me. This is equality of opportunity and of ideas. What Github is doing isn't, nor does it get us closer to what the idea of America promises us all.

I think that providing the resources to build skills to those whose parents didn't or couldn't provide them is a great idea.

I think shaking up our education system (because the numbers show that investing doesn't produce better results) is a great idea.

I want to be fellow citizens with educated, driven people who are so in part because society ensured it'd happen.

Lets not go too far or go in the wrong direction.

3
brbsix 5 hours ago 2 replies      
Ever since Zach Holman "left", it appears as though GitHub has been eaten away from the inside by intersectional feminism. There appear to be subversive efforts to turn tech companies into platforms for social justice. I can only imagine it must be really uncomfortable to work in this sort of environment.
4
pwim 1 hour ago 1 reply      
Previously I thought hiring diversity for diversity's sake didn't make sense, and hiring should be based solely on "merit". After all, as a company, you want to hire the best people.

However, since then, I have found there is research that indicates diverse groups can be more advantageous than homogenous groups of highly talented people [1], [2]. So from this perspective, striving for diversity makes economic sense.

[1] http://www.pnas.org/content/101/46/16385.full

[2] http://www.scientificamerican.com/article/how-diversity-make...

5
t0mbstone 4 hours ago 1 reply      
Amidst the forest of diversity demographics, I found this little tidbit:

"We have a flexible paid time-off policy and our maternity/paternity leave policies exceed the tech industrys norm. In addition, our policies do not differentiate between maternity or paternity leave."

Now THAT is cool. I really wish more companies did that. When my wife had a c-section with our son, she got 6 weeks off of work. I got nothing.

6
dijit 3 hours ago 0 replies      
Every time I hear about "inclusive" practices at GitHub it makes me grimace - not because it's not a good idea (diversity is good), but because it's rather racist, partially sexist, and it's increasingly the norm to be this way.

Equal opportunity, not equal outcome. If 4% of your applicants are British but do not pass for technical reasons (I'm using British because I am one - replace with $minority), then why should I hire them? Ideally we'd interview blind, and whoever is the best tech is the best. I'm not sure why this isn't possible in our technology culture - do you really need to be face to face with a person to assess their technical merits and ensure they are the right fit for a company?

But anyway, to me this stuff is exclusionary. I am apparently cursed to be a white male in tech, and as part of the majority I now have to work much harder for SV to acknowledge me for my merits. I am of the firm belief that if I were a black lady I would have enough support to prop me up above my current, experienced position in no time. I'd love to test this theory, honestly.

And anyway, who cares who delivers github/facebook/$website at the end of the day, hire who you think will do the job, nobody is going to feel excluded from the services you provide because you didn't meet your quota of transgender people.. and if they were, you could very easily mislead them.

edit: I know it's against etiquette to mention downvotes, but if you must downvote me please reply stating why. That is also against etiquette.

7
jbob2000 5 hours ago 3 replies      
I wish so much that I could be picky about my hiring like GitHub. I wish I could have so many qualified candidates applying that I am able to create a diverse team. But I put out job ads and get like 2 qualified responses.

How the hell are they able to find all these candidates such that they can start being picky about who they hire???

Maybe Toronto just sucks for tech talent... Argh...

8
BurningFrog 2 hours ago 1 reply      
How many republicans work at GitHub? How many evangelical christians? How do these numbers compare to the distribution in the larger population of the country?
9
jpeg_hero 4 hours ago 0 replies      
> At a high level, GitHub is 64% male worldwide and 64% white in the U.S. That said, the company has improved since the end of 2014, when it was 79% male and 21% female worldwide.

I wonder what their male/female split is in daily active users.

10
brandoncordell 5 hours ago 1 reply      
Weird, it seems like some of their team has been decidedly anti-white.

http://techcrunch.com/2016/05/26/githubs-diversity-is-just-a...

Edit: Removed Breitbart link

11
vemv 3 hours ago 0 replies      
It's like a law of nature... your organisation grows, its culture becomes braindead.

There should be more Basecamps around. I don't know what their equality policy is, but they sure have very little tolerance to bullshit - and don't mind letting you know!

12
kkirsche 2 hours ago 0 replies      
Diversity for the sake of diversity. So dumb. Promote for qualifications not junk like this
13
wozer 5 hours ago 0 replies      
Previous discussion of GitHub's diversity culture: https://news.ycombinator.com/item?id=11049067
14
Keats 3 hours ago 0 replies      
How about adding other parts of the company like HR/Sales/Marketing/etc etc to their data?
15
ghrifter 7 minutes ago 0 replies      
I am now a #GitLabMissile
16
davidcelis 6 hours ago 0 replies      
Looks like the link to the actual data[1] was posted and subsequently removed, making it unable to be submitted. Why?

[1]: https://diversity.github.com

17
Futurebot 1 hour ago 1 reply      
The goal of hiring diversity is to reach Proportional Representation; you want your workforce to resemble the demographics of the nation as a whole. If you accept that anyone, regardless of their genetic heritage, gender identity, sexual identity, skin color, etc. can do the job just as well as anyone else can, then you should be supportive of, or at least not oppose, efforts like these. If you don't think so (with regards to genetic heritage especially), there's really no discussion to be had, as that's a likely irreconcilable conflicting understanding of the science, especially at the individual level (intra-haplogroup potential ability swamps any supposed inter-haplogroup potential ability) or just plain, old-fashioned prejudice.

What are the benefits to PR? Two major ones:

- Countering structural biases in society. There has not historically been anything like a level playing field with regards to inherent attributes in this country. Many of those attributes cause people to get discriminated against, and this gets perpetuated across time. It's a "market" solution to a societal problem, which means that it can't magically counter much larger structural forces all by itself, but it can help.

- Producing better results. Many studies claim that diverse teams operate better, and potentially produce superior products and services. A major part of the reason is the introduction of diversity in perspectives, which can allow a company to understand different potential markets better and can make a company potentially more welcoming, which can improve hiring (i.e., a great candidate who might feel out of place in an environment with non-diverse demographics might decide to join if the demographics were more diverse. This can become a virtuous cycle.) Whether this is true or not, or how much it's true, is not that relevant. It certainly can't hurt.

There are valid criticisms of hiring diversity methods, but I'm not seeing many of them in the comments here:

- Can PR be achieved? Getting PR at a place with 10 employees is going to be very tough, so coming down on companies this small who can't do so should not be done, IMO. 100? Easier. 1000. Much easier. The larger the organization is, the more possible it becomes. So no, it's not always possible, and some perhaps well-intentioned activists beat up on companies they shouldn't, but that does not mean it should not be a goal that is pursued as a general principle for hiring.

- Do the people who get hired necessarily represent who those hiring them think they represent? Not always, no. If you're hiring a perfect rainbow coalition of identities, but all of them come from wealthy backgrounds or elite schools, for example, you may be getting less diversity than you think. It's not tokenism, but it's definitely not PR. Many Left criticisms of our current version of elite liberal meritocratic culture center around this. The antidote is to include things like educational background, economic class/upbringing circumstances, etc. when considering a candidate vis a vis diverse hiring; that's what intersectionality is all about. You do not consider advantaging/disadvantaging attributes in isolation. Many companies have done a very poor job of the latter, but in this case, GH has explicitly said they are more open to non-traditionally educated candidates.

PR and hiring diversity is not going to fix all society's problems; for that, we need major reforms like free higher ed, federal, rather than local, funding of K-12 education, GBI, UHC, ending the drug war and a dozen other things. Can it help, especially by helping push American culture towards greater acceptance of diversity and pluralism? Yes, so companies should strive for it.

18
petsormeat 4 hours ago 0 replies      
Anybody besides me impressed that they released data about employee age ranges?
19
balls187 1 hour ago 0 replies      
It chafes me when I see companies talk about diversity and inclusion (ignore that the blog post is written by a young white guy).

You want to use data to make Github more diverse--great. Just shut up about it and make it happen. Posting your diversity data like this, like it's some badge of honor, is ridiculous.

Someone wants to know how diverse you are? Great, show them the data, and tell them you're working on it.

Just please stop it with the public discourse.

You know what I don't see--companies like MSFT, and Google, and Facebook, and Github who talk about their inclusion showing up to the poorer parts of my neighborhood trying to address diversity issues there.

20
twunacceptable 7 hours ago 5 replies      
"There are no Black/African-American GitHubbers in management positions, which is unacceptable."

Why is it unacceptable? Just by virtue of the number of different kinds of people -- race, gender, etc. -- there's no way to have 'one of each' for every position for maximum diversity. This sort of thing is so creepy because it treats people like collectible figures ("Oh yeah? Well I added a black transgendered person to our team today! I have way more diversity points than you!") instead of actual people you judge on their merits. How is a white person supposed to feel about applying to a GitHub management position after reading "It's unacceptable we don't have a black manager"?

I'm not white and I don't want to be treated as some statistic you can show off on your diversity report card. Thats really demeaning and insulting.

21
throwaway934825 3 hours ago 0 replies      
The last time this topic came up was around the same time that open letter from maintainers of GitHub repositories was making its rounds.

If I recall correctly, GitHub responded to that open letter by implementing the lowest hanging fruit - bug templates and +1s - and then went radio silent. Have there been meaningful product updates since?

Now we know what happened: they minimally addressed the PR needs of their business, and then went back to doing nothing interesting to advance their products.

I was on the fence before, but now I'm certain: GitLab is the superior experience.

22
yuhong 3 hours ago 0 replies      
I have been thinking about the problems with anti-discrimination laws. Not all kinds of discrimination leave evidence. I have a feeling that they should be restricted to certain job categories. Similarly, sexual harassment leaves evidence more often, but this doesn't mean they are worth the costs. I have been thinking of this Ask HN for example: https://news.ycombinator.com/item?id=11666857
23
powertower 5 hours ago 1 reply      
To me every example of diversity basically has been the concept that "there are too many white people in the room." And to a smaller degree that "there are too many straight men...".

I've never seen this applied to non-whites, at least not to this degree. In fact, 100% black-demographic teams (or businesses or organisations) are oftentimes praised for being "authentically black".

IMO, the way I see it is once a productive homogeneous group (which by definition is composed of members that are able to agree with each other and engage in actual progress rather than in-fighting) puts in the effort and spends the time building the infrastructure/business/etc and makes it successful - then other groups see this as an opportunity to get something out of it... "Diversity for the sake of diversity" only comes in at the end, never at the beginning.

It is also odd that when someone says diversity is our strength, they never really share the actual details of that strength.

To me this is a turning point for a company that goes from being work oriented to becoming a race/gender oriented cesspool where everything is about your color, genitalia, and gender.

24
exstudent2 2 hours ago 0 replies      
As a Hispanic, I would never work at Github because I find this quest for "diversity" to be extremely patronizing and objectifying. As a male, I'd never work at Github because their policies are sexist.
25
ps4fanboy 1 hour ago 1 reply      
I don't want to live or work in a society where employees at every company reflect the demographics of that society and where everyone gets equal pay; that feels less and less like diversity and more and more like conformity. It's the road to socialism. If all of this diversity were valuable, the market would reward companies accordingly.
29
Classic HN: BitC isn't going to work in its current form (2012) ycombinator.com
42 points by scott_s  5 hours ago   6 comments top 3
1
scott_s 4 hours ago 1 reply      
This is an experiment in submitting "classic" threads. We already have a culture of periodically resubmitting old stories, but in this case, I think that the combination of the old thread and the story is what is so interesting.

The original submission is, as far as I can tell, the only post-mortem on BitC, which was a research effort for a robust systems programming language. See https://en.wikipedia.org/wiki/BitC for more. The ensuing HN thread has even more insight, as other language designers talk about their own experiences in trying to make a better systems language. Particularly interesting is the comment from pcwalton, talking about similar difficulties with the beginning days of Rust, and how they were overcoming them.

2
nickpsecurity 4 hours ago 1 reply      
SPARK is the only one still going strong in the verifiable systems programming space. I had hopes for ATS and BitC. There are Rust tie-ins, but it's not a verifiable language. There is ongoing work to produce the necessary models to make it one, but it's quite complex. It's fine, though, as languages like it, Go, and Wirth's were really about raising the baseline safety/security of average code. Way better goal. Whereas the verifiable languages are about making the highest level of assurance easier, even if one must sacrifice some benefits of popular, complex languages.

So, we have SPARK. After much work, C can be subsetted with HOL and AutoCorres. Not really designed for it, though. So, we just have one proper option for imperative code, with another made to work through monumental effort.

Far as functional, we have quite a few options that are prototypes with some production use. Haskell & HOL were used with seL4, plus plenty of others. Tolmach et al. did the Habit systems language based on Haskell, but I think it's on pause. ATS seemed quite practical, but I haven't heard anything about it recently. COGENT is a new one - a verified, functional systems language that compiles to C with, IIRC, certified translation. The MLs have many practical implementations and were designed for theorem proving; CakeML is the latest in that line. PreScheme was verified with VLISP and could be rebooted. There's also the model-driven and DSL approach from Hintjens at iMatix, CertiKOS with the DeepSpec team, and Ivory at Galois.

So, this is where we're at since BitC folded. Anyone know of any work I missed with at least one deployment of low-level, OS-style code with realistic verification capability?

3
pcwalton 1 hour ago 0 replies      
It's interesting looking back on how this compared to the lessons learned by Rust in the lead up to 1.0.

Issues in which Rust observed the same problems and changed course to fixed them:

- Compilation model: Rust originally wanted to do a static compilation model, like BitC, but it eventually went with a hybrid model that leans toward link-time optimization.

- Compiler-Abstracted Representations vs. Optimization: Again, Rust wanted to abstract representations from the compiler, but eventually it just went to a model in which the representations aren't hidden from the compiler.

- Insufficiency of the Type System: Rust also had by-ref parameters only, but as that was insufficient it eventually grew the lifetime system (as well as the subtyping needed to make it usable).

Issues in which Rust persevered anyway:

- Inheritance and Encapsulation: Rust went with interfaces and unified them with the typeclass system.

- Instance Coherence and Operator Overloading: Rust went with a Haskell-like approach to instance coherence. It is sometimes annoying, but it's rarely an insurmountable problem in practice, and Rust is a lot more flexible than most other non-C++, non-D languages here.

30
Steve Blank: 'VCs Won't Admit They're in a Ponzi Scheme' inc.com
174 points by jackgavigan  7 hours ago   64 comments top 20
1
educar 6 hours ago 7 replies      
Yeah, probably the worst thing about the valley culture is that organic growth has lost its charm. There was value in bootstrapping, making revenue, then profits, and then seeking funding as a way of increasing profits by growing.

My understanding is that Silicon Valley VC culture started out because people were building hardware and this required some investment to get started. Software is totally different, and with AWS and the like, building software is so cheap. But no, now it's basically: take money, give out stuff for free, take more money by showing off the free customers, get even more free customers, and take even more money, and so on. In the end, nobody knows the true value of the company - not the founders, the VCs, or the customers. True money is made by either ads, selling customer data, or lock-in by making migrations to another service very expensive.

What irks me is that this scheme overly favours VCs and, to some extent, the founders. The thing is, the stream of founders who are willing to do this is never-ending (same thing with privacy: the number of people willing to give up their privacy consciously is never-ending). Even YC thinks of the whole thing as a game (if you followed the Snapchat thing). I am cynical, but this is really about rich people having fun, and everyone else (customers, employees) is getting suckered. None of these companies are being built to last.

2
cehrnrooth 6 hours ago 4 replies      
What are ways this can be bet on since these are non-public companies?

So far I've identified:

1. Commercial real estate. Could take a short position on any public companies (ex. CBRE) though they're likely too diversified to drive them down to zero.

2. Hiring. Could bet against LinkedIn? Most local recruiters / agencies are privately owned and the public ones are diversified.

3. Ancillary services. Seems like start-ups serving start-ups so there's no publicly available position to take.

4. Tax Revenue. Assuming a contraction, can you bet on local municipalities being short on budget / revenue with a smaller tax base?

Might be a fools errand to short these if the excess capacity can be picked up by all the behemoths (Google, Facebook, Apple).

I wouldn't dare take a short position on SF residential real-estate although outlying areas might see a larger contraction.

3
haliax 6 hours ago 1 reply      
Is this taking into account the terms of these ridiculous valuations? According to http://www.fenwick.com/publications/pages/the-terms-behind-t... these come with "significant downside protection" and other terms that make these deals more akin to debt than equity funding.

If you stop thinking of the valuation as the price of the company, and instead as the price of a financial product that includes shares AND these protection mechanisms (e.g. liquidation preferences), do these valuations still seem irrational?

4
erikb 4 hours ago 0 replies      
More tiresome than the current state of the valley is how people react to it. I've been here since it started. I was reading Venture Hacks and Hacker News before Y Combinator was really a thing - people were actually still discussing what a Y combinator is, the Lisp one. The moment the hype started, everybody above the age of 20 should have known how it would go, because it goes like every other hype. It starts optimistically, with everybody happy that something is actually happening. Then the euphoria spreads. Then people start to do ridiculous things. Then everybody knows it's ridiculous but nobody knows how to get out. Then it starts to die, slowly or in a burst. That's how life is. No surprises here. But despite that, most people seem to act like they've arrived at the biggest realization of their life: the valley is a bubble. Wow.

In business, as in IT, the basic rules stay the same. If it's healthy to go public with five consecutive successful quarters, then aim for that with your business. If you're editing text, you're better off spending time learning vim or emacs than the newest flashy text editor. There's no need to hype either the explosion or the burst of any hype.

Thus, I'd rather see us discussing how to make a business work. And I'm really disappointed that Steve Blank, who was always about core marketing values, is participating in such an attention-seeking article.

5
vonnik 1 hour ago 0 replies      
Startups are just illiquid investments, like real estate, or even like chunks of public-company equity so large that the market can't absorb their sale. Every single investor who ever existed has talked their own book, i.e. promoted the assets in which they have a stake. This is not a Ponzi scheme; it's a market. And unlike a Ponzi scheme, in venture there is an underlying asset: the company that's growing. Growth and profit are two different things. Profit and valuation are two different things. Valuation, or market cap, is based on the expectation of future revenue. With tech companies, those expectations can be very high, and rightly so, even if they are grounded in human psychology and subject to projections, distortions and the like.
6
muench 6 hours ago 1 reply      
Even a broken clock is right twice a day. Steve Blank has been saying this since 2011, when the Economist hosted a debate between Blank and Ben Horowitz. All respect to Steve Blank for his other work, though.

The Economist seems to have broken the link to the content on their site, but below are a video the Economist posted on YouTube and the articles posted on the personal blogs of Blank and Horowitz:
https://www.youtube.com/watch?v=AfX9VLsUWwc
http://www.bhorowitz.com/debating_the_tech_bubble_with_steve...
https://steveblank.com/2011/06/15/the-next-bubble-dont-get-f...
http://a16z.com/2011/06/17/debating-the-tech-bubble-with-ste...
https://steveblank.com/2011/06/17/are-you-you-the-fool-at-th...
https://steveblank.com/2011/06/22/the-internet-might-kill-us...

7
chatmasta 21 minutes ago 0 replies      
Maybe there's a bubble, but I think it's a little more nuanced than the author's "companies are valued at <big number> multiples of their revenue, so there's a bubble" logic. Honestly, this line of thinking is so devoid of reasoning that it's difficult to argue against. It states a fact (valuations are high multiples of revenue) as evidence and a hypothesis (valuations will decrease in the future) as a conclusion, yet it offers no logical path from the evidence to the conclusion.

You don't need to convince me that these companies are valued at a price way higher than their revenue. I can see that with my eyes. But you do have to convince me that means their valuation will decrease.

The investors have arrived at these valuations via a "process." Whether driven by "market forces," quantitative easing, or some other "explanation" du jour, there was a process. A series of events led to current valuations, which investors continue to be willing to invest at.

So if valuations will eventually decrease (that is the meaning of the bubble, yes?), a convincing argument for why they might decrease must include what is going to change. What about the process of VC investments is so fundamentally broken that they will be forced to devalue their own portfolios?

Companies crashing? They have too much cash for that, and the VCs will give them more before any crash happens.

VC funds running out of money? Not gonna happen, at least at the top (the firms investing in the unicorns). There is no shortage of money, and tech is always one of the best places to put it.

Revenue drying up? Hmmmm. This one might happen.

The bubbles I'm interested in are those that go like this:

1) VC money to Snapchat

2) Snapchat money to Google Cloud

3) Google Cloud money to Google, to Google Ventures, to another startup, and back to (1)

...or this one:

1) VC money to Facebook

2) VC money to 1000 startups

3) Startup money to Facebook ads

4) FB stock rises with ad revenue

(And this is before even mentioning the insanity of the mechanics of the advertising market)

... But then, maybe this is just the markets working properly. After all, isn't capitalism the ideal Ponzi scheme?

8
api 7 hours ago 1 reply      
I think it's possible to draw a line between angel/VC with sane valuations and the "unicorn" phenomenon.

The former does not seem to be in a bubble per se, though we have seen a cool down. A "hot" market is not a "bubble" and a cool down is not a "crash." Bubbles are violent and insane and that's not what we've seen in seed/series A.

The latter may well be a bubble.

What I'm hearing about right now is that it's getting a lot harder to raise money in later stages if you don't have great revenue numbers and real revenue growth. Earlier stages have cooled a bit but not catastrophically.

9
_yosefk 5 hours ago 0 replies      
"Nowadays, most of the liquidity is happening for large companies paying for startups and hedge funds buying into the latest round. So when this crash happens, it's mostly going to affect the later-stage investors," Blank says. That's different from what happened when the bubble burst previously, inasmuch as "It [the '99 crash] destroyed a lot of public value, not just private capital. Your grandmother got hurt as well."

Newspapers were complaining about the public being unable to invest in unicorns, with all the benefits going to wealthy private investors. Then people on HN started echoing these sentiments. (The part where stuff from newspapers gets planted into the heads of normal people and becomes their opinion is really scary; I guess it must be happening to me, too. It's unsettling to watch: you read some weird idea in a newspaper and then hear it repeated by people the next month.)

Will newspapers remember to thank wealthy private investors for taking the hit when the bubble pops? (BTW, something that AFAIK doesn't exist but could is a VC fund with publicly traded shares. Of course, the average VC produces below-average returns.)

10
SocksCanClose 2 hours ago 0 replies      
This is an interesting thing to track, especially since it has been just over a year since @sama put out his famous bet (http://blog.samaltman.com/bubble-talk).

I think the author may be conflating (as most do) VC investments on the left side of the power-law curve with those on the right side -- meaning that investments on the left (the ones that produce at least 100x returns) are likely valued correctly, whereas investments on the right (at the tail end of the power-law curve, or even those in the center) are supposedly overvalued.

Even so, the Inc author's insinuations about Uber and Airbnb are not exactly the smartest things I've ever seen written. I fully expect that within 10 years Uber will be running a massive network of electric, self-driving cars and buses (and perhaps even long-haul networks?) in every major and even minor city in the world. The global transportation infrastructure is absolutely worth MUCH more than $50bn.

11
jorblumesea 5 hours ago 0 replies      
The reality is that some VCs invest in smart but risky bets and some just throw money around like it's free. Uber and Tesla are risky bets, but they also have solid thinking behind their businesses and target specific market demands. There are risks in execution and a whole host of other problems, but the idea behind them is fairly solid.

What is not a smart investment is funding some company that is planning on making "the next social media analytics platform". There are what, 200+ companies all doing that?

The issue is mostly one of perception: what happens when investors realize this and want their money back? The house of cards comes crashing down, and it might take out some smaller, legitimately innovative places in the process. Money flows out of the industry, and it is no longer seen as viable. Investors see VC as some kind of get-rich-quick scheme, and it's only a matter of time before reality intercedes. When that happens, it may even take out the biggest players. VC is a great idea but often poorly executed, basically due to greed.

12
eternalban 3 hours ago 0 replies      
Devonshire Research Group had a very interesting analysis of Tesla. Related to the OP, I found their high-level matrix for identifying probable Ponzi schemes interesting.

https://news.ycombinator.com/item?id=11769775

13
mathattack 3 hours ago 1 reply      
When was this article written? I'm a huge fan of Steve Blank, but the attached doesn't seem current.

For entrepreneurs, Blank warns, the future is clear: "Startups are going to find it much, much harder to raise money, and the liquidity pipeline will bounce back to the good old days -- when you actually had to make money to have some liquidity event [go public]."

I don't see many companies going public nowadays, so it isn't as if there's a mad dash of premature IPOs. If anything, it's harder than ever to go public. Due to Sarbanes-Oxley, companies are getting much larger before the exit. Due to activist investors, companies want to have the next 5 or 6 quarters in the bag (in addition to the past 5 or 6) before going public.

14
traviswingo 6 hours ago 0 replies      
He's correct. The economy will only support "fluff" for so long. At some point, no one will be willing to acquire these companies at their price, and the public won't support an IPO at their price either. Revenue and cash flow are vital to long-term success. Increasing your sticker price just by saying your company is worth more doesn't hold up in the real world. Find a problem, solve it far better than anyone else, and create a monopoly with real profits. It's not easy, but that's where the real wealth is created.
15
elevenfist 3 hours ago 0 replies      
It's odd for Blank to keep tooting this horn when his ideas on "The Lean Startup" are part of the problem. You can't walk into a business not knowing how the business is going to work; that's not something TBD. Sure, some "startups" manage to make it through, but if we're talking about Ponzi schemes here...
16
green_lunch 6 hours ago 2 replies      
I think it was worse in the late '90s and early '00s. There is a documentary called 'e-dreams' that came out in 2001. It documents a delivery startup that didn't charge anything to deliver products to customers.

They had virtually no business model (and lost money on every order) and still got millions in startup capital. The whole company imploded within a year, but some of the interviews with the company's executives are pretty telling: they never intended to make a profit. The intent was either to get bought out by a larger company or to IPO and bank the proceeds.

E-dreams is a really good documentary for anyone interested in seeing a small sliver of the early days of the first .com boom/bust.

I watch it once a year to remind me of the craziness.

17
iambvk 6 hours ago 1 reply      
As an outsider, what is the best way to measure or identify the bubble?

Is a "VC firm filing for bankruptcy" the only indicator?

Can someone more knowledgeable share some thoughts on this?

18
blahblah3 5 hours ago 0 replies      
Even companies with little intrinsic value (i.e. no ability to generate future cash flows for shareholders) can trade at high valuations for a long time (see the tulip mania).
19
jgalt212 6 hours ago 2 replies      
Of course it's a Ponzi scheme. How else can you explain behaviors like the same VC investing in the same company's Series A at $10, Series B at $100, Series C at $1000, etc.?
20
Aelinsaar 7 hours ago 1 reply      
I think "Ponzi Scheme" is both too harsh, and too generous.