hacker news with inline top comments - 29 Jul 2015
How Google Translate squeezes deep learning onto a phone googleresearch.blogspot.com
231 points by xwintermutex  4 hours ago   51 comments top 20
1
afsina 2 minutes ago 0 replies      
They did this even more impressively when squeezing their speech recognition engine to mobile devices.

http://static.googleusercontent.com/media/research.google.co...

2
motoboi 2 hours ago 3 replies      
I am 15 years into this computers thing and this blog post made me feel like "those guys are doing black magic".

Neural networks and deep learning are truly awesome technologies.

3
liabru 38 minutes ago 0 replies      
This is great. I particularly like that they also automatically generated dirty versions for their training set, because that's exactly what I ended up doing for my dissertation project (a computer vision system [1] that automatically referees Scrabble boards). I also used dictionary analysis and the classifier's own confusion matrix to boost its accuracy.

If you're also interested in real time OCR like this, I did a write up [2] of the approach that worked well for my project. It only needed to recognize Scrabble fonts, but it could be extended to more fonts by using more training examples.

[1] http://brm.io/kwyjibo/

[2] http://brm.io/real-time-ocr/
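
To make the "dirty versions" idea concrete: a minimal sketch of that kind of degradation pass with Pillow and NumPy (the rotation and noise parameters here are illustrative guesses, not the values I actually used):

    from PIL import Image, ImageFilter
    import numpy as np

    def dirty(img, max_rotation=5.0, noise_sigma=10.0):
        """Make a degraded copy of a clean training image."""
        # Small random tilt, as if the photo wasn't taken square-on.
        img = img.rotate(np.random.uniform(-max_rotation, max_rotation))
        # Slight blur, as if the camera missed focus.
        img = img.filter(ImageFilter.GaussianBlur(radius=1))
        # Additive Gaussian pixel noise, as if the sensor were cheap.
        arr = np.asarray(img, dtype=np.float32)
        arr += np.random.normal(0.0, noise_sigma, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))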

4
Animats 28 minutes ago 0 replies      
Word Lens is impressive. It came from a small startup. Google didn't develop it; it was a product before Google bought it. I saw an early version being shown around TechShop years ago, before Google Glass, even. It was quite fast even then, translating signs and keeping the translation positioned over the sign as the phone was moved in real time. But the initial version was English/Spanish only.
5
eosrei 1 hour ago 0 replies      
I used this in Brazil this last March to read menus. It works extremely well. The mistranslations make it even more fun. Much faster than learning Portuguese!

I took a few screen shots. Aligning the phone, focus, light, shadows on the small menu font was difficult. You must keep steady. Sadly, I ended up hitting the volume control on this best example. Tasty cockroaches! Ha! http://imgur.com/j9iRaY0

6
josu 2 hours ago 0 replies      
WordLens/Google Translate is the most futuristic thing that my phone is able to do. It's especially useful in countries that don't use the Latin alphabet.
7
modfodder 12 minutes ago 0 replies      
Here's a short video about Google Translate that was just released.

https://www.youtube.com/watch?v=0zKU7jDA2nc&index=1&list=PLe...

8
mrigor 57 minutes ago 0 replies      
For those unfamiliar with Google's deep learning, this talk (not technical) covers their recent efforts pretty well: https://youtu.be/kO-Iw9xlxy4
9
murbard2 2 hours ago 0 replies      
I see no mention of it, but I'd be surprised if they didn't use some form of knowledge distilling [1] (which Hinton came up with, so really no excuse), to condense a large neural network into a much smaller one.

[1] http://arxiv.org/abs/1503.02531
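
The core of [1] fits in a few lines. A minimal NumPy sketch (the temperature T and mixing weight alpha are hyperparameters you'd tune, not values from the paper):

    import numpy as np

    def softmax(logits, T=1.0):
        z = logits / T
        e = np.exp(z - z.max())
        return e / e.sum()

    def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.9):
        """Train the small net to match the big net's softened outputs,
        plus a small term for the true hard label."""
        soft = -np.sum(softmax(teacher_logits, T) * np.log(softmax(student_logits, T)))
        hard = -np.log(softmax(student_logits)[label])
        # The T^2 factor keeps the soft-target gradients comparable in scale.
        return alpha * (T ** 2) * soft + (1 - alpha) * hard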

10
cossatot 3 hours ago 4 replies      
International travel now has a new source of entertainment: On-the-spot generation of humorous mistranslations.
11
Uhhrrr 1 hour ago 3 replies      
I don't get it. They say they use a dictionary, and they say it works without an Internet connection. How can both things be true? I'm pretty sure there's not, say, a Quechua dictionary on my phone.
12
joosters 3 hours ago 3 replies      
WordLens was an awesome app and it's good to see that Google is continuing the development.

The new fad for using the 'deep' learning buzzword annoys me though. It seems so meaningless. What makes one kind of neural net 'deep', and are all the other ones suddenly 'shallow'?

13
teraflop 3 hours ago 0 replies      
A possibly relevant research paper that they didn't mention: "Distilling the Knowledge in a Neural Network" http://arxiv.org/abs/1503.02531
14
up_and_up 52 minutes ago 0 replies      
This technology has been around since 2010 and was developed by Word Lens, which was acquired by Google in 2014:

https://en.wikipedia.org/wiki/Word_Lens

15
poslathian 2 hours ago 0 replies      
The article mentions algorithmically generating the training set. See here for some earlier research in this area: http://bheisele.com/heisele_research.html#3D_models
16
dharma1 2 hours ago 1 reply      
Would be great to see a more in depth article about this, and maybe even some open source code?
17
anantzoid 1 hour ago 1 reply      
18
zippzom 3 hours ago 1 reply      
What are the advantages of using a neural network over generating classification trees or using other machine learning methods? I'm not too familiar with how neural nets work, but it seems like they require more creator input than other methods, which could be good or bad I suppose.
19
hellrich 54 minutes ago 0 replies      
20
api 3 hours ago 1 reply      
"Squeezes" is very relative. These phones are equal to or larger than most desktops 10-15 years ago, back when I was doing AI research with evolutionary computing and genetic algorithms. We did some pretty mean stuff on those machines, and now we have them in our pockets.
First Round 10 Year Project firstround.com
106 points by sergeant3  2 hours ago   41 comments top 19
1
csomar 1 minute ago 0 replies      
Solo Founders do Much Worse Than Teams

How come? If the valuation for a Team is only 25% more, then a Solo founder is clearly better off than a Team.

A team is 2+ founders, which means your shares are divided by 2+. Solo is clearly winning here even if the total valuation is less.

2
jamiequint 52 minutes ago 1 reply      
Articles like this really bother me because it's really unclear what data here is significant (if any), since all of the data is just quoted as a % without standard error and the sample sizes are very small:

e.g. on comparing companies that do better. You could have a data set of 150 companies whose exit performance (or current value) looks something like this (numbers in millions):

5000,1000,500,400,200,200,100,50,50,30,30,20,20,20,15,15,15,15,0,0,0,0,0,0,0 (repeated 6x to get 150 data points)

Now compare that against a data set that is those exact numbers divided in half:

2500,500,250,200,100,100,50,25,25,15,15,10,10,10,7.5,7.5,7.5,7.5,0,0,0,0,0,0,0 (repeated 6x to get 150 data points)

If you compare these data sets with a Two Sample T-Test you have to go down to 91% confidence to get a significant result (http://www.evanmiller.org/ab-testing/t-test.html#!307.2/986....)

That may not sound that bad, but now add a super-unicorn to each one of those data sets, a $20B exit. Now the differences aren't even significant at 80% confidence.

e.g. in Item 7 about technical co-founders: "consumer companies with at least one technical co-founder underperform completely non-technical teams by 31%"

Let's say that First Round has 150 consumer businesses and we're just going to look at a binary outcome of something like "valued over $50m". Now let's say that 100 of these consumer companies have technical co-founders and 50 are completely non-technical. Say 40% of the non-technical teams are "successful" by the $50m metric. That means that 30.5% of the technical teams are successful (if they are doing 31% worse by the numbers in the article, since 40%/1.31 = 30.5%). That's not a significant result at 80% confidence (http://www.evanmiller.org/ab-testing/chi-squared.html#!31/10...)

I understand why they published the piece, and I think it will get a lot of reads, but I really wish I could read a version with statistically relevant insights instead.
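
If you want to check the arithmetic, a quick sketch with scipy reproduces the first comparison (p comes out around 0.09, i.e. significant only at roughly 91% confidence):

    from scipy import stats

    base = [5000, 1000, 500, 400, 200, 200, 100, 50, 50, 30, 30,
            20, 20, 20, 15, 15, 15, 15, 0, 0, 0, 0, 0, 0, 0] * 6  # 150 companies
    half = [x / 2.0 for x in base]                                 # same exits, halved

    t, p = stats.ttest_ind(base, half)
    print(t, p)  # p ~ 0.09: not significant at the usual 95% level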

3
paxtonab 1 hour ago 1 reply      
I think the things they highlight are all attributes that define hungry/ambitious people, and/or they correlate to things that would hold people accountable and keep them on track.

For example:

Ivy League School and working at a prestigious company? You don't get either of those by being a slacker.

Younger team, woman co-founder and more than one founder? You better believe there is going to be more pressure to prove yourself and not sell early or give up (vs. being a single founder or an older proven founder).

Standing out from the crowd at demo day or getting noticed out of all the noise of social media? That takes some dedication. I guarantee that the people who did get noticed that way didn't just send one email or one tweet. They were hustling their idea hard.

Great read though. I loved the point that startups don't have to come from SF or NYC to be successful!

4
jonahx 1 hour ago 1 reply      
The methodology matters a lot here: Were a set of preselected questions answered by the data, or was there exploratory analysis of the data which uncovered these results? If the latter, the effects of data fishing[1] would largely invalidate the conclusions.

[1] https://en.wikipedia.org/wiki/Data_dredging

5
cehrnrooth 1 hour ago 2 replies      
Really cool of First Round to share this data. Whenever I see interesting data it makes me ask more questions and these are some of the things I'm wondering about after reading the post.

Female founders outperforming male teams: My hunch would be that the bar for women to get funded (at least historically) has been higher than for men, so the female-led start-ups would be a better calibre of company. Related, since this is based on investment performance: could it be that the female founders received smaller initial investments, so that performing on par with male teams would make the ROI look better?

Halo effect: This to me would indicate that we shouldn't be encouraging fresh college graduates to work at start-ups and instead get experience at a more mature company. I wonder how much tenure they had at their halo company prior to founding the start-up and how it ties with the average age of founding.

Solo founders perform worse: I wonder what happens if you frame this from the point of view of the founder. If the solo founder had a $100 return and the team had a $260 (160% better) return, then assuming equal dilution and equal division between founders, the solo founder gets $100, a two-founder team gets $130 each (30% better), and a three-founder team gets $85 each (15% worse); see the sketch at the end of this comment.

Next big thing from anywhere: Also interesting. I'd like to see how this varies by referral source. Do companies referred by other investors perform better than non-investor referrals (or can other investors pick companies better than social connections can)?
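
The solo-founder framing above, spelled out as a throwaway sketch (using the article's rough numbers):

    solo_return = 100.0
    team_return = 260.0  # ~160% better overall, per the article

    # Assuming equal dilution and an equal split between founders:
    for founders in (2, 3):
        print(founders, team_return / founders)  # 2 -> 130.0, 3 -> ~86.7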

6
staunch 1 hour ago 1 reply      
Lies, damned lies, and statistics. This is just a shoddy report on their own biases, not a scientific analysis of any kind. BuzzFeed.
7
matrix 1 hour ago 1 reply      
TL;DR version: here are their success factors, ranked by magnitude (relative performance, %):

Technical co-founder, enterprise product: +230

Elite school: +220

ex AMZN, AAPL, FB, GOOG, MSFT, TWTR employee: +160

Female founder: +128

Discovered investment via non-traditional VC channel: +58

Technical co-founder, consumer product: +31

Team average age under 25: +30

Solo founder: -163

8
pmikesell 2 hours ago 1 reply      
"We also looked at whether the college a founder attended might impact company performance. Unsurprisingly, teams with at least one founder who went to a top school (unscientifically defined in our study as one of the Ivies plus Stanford, MIT and Caltech) tend to perform the best. Looking at our community, 38% of the companies we've invested in had one founder that went to one of those schools. And, generally speaking, those companies performed about 220% better than other teams!"

I'd love to see an inverted analysis of this effect, i.e. which schools had the best indication of success. Pre-deciding to look at their definition of "top schools" is probably only seeing part of the picture.

9
rdl 1 hour ago 2 replies      
Obviously there are three kinds of selection bias here: companies which raise money; companies which approach First Round; companies selected by First Round. So it's possible these aren't overall trends, but specific to this set.

I think that probably explains the "no tech cofounders do better" bias in Consumer; the bar is probably higher there.

10
blizkreeg 2 hours ago 1 reply      
Could some of these be attributed to selection bias that ends up just confirming the conclusions/lessons learned? Prime examples - the ones re: age, schools, former employers, and repeat founders. Would it be fair to assume that they have a greater bias to fund companies with founders exhibiting these attributes to begin with?
11
alain94040 1 hour ago 1 reply      
Great data overall.

I'm not convinced about the age conclusion: depending on which statistics you focus on, you either conclude that 25 is best or 32 is best:

Founding teams with an average age under 25 (when we invested) perform nearly 30% above average [...] for our top 10 investments the average age was 31.9

12
mathgeek 1 hour ago 2 replies      
I think the headline for #1 is a bit misleading. Teams with a female founder doing better than ones without one doesn't translate to "Female Founders Outperform Their Male Peers." I'd think a better title would be more along the lines of "Diverse Founding Teams Outperform All-male Ones."
13
Duhck 27 minutes ago 1 reply      
This seems to be a lot of confirmation bias and use of data that is likely not significant.

For instance:

"The results were stark: Teams with more than one founder outperformed solo founders by a whopping 163% and solo founders' seed valuations were 25% less than teams with more than one founder."

How many of the 300 investments in their portfolio were solo founders? 10?

Solo founders are rare, and it's often harder to raise money as a solo founder. That means fewer companies have solo founders to begin with.

Source: I am a solo founder

14
fredophile 29 minutes ago 0 replies      
While the data is interesting I don't think it's very useful. For founders, many of these are things you can't change. One of the few actionable points for founders is about having the right kind of cofounders. For VCs this data doesn't tell them what they really want to know. VCs want to invest in outliers. They say right near the beginning of the article that they removed Uber from the data. Actionable data for VCs would have info to help identify potential Ubers or AirBnBs.
15
msandford 39 minutes ago 0 replies      
Given that none of them were +100000% it's all basically noise, right? If you can successfully identify 100x or 1000x companies some percentage of the time you're a good VC. If you can't you either get lucky or go out of business.

Everything else is secondary to finding the home runs. Even if you multiply all those advantages together you get:

1.63 * 1.3 * 2.2 * 1.6 * 1.5 * 1.63 * 2.3 * 1.58 = 66

So if you manage to somehow get every single one of those attributes at max in a company you'll get roughly 66x the valuation or performance or whatever versus a company/team with none of them.

Of course if you go for all those things you'll probably only get one deal per year.

This isn't terribly meaningful.

16
nostrademons 57 minutes ago 0 replies      
I wish I could see these factors ranked against other metrics, e.g. # users, revenues, profitability, exit size, % exited. Their key metric is valuation, which makes sense when judging your performance as an investment firm but is relatively useless to a founder. And their top 3 factors (has cofounders; brand name school; brand name employer) are all things that investors value highly, probably more highly than customers do. I'd love to see whether the magnitude of the effect of each of these remains true when ranking companies by more founder-focused or customer-focused metrics.
17
zekevermillion 1 hour ago 0 replies      
I wonder if the outperformance of organic picks (non-referred companies) is a selection effect. That is, VC prefers to invest with a referral, but will go in on a non-referred investment only for the most promising opportunities.
18
a3voices 1 hour ago 0 replies      
I wish I could see the statistics on how many successful founders have successful parents.
19
cjrd 1 hour ago 1 reply      
I would like to see the sample sizes, e.g. how many only-male teams are there?
Nóirín Plunkett has died apache.org
25 points by daenney  43 minutes ago   discuss
Blockspring: Do anything in a spreadsheet a16z.com
56 points by robzyb  2 hours ago   17 comments top 7
1
sanowski 19 minutes ago 0 replies      
https://www.blockspring.com/about/privacy

User Content. We collect your personal information contained in any User Content you create, share, store, or submit to or through the Service, which may include photo or other image files, written comments, information, data, text, scripts, graphics, code, and other interactive features generated, provided, or otherwise made accessible to you and other Users by Blockspring via the Service in accordance with your personal settings.

2
minimaxir 1 hour ago 0 replies      
The "Suddenly any financial analyst in your company, given the right permissions" comment is telling. The companies that would be most likely to use a service like Blockspring would be the types of companies that would never be willing to give and trust data access and processing to a random third party. The Privacy policy does not help alleviate these concerns (tl;dr: we keep everything). It is also unclear if the business offering (https://www.blockspring.com/business ) offers self-hosting.

Case in point, one of the example scripts in the blog post (https://open.blockspring.com/pkpp1233/get-amazon-new-price-b... ) requires you to input a Amazon Product Access Key and a Amazon Product Secret Key as parameters.

3
Dwolb 39 minutes ago 1 reply      
I think this concept has far-reaching effects on optimizing white-collar operations, if you're able to create social features based on the data.

I'd want to know: who else in the company is using this data? Who has used it in the past? Have they done work that is similar to, or even a duplication of, the work I'm doing?

These information management issues are currently hidden, but result in lost productivity. Just this past month my friend at Google found out another person had already done his analysis and he could learn from the previous work. Just knowing someone had previously used the same dataset could have saved him 7 weeks of work.

4
state 1 hour ago 0 replies      
This is such an interesting angle on making technical tools more accessible. I can't say I have a use for including functions in spreadsheets but when I imagine where this could go it's exciting.
5
bliti 38 minutes ago 0 replies      
How much does this cost to use? I was not able to find a pricing page on their website.
6
serve_yay 1 hour ago 0 replies      
You learn to code that way or you learn to code this way, either way you're gonna pay the piper.
7
normloman 1 hour ago 5 replies      
Show HN: Walk around unusual geometries parametricity.com
87 points by ihm  3 hours ago   20 comments top 8
1
jameshart 10 minutes ago 0 replies      
Suggestion: allow me to move in discrete steps of a specified length, and rotate in discrete steps too. Then I can see what happens in different geometries when I go forward n, left 90 degrees, forward n, etc.

In fact, how about implementing a Logo turtle graphics system :)?
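
For the flat case, Python's built-in turtle module already gives you exactly this interface; a throwaway sketch (the interesting part would be re-implementing forward/left on the curved geometries):

    import turtle

    t = turtle.Turtle()
    step = 100                 # discrete step length
    for _ in range(4):         # forward n, left 90 degrees, four times...
        t.forward(step)
        t.left(90)
    turtle.done()              # ...traces a closed square on the flat plane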

2
amelius 1 hour ago 0 replies      
Feature request: always show the line that you will walk when you keep going forward (e.g. in a light color).
3
ThrustVectoring 2 hours ago 0 replies      
Feature suggestion: use the arrow's frame of reference instead of a global one.
4
Gladdyu 2 hours ago 1 reply      
Looks really cool! One question though: how did you implement the movement? When you press up and left at the same time in the flat plane, you'd expect to trace the same circle over and over again, but instead it deviates in some apparently non-deterministic way.
5
dbbolton 44 minutes ago 1 reply      
> Use the up arrow key to go forward along the geodesic in the direction you're facing and the left and right arrow keys to change direction.

> Scroll to zoom, click and drag to pan.

How do you scroll to zoom? Down arrow and Page Up/Down do nothing, and there's no scroll bar.

6
jonahx 3 hours ago 3 replies      
Nice work. I'm curious if there are any similar projects which use Oculus/VR to put you in the midst of these strange worlds. I wonder if it could be used to improve your intuition about things like hyperbolic space.
7
cousin_it 2 hours ago 3 replies      
Great work! Can you add the Klein model?

It's kind of interesting that moving while turning at a constant rate in the hyperbolic plane makes you gradually "drift". Is that actually true, or is it an artifact of the software?

8
drewolbrich 2 hours ago 0 replies      
Thank you for creating this. It is beautifully implemented.
Newegg vs. Patent Trolls: When We Win, You Win newegg.com
595 points by nkurz  14 hours ago   114 comments top 19
1
x0054 1 hour ago 2 replies      
Perhaps a solution to this would be a Kickstarter like anti patent troll site. Patent trolls usually go after smaller businesses at first, businesses who could not possibly afford to fight, even if they wanted. The site would allow a business to post their legal case online, and other businesses who face similar exposure, could contribute to the defense fund.

The "perk" of contributing would be that you would get access to all of the expert witness prepared statements and legal work, so if a patent troll comes after you next, you would have a lot of your defense work already done for you. Plus, once the patent troll looses a case, especially on appeal, that decision can be used as precedent.

2
mcherm 8 hours ago 3 replies      
I really appreciate Newegg's approach here -- one of the main reasons that patent trolling is so successful is that the cost of settling is smaller even than the cost of winning a suit. Newegg is performing an (expensive) community service. How should I be supporting them (other than making them my "first place to check" for electronics shopping)?
3
andresmanz 9 hours ago 0 replies      
> An example of this is Sovereign who bought the rights to a shopping cart.

If anyone wants to google them and can't find anything (like me), that's because the name is Soverain, not Sovereign.

4
devy 3 hours ago 1 reply      
Erich Spangenberg is America's most notorious patent troll mafia head. I blame him for taking full advantage of the broken U.S. patent system to squeeze upward of $30 billion each year, and for the tremendous waste of our legal system's resources.

EFF[1] and NYT[2] ran full reports on him previously.

[1] https://www.eff.org/deeplinks/2013/07/times-profiles-patent-...

[2] http://www.nytimes.com/2013/07/14/business/has-patent-will-s...

5
grellas 2 hours ago 0 replies      
What Newegg does is highly commendable.

To achieve a decisive victory in these cases, Newegg typically has to take the defense of its case through a full trial and possibly an appeal.

People often fail to appreciate just how risky a trial can be. We stand on the sidelines and laugh at how absurd this or that flaky patent appears. And yet - and yet - the law itself went through a phase in which such patents were almost routinely granted. Standards may have tightened over time but, still, a patent claim in a hotly litigated case will not survive to trial unless it has been able to withstand a host of pretrial challenges by which a defendant has already asked a court to rule that the patent, as a matter of law, should not stand. It is only when a court tosses the patent claim in the pretrial phases that a defendant avoids the risk of a potentially absurdly high verdict after trial. If the claim survives such challenges, then the defendant has no choice but to settle or to play it out through trial while running a very real risk of having a large verdict entered against it. This is the point at which most defendants - even large, deep-pocket defendants who can otherwise afford to pay the costs of defense - will fold. Newegg, on the other hand, has made the tough decisions, incurred the major risks, and largely managed to defeat such patents on the merits.

In doing so, it incurs the very large costs of defense typical in such cases. And it has the guts to take the potential liability risks of going through full trials to take the cases to verdict.

Large, institutional defendants have occasionally (though rarely) adopted such policies in the past. For example, over decades, GM adopted a policy of never settling injury claims if its own experts had determined that the GM autos were not at fault. In doing this, it would often incur defense costs that far exceeded the value of the claim being defended. But it did so to send a firm message to the plaintiff's bar that prosecuted such claims - that is, "if you want to sue GM, your case had better have merit - you will get no nuisance settlement from us."

Newegg effectively is delivering the same message but with an important twist. If GM successfully defended a particular injury claim, that ended the case for that claimant but had no preclusive effect on other, similar claims. If Newegg successfully defends and defeats a patent claim by having the patent declared invalid, the law of what the lawyers call "res judicata" (meaning, "a matter adjudged") kicks in and kills that patent off forever.

So, not only does Newegg take out the garbage, it makes sure it won't accumulate ever again.

This is a true public service for which we all must tip our hats.

6
jvdh 7 hours ago 0 replies      
Reading that the patents were about SSL and RC4, I had an evil thought. A patent troll with these patents would actually help make the Internet a safer place.

Companies that are still using those should be sued for not securing their consumers' information properly. Failing that, this would be an even better way of achieving the same thing.

7
davesque 2 hours ago 0 replies      
It's totally excellent that Newegg is doing this. As a side note, I don't like when people include silly infographics in articles like this. I think it sort of cheapens the message and this particular message is important.
8
emirozer 10 hours ago 4 replies      
Please excuse my ignorance on this topic.

- Why is this happening in the first place?

- Who is this entity that grants a loose patent?

- Why isn't this entity being interrogated?

9
yenda 9 hours ago 3 replies      
No patent for ideas in Europe, problem solved.
10
codezero 1 hour ago 0 replies      
In winning these cases has Newegg shared any learnings that might help smaller companies affected by patent trolls, or is it almost as simple as ponying up the cash and going to trial?
11
kazinator 5 hours ago 3 replies      
You don't really win until the legal atmosphere is such that the key personalities behind a patent troll operation get 10 years in jail.
12
FrankenPC 1 hour ago 0 replies      
This brings up a good point. I need to buy from the vendor that's doing some good with my money. Sometimes I hesitate to buy from Newegg because I can always find the parts cheaper (hey, rent's a killer on the West coast right now!). I think I'll buy PC parts solely from them for as long as they fight the patent trolls.
13
samch 3 hours ago 0 replies      
Seriously excellent work by Newegg. I'm worried, however, that they've now made themselves well-known enough in the industry to avoid future targeting by patent trolls. Honestly, who would go after Newegg at this point with their current track record? In the long run, this probably helps Newegg a lot with good PR and fewer trolls attacking them. With the exception of the specific cases they've already won, I just don't see this really helping the little guys over the long run. Patent trolls are going to remain a huge headache until we get serious reform.
14
ild 13 hours ago 2 replies      
Another well-known Troll fighter is Vizio.
15
IanDrake 4 hours ago 1 reply      
Why would any of these trolls take on Newegg?
16
aet 4 hours ago 0 replies      
Worthless article that doesn't go into any details
17
transfire 4 hours ago 0 replies      
18
thinkcomp 5 hours ago 0 replies      
For those who prefer specifics of Newegg's litigation history rather than a summary:

http://www.plainsite.org/flashlight/newegg-inc/

19
jchomali 8 hours ago 0 replies      
Transactions in Redis dr-josiah.com
26 points by DrJosiah  1 hour ago   2 comments top
1
jessaustin 38 minutes ago 1 reply      
> 4. You attach a debugger and "fix" your script to get it to do #1 or #2

I suspect this is actually referring to #1 and #2 in the next list, not the list in which this item appears. It is confusing.

Eating Our Own Dog Food: Behind the Scenes at Cloud9 c9.io
44 points by lennartcl  2 hours ago   17 comments top 7
1
callumjones 22 minutes ago 0 replies      
Great article, it's good to hear about IDE companies dogfooding and using that to improve their product.

I don't want to call you out but in https://c9.io/blog/content/images/2015/07/recursion.png you're really only running C9 in C9 - the rest have the same URL.

2
aceperry 56 minutes ago 0 replies      
I've noticed a huge improvement over time and am really impressed with how well it works now. I forget when I first used c9, but it was more of a novelty than a productive tool. I'm also really enjoying the way the terminal works, feels integrated.

I've never tried the nested stuff, or would even think of it, but it sounds pretty cool. Hope to see more improvements and features in the future.

3
sebpark 1 hour ago 1 reply      
I've been watching too much Dota. I just assumed this was about Cloud9.gg, the eSports team, haha
4
echohtp 37 minutes ago 1 reply      
I've moved my entire development environment to Cloud9 on a Chromebook and I couldn't be happier with the performance and lack of overhead needed on my part. Thanks!
5
ohitsdom 2 hours ago 2 replies      
Why is that background image 4000 x 3000?!

https://c9.io/blog/content/images/2015/07/Chatuchak_Weekend_...

6
hasch 44 minutes ago 2 replies      
400,000 lines of original source code sounds really impressive! Are you sure about that? Does sloccount say that, or some `wc -l`?
7
JorgeGT 2 hours ago 1 reply      
Your IDE looks darker than the default flat white theme? I like the flat theme a lot but can't stand the brightness, I hope you find some time to make a dark version of the flat theme! That said, yours is by far the best "web IDE" I've tried so far, congrats! =)
Scheme in a Grid (2000) siag.nu
20 points by brudgers  1 hour ago   2 comments top
1
zem 30 minutes ago 1 reply      
The source code link seems to be down :( I remember being pretty impressed by how clean and readable the C code was.
The Factory of Ideas: Working at Bell Labs [video] youtube.com
29 points by hwstar  4 hours ago   3 comments top 2
1
iamjs 19 minutes ago 0 replies      
The similarly named book "The Idea Factory" by Jon Gertner is a great read on the history of Bell Labs if you're interested in learning more.
2
hwstar 4 hours ago 1 reply      
Open allocation at its best...

We need more of this to counter the MBA/Business closed allocation model.

Show HN: File.io Ephemeral file sharing file.io
67 points by ca98am79  3 hours ago   45 comments top 19
1
_nvs 56 minutes ago 2 replies      
I personally prefer file.pizza, especially considering it is an open source WebRTC implementation that doesn't persist the data via any middleman (https://github.com/kern/filepizza)
2
ca98am79 2 hours ago 5 replies      
I built this site and appreciate any feedback from the HN community
3
flipcoder 5 minutes ago 0 replies      
Add a privacy policy
4
ErikRogneby 23 minutes ago 0 replies      
"Also, no illegal files are allowed."

Is this a "(our lawyers made us put this in)" sentence?

It's not like there is an .ilgl file type, and with one-time downloads DMCA takedowns are unlikely.

5
calebm 1 hour ago 0 replies      
Very nice. Also very similar to https://transfer.sh/
6
russellbeattie 2 hours ago 0 replies      
1. Nice implementation of a potentially useful micro service.

2. Nice domain name.

3. You should put more details in your FAQ like "no, this is not guaranteed to be a perfect technical solution" and "we'll happily work with law enforcement if you're a pedophile".

4. I always look down on services that don't have an immediate and obvious way of making money, as it'll likely be gone tomorrow.

5. MVPs are all well and good, but a few more simple features wouldn't hurt: time-based expiration, multiple downloads allowed and passwords, or whatever else seems simple and useful.
7
bascule 2 hours ago 1 reply      
Data remanence is a really hard problem. Are you sure this lives up to your claims that "the file is completely deleted without a trace"? How are you storing them? Do they ever hit e.g. an SSD in plaintext?
8
alfg 2 hours ago 0 replies      
Something similar I made a while back, for those interested in hosting their own file-upload service via S3. You can configure S3's object expiration to delete/expire files after a set number of days.

I still use it today for sending files here and there. :)

https://github.com/alfg/dropdot - Source with demo.
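
The expiration part is just a lifecycle rule on the bucket, e.g. via boto3 (a sketch; the bucket name is hypothetical):

    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_lifecycle_configuration(
        Bucket='my-dropdot-uploads',    # hypothetical bucket name
        LifecycleConfiguration={'Rules': [{
            'ID': 'expire-uploads',
            'Prefix': '',               # apply to every object in the bucket
            'Status': 'Enabled',
            'Expiration': {'Days': 1},  # S3 deletes objects a day after upload
        }]})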

9
Paul-ish 2 hours ago 1 reply      
Although perhaps more constraining, why not use a website that uses WebRTC data channels to transfer the files? Then you can be more sure the data isn't persisting in a datacenter somewhere. Plus, it is more plausible that the service can remain free and private.
10
forgotmypassw 2 hours ago 0 replies      
You should probably filter out .exe files, otherwise Chrome and other browsers might block you.
11
tomashertus 1 hour ago 0 replies      
12
dubcanada 2 hours ago 1 reply      
Do you store the IP of the uploader and downloader? If you don't, you're going to want to.
13
stfnhh 2 hours ago 1 reply      
14
chbrown 2 hours ago 3 replies      
tl;dr from the FAQ:

Q: "Why should I trust you?"A: "Because you should! We're good people! Honest!"

I'd love to trust a service like this, but there's no credible effort to actually establish that trust.

15
rdegges 2 hours ago 0 replies      
Love this service -- beautiful site, simple docs, simple API, great concept.
16
schuyler2d 2 hours ago 0 replies      
Good luck hosting this. Are you blocking any filetypes?
17
cmdrfred 2 hours ago 0 replies      
Very nice, no sign up, no nonsense.
18
yellowapple 1 hour ago 0 replies      
19
theunixbeard 2 hours ago 1 reply      
Jamestown excavation unearths four bodies and a mystery in a small box washingtonpost.com
19 points by benbreen  1 hour ago   5 comments top 3
1
whoopdedo 23 minutes ago 0 replies      
I don't know if it's the same graves, but the book The Cradle of the Republic[1] from 1906 mentions an excavation that found

> several graves and tombstones, as well as mortuary tablets, were discovered in the old foundations. In the chancel, lying with its head to the north, was an iron tablet, probably formerly a cenotaph, once embossed with inlaid brasses, now missing.

[1] https://books.google.com/books?id=4R4SAAAAYAAJ&pg=PA126#v=on...

2
Gys 1 hour ago 1 reply      
From the article:

'Studies and scans showed that the box was made of non-English silver, and originated in continental Europe many decades before it reached Jamestown.

Horn said he believed it was a sacred, public reliquary, as opposed to a private item, because it contained so many pieces of bone.'

'There are no plans to open it.'

3
jamesdharper3 10 minutes ago 0 replies      
Awesome read, thanks for sharing.
SSL tools we wish we'd known about earlier certsimple.com
125 points by nailer  6 hours ago   33 comments top 17
1
dreeves 27 minutes ago 0 replies      
Am I right to be excited about http://letsencrypt.org being about to make all of this much more sane? General availability on 2015 Sept 14.
2
rogeryu 4 hours ago 1 reply      
My favorite tool is the Calomel Firefox addon. https://addons.mozilla.org/nl/firefox/addon/calomel-ssl-vali...

The article links to https://badssl.com/, which shows a list of links to good and bad configurations. Calomel gives more details about what is right and wrong, and sometimes surprises with its rating.

3
Erwin 1 hour ago 0 replies      
Sometimes I use this one to validate certs: https://www.sslshopper.com/ssl-checker.html

The Qualys SSLlabs scan does not accept an IP address. I'm often in the situation where the cert is installed and ready, but the name is not yet pointing to the new IP address. The above URL can verify that you haven't left out the intermediate cert.
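
You can also do that pre-cutover check from Python's standard library (a sketch; the IP and hostname are placeholders). Chain verification still runs even with hostname checking off, so a missing intermediate makes the handshake raise an SSLError:

    import socket
    import ssl

    ip, hostname = "203.0.113.10", "www.example.com"  # placeholders

    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # the name doesn't point at this IP yet

    # verify_mode is still CERT_REQUIRED, so this fails if the server
    # doesn't send the full chain (e.g. a missing intermediate cert).
    with socket.create_connection((ip, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print("handshake OK:", tls.cipher())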

4
nnx 4 hours ago 2 replies      
https://sslmate.com - being able to get fresh certificates in less than a minute right from the command-line is amazing.
5
Twirrim 5 hours ago 1 reply      
Also useful. sslscan (http://sourceforge.net/projects/sslscan/). Point it to an endpoint and it will tell you all the ciphers and protocols that are accepted, and what the various defaults are, and details about the certificate bound to it. It's available in the Debian/Ubuntu repository for easy installation.
6
jms703 3 hours ago 0 replies      
If you're not using Mozilla's SSL config generator, you should check it out. The Mozilla OpSec team did a nice job on this. I love when teams give back to the community. https://mozilla.github.io/server-side-tls/ssl-config-generat...
7
rakoo 5 hours ago 1 reply      
A firefox plugin that gives you more details about the ssl/tls connection of the site you're connected to: https://addons.mozilla.org/fr/firefox/addon/ssleuth/?src=sea...

It also gives a summary grade. Very few sites are 10/10 (I only remember github having this grade)

8
bifurcation 5 hours ago 1 reply      
Other things I use all the time:

`openssl x509 -in $FILE -text | less`

https://lapo.it/asn1js

https://golang.org/pkg/crypto/x509/

https://github.com/agl/certificatetransparency

9
noinsight 4 hours ago 0 replies      
There's also sslyze for comprehensive and fast scans, it can test just about any TLS service.

https://github.com/nabla-c0d3/sslyze

10
j_s 2 hours ago 0 replies      
No one has mentioned the https://labs.portcullis.co.uk/tools/ssl-cipher-suite-enum/ perl script yet.

The tool performs a similar function to sslscan, THCSSLCheck and sslyze, but differs by crafting part of the SSL handshake instead of using an SSL library to establish a full connection. [...] Libraries either become outdated and therefore incapable of testing for new protocols such as TLSv1.2 or exotic cipher suites; or they are updated and lose support for older protocols namely SSLv2.

Support for SSL testing over SMTP (STARTTLS), RDP and FTP (AUTH SSL)

11
brightball 5 hours ago 1 reply      
The whois query works with his Microsoft example but I get a malformed request error when trying it with some of the newer domain extensions like .ninja
12
jms703 3 hours ago 0 replies      
I find Julien Vehent's CipherScan to be very useful: https://github.com/jvehent/cipherscan
13
jamespo 5 hours ago 0 replies      
I find https://testssl.sh/ particularly useful
14
tomputer 4 hours ago 0 replies      
Another useful site:

https://ssldecoder.org/

Source for self-hosting:

https://github.com/RaymiiOrg/ssl-decoder

15
voidz 2 hours ago 0 replies      
http://sourceforge.net/projects/xca/ is a nice GUI for X.509 certificate and CRL maintenance, creation, etc.

(edit: inb4 kneejerk about sourceforge)

16
laveur 3 hours ago 1 reply      
The Native OS X Wireshark is great! I always hated the one that required X11 as it rarely ever worked right :(
17
josquin 3 hours ago 0 replies      
Nmap has some very useful SSL scripts, such as ssl-enum-ciphers, ssl-heartbleed, ssl-poodle, ssl-ccs-injection and this one for testing Diffie-Hellman configurations: https://github.com/eSentire/nmap-esentire
Focusing on Developer Happiness with Humane Development fogcreek.com
30 points by GarethX  6 hours ago   10 comments top 2
1
aantix 31 minutes ago 1 reply      
First off, I saw Ernie present at this year's RailsConf and it was absolutely fantastic. Definitely go watch his talk on this very subject.

Secondly, over the course of my 16 years of software development, I've found that when I get treated as less than human (e.g. a boss barking orders, a denied vacation, etc.), most of the time these issues are rarely confronted directly. These incidents create mental roadblocks in my head; all of a sudden I'm less productive/creative, lethargic, and generally less excited about my work.

When a boss cracks the whip for employees to "work harder, faster!", it results in these subtle, mental withdrawals that are hard to pin down but are definitely costing the company money. Internal resentment results in a subconscious "fuck you" that most bosses may not even realize is occurring.

2
a3voices 43 minutes ago 3 replies      
I realize this is controversial and might not make business sense, but one humane policy would be to give your senior developers tenure and then never fire them for any reason. It would greatly reduce their stress levels. Also it would reduce the chance they'd seek another job.
An annotation of the Rust standard library github.com
48 points by foogered  4 hours ago   3 comments top
1
mdup 3 hours ago 1 reply      
If you're wondering where the content actually is, take a look at the commit, where comments are written alongside the code:

https://github.com/brson/annotated-std-rs/commit/e50c2b16455...

Hayabusa-2 probe uses 64-bit MIPS CPU to explore the origins of the solar system imgtec.com
16 points by alexvoica  6 hours ago   discuss
ES6 Tail Call Optimization Explained benignbemine.github.io
14 points by rkho  1 hour ago   2 comments top 2
1
taylodl 1 hour ago 0 replies      
Not one browser implements TCO, and Babel only implements direct recursion, not mutual recursion. Sigh. Trampolines are your only option aside from Babel. I've written about trampolines here: https://taylodl.wordpress.com/2013/06/07/functional-javascri...
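
The trampoline idea isn't JS-specific; here's the same trick in Python (which also lacks TCO), as a minimal sketch:

    def trampoline(f, *args):
        """Call f; while it returns another callable (a thunk), keep calling."""
        result = f(*args)
        while callable(result):
            result = result()
        return result

    def countdown(n):
        if n == 0:
            return "done"
        return lambda: countdown(n - 1)  # return a thunk instead of recursing

    print(trampoline(countdown, 100000))  # "done", with no RecursionError
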
2
balls187 16 minutes ago 0 replies      
Tangent: while very simple to memorize the solution to, I find that Fibonacci is a very fun problem to discuss with CS candidates.

This example of TCO adds to that.

Shoring up Tor mit.edu
46 points by user_235711  7 hours ago   10 comments top 6
1
noondip 2 hours ago 2 replies      
> The researchers attack requires that the adversarys computer serve as the guard on a Tor circuit.

This is a great reason to run your own Tor relay, even if it's just a private bridge for you and your friends. You can even use a pluggable transport of your choosing -- I picked obfs4 to make otherwise identifiable Tor connections look like random noise to my service provider.

2
nickpsecurity 58 minutes ago 0 replies      
"Researchers mount successful attacks against popular anonymity network and show how to prevent them."

This could've been the title of many, many articles about Tor, and you'll see plenty more. It has seen so many past attacks, due to the difficulty of its goal, that it should only be used as one step in a series of anonymity-enhancing methods.

3
w8rbt 1 hour ago 1 reply      
Anonymity is considered a big part of freedom of speech now... That's a great way to express the importance of Tor.
5
rc4algorithm 1 hour ago 0 replies      
Journalists need to start making a stronger distinction between Tor end-use and hidden services. End-use is solid - hidden services are experimental.

That said, the fact that this research can identify what hidden service a user accesses skirts that line. On the other hand, that doesn't sound too different from a typical timing attack (something that Tor doesn't try to prevent).

6
ikeboy 2 hours ago 0 replies      
So, the claim is that any guard can identify the sites that Tor users who use it connect to, 88% of the time? That's very bad, if true, because the guard also knows the user's IP.

Edit: glancing through the paper, it may also require monitoring the webpage, in which case it's less severe. I'll wait until there's a good writeup by the Tor Project itself.

From the paper

> Indeed, we show that we can correctly determine which of the 50 monitored pages the client is visiting with an 88% true positive rate and a false positive rate as low as 2.9%

Foxpass (YC S15) helps companies manage employee access to internal systems venturebeat.com
13 points by aren  1 hour ago   3 comments top
1
aren 52 minutes ago 1 reply      
Foxpass started here as a "Show HN" back in February (https://news.ycombinator.com/item?id=9039027) and now we're part of the current batch.

Would love your feedback and of course I'm happy to answer any questions!

MH370: wreckage found on Reunion 'matches Malaysia Airlines flight' telegraph.co.uk
24 points by curtis  1 hour ago   1 comment top
1
bobowzki 5 minutes ago 0 replies      
Well this could get interesting...
Show HN: Farm Feedery menuless lunch delivery with farm-fresh ingredients farmfeedery.com
11 points by jmzbond  5 hours ago   1 comment top
1
rnernento 12 minutes ago 0 replies      
This seems cool, I could see making the necessary savings to enable the $10 price point by cooking in large batches. Isn't the delivery a killer though?
Windows 10's terrible installer movingfulcrum.com
4 points by pdeva1  13 minutes ago   discuss
The Itanium processor, part 3: The Windows calling convention msdn.com
42 points by Mister_Snuggles  4 hours ago   10 comments top 2
1
com2kid 2 hours ago 2 replies      
Some very cool things, some very complicated things.

I feel that if a subset of this processor had been all that was introduced, it could have been successful.

The majority of the penalty for making a function call being negated? Wonderful! Heck, it sounds like (although I skimmed the latter half of the article and didn't fully grok the first half on my single read-through) the stack doesn't even need to be touched for some chains of function calls.

But there is a lot of work for the compiler here, wow. Knowing the maximum number of registers that is needed for any function call made within a function? Ouch.

Support for multiple return values is cool though. That'd be incredibly nice.

And again, rotating the registers to avoid hitting the stack, incredibly powerful.

Having that many globals accessible, also really powerful. All of a sudden the penalty for accessing your "God" object just went down by a fair bit.

2
lorenzhs 2 hours ago 0 replies      
Three quick tips from two years with Celery launchkit.io
29 points by taylorhughes  1 hour ago   9 comments top 4
1
rizwan 1 hour ago 0 replies      
Forgive me for asking, but how is "no default timeout" a sensible default for any technology?
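
You have to opt in to limits (and retries) explicitly; a sketch of what that looks like in Celery, with arbitrary numbers:

    from celery import Celery
    from celery.exceptions import SoftTimeLimitExceeded

    app = Celery('tasks', broker='redis://localhost:6379/0')

    @app.task(bind=True, soft_time_limit=30, time_limit=60,
              max_retries=3, default_retry_delay=10)
    def fetch(self, url):
        try:
            pass  # the actual work goes here
        except SoftTimeLimitExceeded:
            raise                      # interrupted at 30s; hard kill at 60s
        except Exception as exc:
            raise self.retry(exc=exc)  # retry up to 3 times, 10s apart
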
2
simonpantzare 36 minutes ago 0 replies      
-Ofair disables task prefetching which will affect throughput negatively if most of your tasks run for a short time (http://celery.readthedocs.org/en/latest/userguide/optimizing...).
3
humbertomn 40 minutes ago 0 replies      
For the retry part, I prefer to use a database table + cron task to do it... storing failed attempts and making X new attempts at predefined dates and times, rather than keeping the task permanently on a Celery queue.
4
OrangeTux 43 minutes ago 1 reply      
And do not use Redis as a broker under high load. It will randomly crash or refuse to hand work to workers while the queue fills up with tasks.
Using Algorithms to Determine Character bits.blogs.nytimes.com
25 points by denzil_correa  7 hours ago   9 comments top 6
1
tristor 2 minutes ago 0 replies      
I think their algorithm probably works well for people who fall into a typical social mold, but for those of us who are atypical it falls apart. I'm an almost-didn't-graduate-high-school college dropout who never studied and has changed phone numbers five times. According to the minimal information I have from the article, they wouldn't consider me loan-worthy.

The flip side of this is that in the StrengthsFinder personality assessment, it lists "Responsibility" as my top strength. I'm a man of my word who has never reneged on a promise or failed to pay back a debt, even if I have had to struggle or sacrifice to do so, and consequently I have near perfect credit and am financially relatively well off compared to the average situation for someone in my age bracket in the US.

While it's useful and good to seek to classify things into quantifiable buckets of data, it's also important not to lose sight of the fact that people are not easily quantifiable, and that any attempt to segment people into classifications will inevitably treat someone unfairly or misclassify them because they somehow differ from the typical set.

2
discardorama 11 minutes ago 0 replies      
FTA: One signal is whether someone has ever given up a prepaid wireless phone number. Where housing is often uncertain, those numbers are a more reliable way to find you than addresses; giving one up may indicate you are willing (or have been forced) to disappear from family or potential employers. That is a bad sign.

This is such bullshit. I had a prepaid phone for a few months, because my previous one was broken and the new iPhone was going to be out in a few months. So I moved to Verizon prepaid: $40/mo, 2GB data, no taxes/fees/nickels/dimes. It worked great. Best part is: I paid VZ $150 for a used iPhone 4s and traded it in for $200 credit toward a new iPhone. When the new iPhone came out, I switched to it and a postpaid plan. And I have stellar credit.

The point is: just giving up a prepaid phone by itself means nothing. GIGO.

3
baseballmerpeak 4 minutes ago 0 replies      
> Every time we find a signal, we have to ask ourselves, "Would we feel comfortable telling someone this was why they were rejected?" he said.

Looking at other factors + feelings = trouble

These folks have already been rejected from traditional financing. There is a fine line between those who barely didn't qualify (but should) and really didn't qualify (and shouldn't). Where do you draw that line?

4
nickpsecurity 1 hour ago 1 reply      
I think their methods can only be as good as the honesty of the input. Depending on how they validate, this company might be an easier target for people who will just cook the books on their scores. I'm hoping that they thought of that ahead of time.

As far as character goes, there's an old legend where JP Morgan was asked by Congress on what basis he lends out money. His reply was: the person's character, not ability to repay. His reasoning in the story was that people of poor character, but having the money, would make up any excuse to avoid paying, whereas people of good character would do everything they could to get the money and pay up. Realistically, ability to pay is a huge consideration, but the story's lesson about character was wise. Interesting to see it in action and automated to a degree.

5
akshat_h 1 hour ago 0 replies      
What are the ethics of this? A credit score penalises you too, but it is at least based on financial history. There might be a slippery slope here. I can't imagine what would happen if Google were to use something like search history for, say, insurance or mortgages.
6
zippzom 1 hour ago 1 reply      
Are they using machine learning to determine this or simply a hard coded algorithm? I would imagine a combination of both, but I'm very curious how they generated enough data to train their model.
Named Entity Recognition: Examining the Stanford NER Tagger urx.com
15 points by jmilinovich  2 hours ago   2 comments top
1
nxb 24 minutes ago 1 reply      
Next, try the taggers in a more realistic setting than the standard corpora -- e.g. a product review that compares several products -- and you'll instantly see how incredibly poor the current state of the art in NER is.

Technology is really going to advance once we have anything that comes close to human level on NER and relation extraction. Kind of like self driving cars, the basic ideas have been around for decades, but performance in realistic adverse conditions remains awful.
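
It's easy to try this yourself: the Stanford tagger can be driven from NLTK (a sketch; the model and jar paths depend on where you unpacked the Stanford NER download):

    from nltk.tag import StanfordNERTagger

    st = StanfordNERTagger(
        'classifiers/english.all.3class.distsim.crf.ser.gz',  # your path may differ
        'stanford-ner.jar')

    tokens = "the Kindle screen is sharper than the Nexus screen".split()
    print(st.tag(tokens))
    # The 3-class model only knows PERSON/ORGANIZATION/LOCATION, so product
    # names typically come back as 'O' (no entity) or get mislabeled.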

Flux Architecture Visual Cheatsheet danmaz74.me
121 points by ihenvyr  10 hours ago   42 comments top 17
1
aikah 4 minutes ago 0 replies      
React is a good view layer, but React doesn't help me architect a web application. React creates an impedance mismatch between how it is supposed to work (unidirectional data-flow + immutable collections) and how JavaScript actually works (everything's mutable). If Flux actually solved something the right way, we wouldn't need 10 different implementations (reflux, redux, alt, ...). React is extremely smart, but clearly not a lot of thought has been put into that Flux thing.
2
findjashua 1 hour ago 2 replies      
If you're wondering about which flux implementation to start with, I'd highly recommend checking out Redux : https://github.com/gaearon/redux

The main idea is to think of stores as reducers (redux = reducers + flux). He also gave a very good talk on it at react-europe: https://www.youtube.com/watch?v=xsSnOQynTHs

3
morley 6 hours ago 2 replies      
This is a really great visual!

In my experience, the toughest thing to grasp about Flux was how to handle async server actions. It's something that a lot of tutorials (including, unfortunately, this graphic) handwave, but it's one of the first things you need to nail down if you want to do anything exciting in an SPA.

The todos usually make it seem like you have to to have this flow of information:

ActionCreator -> ApiUtil -> ActionCreator -> Store

...but if you use the same action creator, you end up with a circular dependency. So you actually have to create a separate file of ServerActionCreators that are only called from ApiUtils:

ViewActionCreator -> ApiUtil -> ServerActionCreator -> Store

This seems like a lot of boilerplate. At my job, we've simplified this a lot by using Reflux, which has async actions that run one Store callback when a call gets initiated, and another related one when it gets completed. But it's not ideal.

Personally, I'd rather see a bigger app than a TodoMVC implementation with a "correct" example of async server actions.

4
polskibus 5 hours ago 3 replies      
The problem here is the horizontal arrow between two stores. This leads to a web of connections, the usual spaghetti of data binding. There should either be a "store of stores" or another element that would direct the data flow so that it is unidirectional.

If anyone knows about a flux implementation that solves this - I'd love to hear about it!

5
yoklov 5 hours ago 1 reply      
I used React+Flux for a level editor for a game recently. It seemed to me like there were one or two too many levels of indirection.

Specifically, I'm not sure what problem the Dispatcher is actually solving. It seems to just add boilerplate and indirection for little benefit.

6
drb311 7 hours ago 0 replies      
This flow chart explains things that are hard to grasp in words.

Dan Roam's Back of a Napkin is a good guide to thinking and communicating in this visual way. You can get quite a long way with the freebies on his site, but it's worth shelling out for the whole book.

http://www.danroam.com/the-back-of-the-napkin/

When I studied the diagram, I found myself running through an imaginary scenario in my head. The Overview text makes it hard to do this. If you don't create a diagram and want to explain a process clearly, don't explain the components -- follow an example action through the process, and your readers will grasp it much better.

7
bradrydzewski 1 hour ago 0 replies      
This is a really great intro. I like how it expands on the basic flux diagram that I see everywhere.

One thing missing from many flux tutorials and sample applications that I would love to see included is error handling. How do you manage errors in your flux applications? Do you keep them in the store? If using something like react-router, how do you ensure you flush errors from the store as your routes change and they are no longer applicable to the data in view?

8
MPiccinato 6 hours ago 1 reply      
This is great! We dove into React and Flux a few months ago and it took way too long to come to the realization that this flowchart puts together.

One of the big things I think a lot of React/Flux tutorials miss out on is the "Smart and Dumb" components. This was the missing "view controller" that I am used to with MVC, and the flowchart illustrates it nicely.

9
hoverbear 3 hours ago 0 replies      
The little "Tweet/etc this" box is so awesome, it actually half covers the first letter of each line and follows you down the whole page! How user friendly.
10
amirouche 3 hours ago 0 replies      
I'm upvoting this because I find ReactJS a good development tool and would like to see more articles like this. That said, the article only explains a simple case that is not a real-world scenario. Others have commented on this issue in the comments.
11
malandrew 1 hour ago 1 reply      
This is actually all you really need to know about flux:

https://twitter.com/substack/status/621818725159710720

https://twitter.com/substack/status/621832733564628992

https://twitter.com/substack/status/621639688919515136

That first tweet succinctly explains flux in less than 140 characters.

At the end of the day, pretty much every React "best practice" I've seen converges on approaches that raynos/mercury offers out-of-the-box.

React's decision to allow local state inside components was an "original sin", and every single engineer who decides to author a library to make React/Flux simpler is really just engineering around the poor decision to allow local state.

Half of those should-, might-, could-update-if-they-feel-like-it React class props are unnecessary complexity that had to be bolted on to overcome issues with local state.

Here's a better approach:

(1) components that are pure functions that take in the state they should render. State hasn't changed? Don't call the function.

(2) components whose state needs to be tracked can export a function that produces an instance of the state that component consumes.

(3) compose the state of the UI by calling the state-instantiating function of all the components in your UI. Nest as appropriate.

(4) Make sure all these state instances are bijective lenses that keep one source of truth for a certain state value. Everywhere that state is needed or could be modified receives that "cursor" via dependency injection. The simplest example demonstrating a state cursor is the raynos/observ library.

The composed state object is the waist of your app's hourglass, just like IP is the waist of the Internet. All I/O modifies state, and that state propagates from there. Anything rendered to the screen is the O in I/O. Any events coming from your mouse or other peripherals are the I in I/O. Any syncing via XHR or websockets can be both the I and the O. All I/O flows to the state.

Anything that needs to react to changes in that state has two options: subscribe to state change events (push) or read the current state as necessary (pull).
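
Here's a compressed sketch of (1)-(4). The Observ cell is hand-rolled to mimic the read/subscribe/set shape of raynos/observ, and everything else is invented for illustration:

  // A tiny observable cell: cell() reads, cell.set() writes,
  // cell(fn) subscribes. Mimics the shape of raynos/observ.
  function Observ(initial) {
    var value = initial;
    var listeners = [];
    function cell(listener) {
      if (typeof listener === 'function') {
        listeners.push(listener);
        return;
      }
      return value;
    }
    cell.set = function (next) {
      value = next;
      listeners.forEach(function (l) { l(next); });
    };
    return cell;
  }

  // (2)+(3): each component exports a state constructor; compose them.
  function TodoState() { return Observ({ items: [], draft: '' }); }
  function AppState() { return { todos: TodoState() }; }

  // (1): a pure render function -- same state in, same output out.
  function renderTodos(todos) {
    return todos.items.map(function (t) { return '* ' + t; }).join('\n');
  }

  // (4): one source of truth; all I/O flows through the cursor.
  var state = AppState();
  state.todos(function (next) {       // push: re-render on change
    console.log(renderTodos(next));
  });
  state.todos.set({ items: ['ship it'], draft: '' }); // any I/O lands here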

This really isn't all that hard, and React/Flux has overcomplicated things immensely and given it a flashy name. The myriad libraries that purport to make it easier and simpler are just overcompensating for something that fundamentally needs to be re-engineered, but won't be, because that would break backwards compatibility and require people to move state logic (read: business logic) out of the component classes (where it probably shouldn't have been in the first place).

12
RobertoG 9 hours ago 1 reply      
Does anybody know whether the library proposed in the article, Alt, together with ReactJS makes for a full solution (like AngularJS, for instance)?
13
danmaz74 9 hours ago 1 reply      
OP here, if you have any questions/comments feel free to ask
14
adeptima 5 hours ago 0 replies      
15
ihenvyr 9 hours ago 0 replies      
Yes please, Facebook team, update the Todo tutorial.
16
larcara 9 hours ago 0 replies      
Great! thanks
17
tomjen3 8 hours ago 2 replies      
Holy fuck, that JavaScript completely breaks actually viewing the image -- can you just link straight to the .png?

Thanks.

Twilio announces $130M series E round twilio.com
65 points by philnash  6 hours ago   26 comments top 6
1
jakozaur 2 hours ago 2 replies      
Several years ago, they would just have IPO'd. They seem to be in great shape to do that:

http://techcrunch.com/2013/06/07/twilio-raises-a-70m-series-...

http://blogs.wsj.com/venturecapital/2015/02/20/twilio-positi...

Yet another case of late-stage private capital.

2
djloche 2 hours ago 1 reply      
Interesting to see that Bessemer Venture Partners isn't listed as participating in this round. They've been involved in all the previous rounds.
3
anantzoid 1 hour ago 2 replies      
Let's hope they use this to stabilize their systems and make them more reliable. We've been using their service extensively, and there are times when an SMS doesn't get delivered.
4
pkaye 2 hours ago 6 replies      
How many rounds of funding can a startup have? For example, has any company ever reached a Series H round?
5
rjv 2 hours ago 2 replies      
Do early-round investors bank off of later ones? I don't really know anything about fundraising, so my perception is that money is just getting shifted around from one investor to the next.
6
andyidsinga 3 hours ago 0 replies      
that announcement is phony ;)

seriously though, Twilio is a great product, cheers to them!

How Driscolls Is Hacking the Strawberry of the Future bloomberg.com
13 points by hgennaro  5 hours ago   6 comments top 5
1
Robadob 15 minutes ago 1 reply      
There is an interesting image[1] at the bottom of the article about how to extract the DNA from a strawberry. I'm surprised (and partly sceptical) that it's really that simple.

[1] http://assets.bwbx.io/images/iBaJbMBMgtrI/v1/-1x-1.jpg

2
Vraxx 40 minutes ago 0 replies      
I always find it fun and interesting to see all the complexities of other fields that one would normally not think about. This did a great job of shedding light on something I had considered pretty mundane.
3
illegalsmile 12 minutes ago 0 replies      
Driscoll strawberries are disgusting compared to a more naturally grown strawberry.
4
crististm 6 minutes ago 0 replies      
5
vtlynch 59 minutes ago 0 replies      
How Many Times Has Your Personal Information Been Exposed to Hackers? nytimes.com
94 points by igonvalue  8 hours ago   28 comments top 10
1
jcadam 6 hours ago 2 replies      
Just about every organization that I've entrusted my PII to -- insurance companies (thanks, Anthem), banks, government agencies (thanks VA, OPM, DoD), etc., has managed to lose control of it. I don't know why I even bother trying to keep my identity secure.

I'm probably buying 3 houses in 3 different states as I write this.

2
glenscott1 8 hours ago 7 replies      
This is a good tool for determining whether your account has been compromised by hackers:

https://haveibeenpwned.com

3
Balgair 3 hours ago 0 replies      
Wow, that OPM attack sure was a doozy. If you check it, your entire financial history is out there. We talk about national security concerns and the bloviating from the feds on HN all the time, but that OPM hack really was a heck of a national security attack.
4
jedbrown 2 hours ago 0 replies      
This article takes a rather fatalistic perspective. It seems to me companies and government agencies should have a strict need-to-know policy and bear some liability for failures. If you can't keep information secure, you shouldn't have it. Acting as though it is secure while it is regularly compromised is reckless wishful thinking.

I realize that perfect security is fantasy, but the practices of many of these organizations don't pass the laugh test. We'd be vastly better off if they would hire a security professional and listen to her.

5
joesmo 1 hour ago 0 replies      
6
drallison 3 hours ago 1 reply      
A useful learning tool and an interesting means of shaping public opinion. The NY Times computation takes into account publicly acknowledged exploits; one wonders how many undetected exploits there are. The number computed here is almost certainly a lower bound.

I worry that the clamor about personal information exposure is going to be used to motivate restrictive and ultimately ineffectual government action and, perhaps, kill the goose that has been laying the golden eggs.

7
denzil_correa 5 hours ago 0 replies      
I tried to submit this link a few hours back and HN threw me a "Deadlink" page. Strange.
8
sarciszewski 3 hours ago 1 reply      
9
simeondd 4 hours ago 0 replies      
10
VLM 3 hours ago 1 reply      
       cached 29 July 2015 19:02:04 GMT