Hacker News with inline top comments - 30 Apr 2016
1
Software error doomed Japanese Hitomi spacecraft scientificamerican.com
37 points by nimbs  1 hour ago   16 comments top 4
1
dwc 5 minutes ago 0 replies      
This is the key passage:

 The spacecraft then automatically switched into a safe mode and, at about 4:10 a.m., fired thrusters to try to stop the rotation. But because the wrong command had been uploaded, the firing caused the spacecraft to accelerate further. (The improper command had been uploaded to the satellite weeks earlier without proper testing; JAXA says that it is investigating what happened.)
Going into safe mode is a thing. It happens with NASA stuff, ESA stuff, whatever. The spacecraft failed to stabilize and went into safe mode, and that's proper. Whatever glitch in the systems, this would have saved it and allowed for recovery.

But the uploaded command to shed rotational velocity was wrong. This is what caused the loss of the spacecraft. I'm sure there will be a pretty heavy postmortem on how this happened.

2
tlb 23 minutes ago 0 replies      
I'm surprised that they had to design custom inertial stabilization, considering how many times it's been done successfully before. Was it NIH mentality? Or did it have some requirement for more precise stabilization than other space telescopes?
3
unchocked 1 hour ago 1 reply      
It seems like the core error was in the inertial measurement unit: it would be a common cause between the reaction wheel failures and the failure of the despin burn.
4
microcolonel 1 hour ago 2 replies      
This is just depressing. How was this not tested? This is complete sign reversal of a control output; you'd think it would show up immediately.
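(A toy illustration of why a sign-reversed control output is expected to show up immediately; this is not JAXA's actual control law, and the gain and numbers are invented:)

    # Toy model: thruster torque proportional to spin rate. A correct despin
    # command has negative gain and damps the rotation; the sign-flipped
    # command feeds it instead.
    def simulate(gain, omega=1.0, dt=1.0, steps=20):
        for _ in range(steps):
            omega += gain * omega * dt
        return omega

    print(simulate(-0.2))   # right sign: spin decays toward 0 (~0.01)
    print(simulate(+0.2))   # wrong sign: spin blows up (~38)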
2
Neural Networks Are Impressively Good at Compression probablydance.com
136 points by ingve  4 hours ago   36 comments top 12
1
kaffeinecoma 2 hours ago 3 replies      
People are knocking this guy for not being an expert and maybe getting some details wrong. Maybe it's a little bit like watching a non-programmer stumble their way through a blog post about learning to program- experienced programmers may cringe a bit.

But I really appreciate these kinds of write-ups: he declares his non-expertise up-front, and then proceeds to document his understanding as he goes along. There's something useful about this kind of blog post for non-experts.

I'm working my way through Karpathy's writeup on RNNs (http://karpathy.github.io/2015/05/21/rnn-effectiveness). I've mechanically translated his Python to Go, and even managed to make it work. But I still don't entirely understand the math behind it. Now obviously Karpathy IS an expert, but despite his extremely well-written blog post, a lot of it is still somewhat impenetrable to me ("gradient descent"? I took Linear Algebra oh, about 25 years ago). So sometimes it's nice to see other people who are a bit bewildered by things like tanh(), yet still press on and try to understand the overall process.
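(For what it's worth, gradient descent is less mysterious in one dimension; a toy sketch with a made-up function and step size, not Karpathy's code:)

    # Minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient.
    def grad(w):
        return 2.0 * (w - 3.0)    # derivative of f; points uphill

    w, lr = 0.0, 0.1              # arbitrary start and learning rate
    for _ in range(50):
        w -= lr * grad(w)         # step downhill
    print(w)                      # converges toward 3.0, the minimum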

And FWIW I had the same reaction as the author when I started toying around with neural nets- it's shocking how small the hidden layer can be and still do useful stuff. It seems like magic, and sometimes you have to run through it step-by-step to understand it.

2
svantana 3 hours ago 2 replies      
I thought this would be about something like giving the Hutter Prize [1] a go using character RNNs [2]. Instead, it's a somewhat confused "gentle introduction" to neural nets (of which there are plenty already, of higher quality), and compression is discussed somewhat hand-wavily, not properly with bits and entropy like us information theorists would have it :)

[1] http://prize.hutter1.net
[2] https://github.com/karpathy/char-rnn

3
jg8610 3 hours ago 0 replies      
It's good to see people write up their experiments, it's useful for the rest of us to test how we understand neural nets.

I think there are a few mistakes in your maths though. You can learn a 1-1 discrete mapping through a single node where you are using a one-hot vector. You just assign a weight to each of the input nodes, and then use a delta function on the other side. If I understood correctly, this is what you are doing.
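(A small numpy sketch of that single-node, one-hot argument; the weights and nearest-weight readout here are illustrative, not the blog post's actual code:)

    import numpy as np

    n = 8
    w = np.arange(n, dtype=float)              # any n distinct weights will do

    def encode(i):
        x = np.zeros(n)
        x[i] = 1.0                             # one-hot vector for symbol i
        return float(w @ x)                    # single hidden activation = w[i]

    def decode(h):
        return int(np.argmin(np.abs(w - h)))   # nearest-weight ("delta") readout

    assert all(decode(encode(i)) == i for i in range(n))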

Also, if you use a tanh in your input layer, but keep a linear output layer (as you start off with), you are still doing a linear approximation because you have a rank H (where H is the hidden layer) matrix that is trying to linearly approximate your input data. This is done optimally using PCA.

I'd second the advice to look into the coursera courses, or the nando de freitas oxford course on youtube (that actually has a really nice derivation of backprop).

4
algorithm314 10 minutes ago 0 replies      
Neural networks have been used for over a decade in Context Mixing compressors like paq https://en.wikipedia.org/wiki/PAQ
5
brendanofallon 3 hours ago 2 replies      
I think some of his statements are a little off-base, for instance, regarding the choice of tanh() for the activation function he says: "But mostly it's 'because it works like this and it doesn't work when we change it.'" People have spent a fair amount of time investigating the properties of different activation functions, and my understanding is that tanh() is generally not in favor, and ReLU (rectified linear) is probably a better choice for many applications, for reasons that are well understood. Maybe the author isn't all that familiar with the field?
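(For reference, the two activations being compared; the saturation of tanh versus the non-saturating ReLU is the usual, well-understood argument:)

    import numpy as np

    def tanh(x):
        return np.tanh(x)            # saturates at +/-1, gradients vanish for large |x|

    def relu(x):
        return np.maximum(0.0, x)    # gradient is 1 for x > 0, 0 otherwise

    x = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])
    print(tanh(x))    # roughly [-1.0, -0.46, 0.0, 0.46, 1.0]
    print(relu(x))    # [0.0, 0.0, 0.0, 0.5, 5.0]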
6
pigscantfly 3 hours ago 0 replies      
I was expecting to read about some experiments with autoencoders here, not this tutorial. I'm not sure how the author is learning about neural nets or where they are now, but anyone at this level of knowledge who wants to know more would be well served by going through Geoff Hinton's class on Coursera.
7
jcoffland 1 hour ago 2 replies      
This is not compression. The author focuses on the small number of nodes and calls it compression but it's the edges that encode the information and there are a lot of them.
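(To put numbers on the edges-vs-nodes point; the layer sizes here are made up for illustration, not taken from the post:)

    # In a fully connected net the weights (edges) dominate the node count.
    layers = [256, 32, 256]                                  # hypothetical sizes
    edges = sum(a * b for a, b in zip(layers, layers[1:]))   # weight matrices
    biases = sum(layers[1:])
    print(sum(layers), edges + biases)                       # 544 nodes, 16672 parameters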
8
astazangasta 3 hours ago 0 replies      
Lossy compression. Reminds me of this hilarious bit: http://web.archive.org/web/20050402231231/http://lzip.source...
9
rdlecler1 3 hours ago 0 replies      
I'm sure with perturbation analysis you could also remove even more edges from the ANN.
10
tacos 3 hours ago 1 reply      
This is both a fascinating read of an obviously bright and clever fellow and also a TERRIBLE way to approach building intuition around the emergent behaviors of neural networks. If you want to learn about them without repeating the terrible mistakes this guy is about to make if he continues down this path, pick up a book.
11
pjbrunet 3 hours ago 0 replies      
I suppose that one episode of POI (the chain of laptops on ice with Pink Floyd) could have implied a neural network of sorts, manifesting at the hardware level. If you remember the episode, they had to compress the AI to fit in a briefcase.
12
robbiep 2 hours ago 1 reply      
Ok so 3 layers increases the complexity but simplifies the connections. The human cortex has 6 layers.

Now create 6 layers, classify different sets of inputs as 'different' to represent different neurochemicals (you need several excitatory and several inhibitory and then a couple of very small master neurochemicals that have major excitatory and inhibitory responses to represent the dopamine network and a whole system for the amygdala), cluster different groups to either respond to inputs or create outputs, and set it loose on an environment. How close would we come to something that behaves as if conscious?

3
Snowden's Rubik's Cube da5is.com
71 points by ssclafani  3 hours ago   25 comments top 7
1
Luc 4 minutes ago 0 replies      
I've got a Dayan Zhanchi stickerless cube that I could hide at least 12 SD cards in (in the hollow edge pieces).

Example of cubelets taken apart: http://www.ebay.com/itm/US-Seller-Dayantou-Guhong-3x3-Sticke...

2
cmrx64 1 hour ago 3 replies      
This movie looks like it has a good chance of doing a better job than even The Imitation Game at ruining and fundamentally misrepresenting a topic I care about and a person I respect.
3
orblivion 1 hour ago 2 replies      
I just watched the trailer. Clichéd brilliant patriotic dude saves the day. Bonus: Movie OS. This is gonna be such a great bad movie.
4
Johnny555 57 minutes ago 0 replies      
I'm sure you could grind out more space for the SD card with a Dremel tool to make it fit more seamlessly.

Or you could pop out a piece and you'd have room for dozens of MicroSD cards inside the pieces.

5
teekert 10 minutes ago 0 replies      
First time the actual character is more handsome than the actor ;)
6
BinaryIdiot 1 hour ago 1 reply      
Interesting, I hadn't realized this was a cube already in existence. Neat!

I've broken a few Rubik's cubes in my day and so many of them have hollow squares, so I'm not surprised you could hide stuff in one. Hell, if you customize one you can probably get it to hold a huge amount of data.

7
tamana 53 minutes ago 3 replies      
Why do Rubik's cubes have removable center tiles?
4
Taskwarrior - intelligent TODO list taskwarrior.org
82 points by albertzeyer  4 hours ago   57 comments top 15
1
gglitch 3 hours ago 3 replies      
In another comment thread on Taskwarrior, someone more intelligent than me suggested just using plain dirs and files, one file per task. Then you get all of Unix for free, with any metadata you want as part of the filename or contents. I'm a former org-mode user who now just uses a paper book, but if I still wanted to use a computer, dirs+files would be hard to beat. One or more folders for todo, one or more folders for done, stick it all in Dropbox.
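(A bare-bones sketch of the dirs+files scheme described above; the base path and filename convention are invented for illustration:)

    import os, shutil, time

    BASE = os.path.expanduser("~/tasks")    # e.g. a folder inside Dropbox

    def add(title):
        """One file per task, dropped into todo/."""
        os.makedirs(os.path.join(BASE, "todo"), exist_ok=True)
        name = "%d-%s.txt" % (int(time.time()), title.replace(" ", "-"))
        open(os.path.join(BASE, "todo", name), "w").close()
        return name

    def done(name):
        """Completing a task is just moving its file to done/."""
        os.makedirs(os.path.join(BASE, "done"), exist_ok=True)
        shutil.move(os.path.join(BASE, "todo", name), os.path.join(BASE, "done", name))

    def pending():
        """Listing is plain ls; grep/find work the same way."""
        return sorted(os.listdir(os.path.join(BASE, "todo")))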
2
julian_t 11 minutes ago 0 replies      
I've started using TW and find it great for the simple to-do list stuff I need, with enough control over dependencies and scheduling.

I've tried org mode, but I just don't get on with the emacs way of working. TW, as a simple command line tool, integrates well with scripts and other apps, and an Übersicht plugin displays the task list on my OS X desktop.

One thing that does drive me nuts, though, is its habit of changing task IDs. I know why it works like that, but it makes it harder to work with.

3
dhagz 1 hour ago 1 reply      
So, I went to the downloads page to see how I can get it on my laptop. I have a MacBook Pro, so I scroll down looking to see if there's anything for OSX's package managers...and there are 3 different Homebrew packages listed, each with a different name. Do they conflict? Are they all needed? Why isn't there just a plain `brew install taskwarrior`? It's a little off-putting.
4
jvehent 34 minutes ago 0 replies      
I use taskwarrior with bugwarrior [1] to automatically maintain a list of pending bugs/issues and pull requests assigned to me. The initial setup takes a couple hours, but it's a great way to track stuff spread across dozens of locations.

[1] http://bugwarrior.readthedocs.io/en/latest/

5
covi 1 hour ago 2 replies      
As a Google Inbox user, I've found its reminder features to be very good for these kinds of day-to-day tasks.
6
lambdasue 51 minutes ago 0 replies      
For all plain-text file lovers out there, there's actually quite a good tool that combines both the power of plain text files (i.e. Markdown) and taskwarrior, called taskwiki [1]. Provides the best of both worlds for me: tasks in plain text files synced to TW -> synced to Taskserver -> synced to mobile app. I can also use all the power of Taskwarrior directly on the command line.

[1] https://github.com/tbabej/taskwiki

7
ywecur 3 hours ago 2 replies      
While this is a decent tool, it doesn't come close to the power of org-mode in terms of implementing task management systems such as GTD.

Still, if you're not an Emacs user and want a FOSS task management system this is the way to go.

8
Numberwang 3 hours ago 10 replies      
A bit unrelated, but are people actually getting anything out of personal task managers and ToDo lists? Supposedly they are to help me do the right thing at the right time and not worry too much. I've started to believe part one of that problem manages itself more or less, and part two just gets worse and worse by trying to be organized with projects, labels, calendars, ideas and ambitions.

I'm currently thinking maybe I should encrypt all my notes and make the password unavailable for me for the next year or so whilst I try starting every day with a blank page.

9
soyiuz 54 minutes ago 0 replies      
An impressive tool and website. Like others in this thread I use a plain txt file + git to manage my todos. Simplicity trumps any other feature for me. Unix tools are a bonus.
10
awwaiid 3 hours ago 0 replies      
If you are getting started with taskwarrior, I recommend starting out with a very very minimal usage of the features, and then gradually add in the ones you like/need. Otherwise, like many cool advanced tools (vim, emacs...) you might get overwhelmed and abandon it.
11
great_kraken 3 hours ago 1 reply      
I tried using Taskwarrior with sync to Mirakel last year, as a free replacement once my Todoist trial ended. The sync didn't function correctly, and notifications weren't reliable. Ended up paying for Todoist. It's a real shame, because I much prefer going with free & open source solutions whenever possible.
12
edward 3 hours ago 1 reply      
13
rndstr 2 hours ago 2 replies      
Haven't tried it but for CLI apps, numbered references to items (named ID in taskwarrior) are often troubling for me.

In the Quick Demonstration[0] it seems that deleting will re-number the items but doing

 $ task 1 done
 $ task 2 done

and then `task list` should actually lead to

 1 Buy eggs
shouldn't it?

or then

 3 Bake cake
if IDs are kept

[0]: https://taskwarrior.org/docs/start.html

14
stewbrew 3 hours ago 1 reply      
Why would I want to choose Taskwarrior over todo.txt?
15
kusanagiblade 55 minutes ago 1 reply      
Just out of curiosity, who really uses this thing day-to-day? And why???? I can't even be disciplined enough to use my to-doist app on phone. No!!!!!!!
5
Claude Shannon Turns 1100100 newyorker.com
75 points by anthotny  4 hours ago   11 comments top 7
1
tunap 1 hour ago 0 replies      
Try to imagine where we would be today if Bell Labs and/or MIT had fired him for his eccentricities, "trivial" distractions and prolonged absences. Imagine if he hadn't been present when Bardeen & Brattain(and subsequently Shockley) produced the modern transistor and took off on one of his tangential side projects. Just "wow".

edit: giving Bardeen & Brattain their due.

2
fitzwatermellow 9 minutes ago 0 replies      
IEEE also had a nice retrospective:

Claude Shannon: Tinkerer, Prankster, and Father of Information Theory

http://spectrum.ieee.org/computing/software/claude-shannon-t...

3
Cyph0n 2 hours ago 3 replies      
I hate non-standard length binary numbers! Just write 01100100 for God's sake...

On topic, Shannon is my favorite scientist of the last century, after von Neumann of course. If his only contribution was connecting Boolean algebra to digital logic, he would have been a pioneer. But no, he goes ahead and defines information theory out of nowhere, even though the majority of applications that can utilize it were not yet practical! I mean, come on Claude!

4
justin66 1 hour ago 0 replies      
The genius featured in almost all the CS textbooks who never won a Turing Award. I don't even know exactly where that ranks on my list of reasons for strongly disliking the ACM, but it's on there.
5
TheCoreh 11 minutes ago 0 replies      
We're only 28 years away from a nice, round number!
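(The arithmetic behind the headline and the joke, for anyone squinting at the binary:)

    print(int("1100100", 2))   # 100, the age in the headline
    print(bin(100 + 28))       # '0b10000000', the round number 28 years out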
6
CurtHagenlocher 1 hour ago 0 replies      
As an aside, Dr. McEliece was one of the best teachers I had while at Caltech, with an amazing ability to present dry material in a way that was both informative and interesting.
7
kelukelugames 2 hours ago 0 replies      
6
How Russia Works on Intercepting Messaging Apps bellingcat.com
88 points by adamnemecek  2 hours ago   20 comments top 9
1
dendory 1 hour ago 1 reply      
It's all fine and good to point out how insecure SMS is, and various ways that 2-factor auth may have been improved, but I think the bottom line is that when your adversary is the government, police and telecom provider working together, the means you use to protect yourself are irrelevant; if they really want to bring you down, they will. The solution is in democratization and lobbying for proper laws to be passed.
2
mtamizi 1 hour ago 0 replies      
It's one thing to rely on phone numbers for authentication, it's yet another to store the messages. The same scenario on WhatsApp would have given the attacker access to the account but not the message history, which is presumably what they were after.
3
pigscantfly 1 hour ago 1 reply      
It's interesting that the Russian state relies on these multi-party procedures (security service contacts telco to cut off service to initiate attack) rather than unilateral interception, which as the author says, might not only be simpler but also less detectable.
4
finishingmove 1 hour ago 3 replies      
I love reading gray on gray. No, really, the web needs more of this. Let's also make sure the font is readable only on retina displays.
5
kabouseng 50 minutes ago 0 replies      
There are a couple of points in this article I disagree with.

First off, two-factor authentication and account resets use SMS for recovery not so much to get access to your social graph and thereby "growth hacking", but because digitally proving the identity of someone you don't know personally is a hard, unsolved problem (PGP key ceremonies and web of trust certainly didn't solve it).

Second, saying you should use end to end encryption doesn't prevent someone from resetting your account and getting access, so it is not some sort of silver bullet. It does however prevent an attacker from reading your past messages, but after getting access they will be able to read all your current messages on multi device services.

6
weitzj 1 hour ago 1 reply      
It would be nice to have a distributed 2 factor authentication using something like bitcoin or ipfs.com
7
nxzero 1 hour ago 2 replies      
Even E2E apps like Signal use a device's number to set up an account; meaning as of a month ago, at least for the app, the device number was required to create an account.

Maybe this isn't an issue, but seems odd to me.

8
chinathrow 1 hour ago 0 replies      
Lesson No 1: Don't tie your messaging account to your cell number.

E.g. Threema allows that.

9
guilhas 1 hour ago 0 replies      
A guy working for NATO thinks that Russia...
7
Hogwatch - a bandwidth monitor that shows per process network transfer github.com
146 points by cujanovic  8 hours ago   39 comments top 13
1
sciurus 3 hours ago 0 replies      
Under the hood, this is using nethogs. Here's how nethogs associates traffic with a process:

On linux the file /proc/net/tcp lists all established TCP connections. It includes the local and remote socket addresses and the inode number for the local socket. Nethogs uses libpcap to sniff traffic and associate it with its entry in /proc/net/tcp. It takes the inode from there and scans through /proc/*/fd/ looking for the file descriptor that has that inode to determine which process has the socket open. Once it finds the process it adds it to a table of inode to process id mappings so it doesn't have to scan through /proc again the second time a packet for that connection comes through.
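(A rough Python sketch of the /proc walk described above; an illustration of the mechanism, not nethogs' actual C++ code:)

    import os, glob

    def tcp_sockets():
        """Yield (local, remote, inode) for each entry in /proc/net/tcp."""
        with open("/proc/net/tcp") as f:
            next(f)                                  # skip the header line
            for line in f:
                parts = line.split()
                yield parts[1], parts[2], parts[9]   # local addr, remote addr, inode

    def pid_for_inode(inode):
        """Scan /proc/*/fd for a descriptor whose link target is socket:[inode]."""
        target = "socket:[%s]" % inode
        for fd in glob.glob("/proc/[0-9]*/fd/*"):
            try:
                if os.readlink(fd) == target:
                    return int(fd.split("/")[2])     # the pid component of the path
            except OSError:
                continue                             # process exited or not readable
        return None

    for local, remote, inode in tcp_sockets():
        print(local, remote, pid_for_inode(inode))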

2
akshayKMR 2 hours ago 0 replies      
Hey, project author here. This is the first python package or project for that matter I've built for my college assignment.

I am really overwhelmed by the response. However the project is still very much unfinished.

Here are some things that need to be fixed/added, e.g.:

- fix some bugs on frontend (proper sort on listing/chart switching etc.)

- Kill nethogs process on exit // fails sometimes

- Store history for restarts.

- proper packaging.

- unit tests.

I'll add the above to the readme.md

First time on Hackernews/Github feed feels great though. Cheers.

3
eps 5 hours ago 2 replies      
On Windows: Sysinternals Process Explorer, the already mentioned Glasswire and NetBalancer, and a bunch of other apps, almost all of which aren't very good at all. But Glasswire is very nice.
4
TheAceOfHearts 7 hours ago 3 replies      
On OS X I use Little Snitch. Unfortunately, it's not free.

I noticed in one of your screenshots you use LS as well, do they serve different purposes or was it just a project for fun?

5
aparadja 5 hours ago 1 reply      
If you're on Mac and just want to monitor the connections that each process makes (not the bandwidth), Radio Silence just got a built-in network monitor a few weeks ago: https://radiosilenceapp.com

Disclaimer: I'm the author

6
kalleboo 6 hours ago 0 replies      
A quick alternative on Mac is "nettop" in the Terminal
7
hanief 5 hours ago 0 replies      
Nice. I use Little Snitch myself. I also appreciate the clever naming. ;)
8
lsv1 6 hours ago 0 replies      
The final build should have proper shebangs and I also noticed the CSS is a little messy. I'll submit a pull request in a bit.

Otherwise I'll give it a shot.

9
gardano 3 hours ago 0 replies      
Dammit, now I've gone down a Terry Pratchett black hole again. Thanks, cujanovic.
10
yduuz 6 hours ago 3 replies      
NetBalancer is a similar tool for Windows, with interesting functionality, but not free https://netbalancer.com
11
jcoffland 2 hours ago 0 replies      
etherape is another option on Linux.
12
janee 3 hours ago 1 reply      
Would be nice to configure a central server to push data to, and list the machine name with each entry, so you can monitor all machines on a network and see who and what is hogging the bandwidth. Might give it a go
13
8
The next Bank of England £5, £10 and £20 banknotes will be printed on polymer bankofengland.co.uk
29 points by ohjeez  3 hours ago   36 comments top 6
1
tombrossman 1 minute ago 0 replies      
Slightly off-topic but did anyone else find it amusing that the Bank of England website does not support HTTPS? I think that's the first bank website I've ever seen that uses HTTP only.
2
switch007 54 minutes ago 0 replies      
> The Serious Fraud Office and the Australian Federal Police are conducting a joint investigation into the activities of the employees and agents of Securency International PTY Ltd and their alleged corrupt role in securing international polymer banknote contracts.

> The trial of Peter Chapman is currently on-going and began on 4 April 2016 at Southwark Crown Court. Mr Chapman's first court appearance was on 5 May 2015 at Westminster Magistrates' Court. There he faced six charges under the Prevention of Corruption Act of allegedly making corrupt payments to an overseas official in order to secure contracts of polymer for his company, Securency. The alleged offences took place between 9 July 2007 and 18 March 2009. He has been remanded in custody.

https://www.sfo.gov.uk/cases/innovia-securency-pty-ltd/

3
semi-extrinsic 1 hour ago 3 replies      
Polymer banknotes were introduced in '88 by Australia, who switched completely to polymer in '96. Other (ex-)commonwealth nations have followed suit, so it's not that surprising Britain is switching as well.
4
jonahrd 34 minutes ago 1 reply      
I like the Canadian plastic money. The only complaint is that when working retail you have to deal with people who fold their bills to put them in a small wallet, and the bills are so much harder to unfold than paper, making them difficult to count.
5
Aelinsaar 1 hour ago 3 replies      
Is the US likely to follow suit?
6
vegabook 44 minutes ago 2 replies      
a) be thankful that banknotes are not being withdrawn altogether, but...

b) mention of the £50 suspicious by its absence.

9
This Is Your Brain on Podcasts nytimes.com
34 points by dnetesn  4 hours ago   14 comments top 6
1
mapleoin 2 hours ago 0 replies      
This is what I got from this article: Researchers looked at MRI scans of people listening to a Podcast on their daily commute. The researchers don't know what any of it means.
2
Amorymeltzer 17 minutes ago 0 replies      
Amusingly, I heard about this research on the Nature podcast.

I was skeptical at first (these fMRI studies are a dime a dozen these days, with dubious results) but this one is pretty robust. The use of training data is nice, and definitely adds a level of verifiability to it. It's a nice step toward actually understanding how the brain processes not just language but ideas. There is definitely a future where we can understand what people are thinking by observing their brain.

3
wimagguc 1 hour ago 1 reply      
The paper this article references has not much to do with listening to podcasts while commuting. It's about how "the meaning of language is represented in regions [of the brain]".

Wonder if the post would have received the same attention with the more precise title "This is your brain on stories".

4
hgh 31 minutes ago 4 replies      
Article itself is pretty light on the details, but good opportunity to ask the HN community -- which podcasts really fire your neurons?
5
agumonkey 58 minutes ago 0 replies      
Podcasts are my tool of choice for bike riding smooth cardio sessions. They drive my mind off the effort, it makes my rhythm more regular and the experience more pleasurable. Sometimes I'm even thinking "hard" while pedaling.
6
ecuzzillo 41 minutes ago 1 reply      
Link to paper full text?
10
.note.GNU-stack (2010) chys.info
42 points by networked  4 hours ago   10 comments top 3
1
userbinator 4 hours ago 3 replies      
Couldn't they have just used a bit in the header to indicate this, instead of requiring an entire section description (despite it being empty)? I believe there are some unused bits in ELF headers.

I do RE so I've inspected many binaries, and my impression is that the amount of "GNU promotion", for lack of a better word, in ELFs seems much higher than what Windows' toolchains will do for PEs. "GNU-stack" is entirely non-descriptive; something like "NXstack" would make more sense to me. There are also lots of sections (even in a linked, executable binary), lots of textual auxiliary information that isn't used (but can offer thumbprinting/forensic opportunities --- or risks, depending on your perspective), and overall why!? design decisions like this one. Maybe it's just my bias because I started with PE, but I'm of the opinion that it's more straightforward than ELF --- especially some features like dynamic linking.
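(The marker is easy to inspect; a small sketch assuming the third-party pyelftools package and a placeholder object file name:)

    from elftools.elf.elffile import ELFFile   # third-party: pyelftools

    with open("foo.o", "rb") as f:             # "foo.o" is a placeholder path
        elf = ELFFile(f)
        sect = elf.get_section_by_name(".note.GNU-stack")
        if sect is None:
            print("no .note.GNU-stack section; the linker falls back to its default")
        else:
            # The section is empty; only its flags matter. SHF_EXECINSTR (0x4)
            # set on it means the object asks for an executable stack.
            print("executable stack requested:", bool(sect["sh_flags"] & 0x4))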

2
jwilk 4 hours ago 0 replies      
It's 503 for me, so here's an archived copy: https://archive.is/VchZA
11
The curse of the potato washingtonpost.com
63 points by miiiiiike  7 hours ago   30 comments top 11
1
Smaug123 4 hours ago 2 replies      
I'm not convinced by "complex hierarchies and taxation schemes". It sounds just as plausible that grain was one of the first really tradeable forms of food, so people became able to trade non-food-related skills for food, with people from further away. Your ability to do this is badly limited if you're using wet foods like potatoes, because you are only able to trade with people fairly nearby before the potatoes start sprouting; but with grain you can send your son off to the market three hours' wagon-ride away with a month's worth of crop, for instance.
2
rtpg 4 hours ago 1 reply      
By the early 18th century, Japan had developed full-on derivatives markets (not just futures but CDS-style bets) for its rice crops.

I like the "portability/storability" hypothesis this article is proposing for societal development. If only because it becomes a proxy for financial systems. And you don't have to "convince" people that your money is useful: You can always eat it!

3
barking 4 hours ago 4 replies      
Showing Ireland as being dependent on roots and tubers in pre-colonial times is just plain wrong. It was the colonisers who brought the potato to Ireland.

This makes me suspect that the data is being made to fit the theory here and that they didn't want to leave out Ireland because, well, people would expect it to be there in any article about the impact of the potato :)

4
whiddershins 3 hours ago 2 replies      
I loved the starting premise that it was so much better to develop large, hierarchical, societies.

Better for whom?

Better for surviving contact with other societies, sure.

Better for quality of life for the average person? Probably not so much.

5
dmckeon 4 hours ago 1 reply      
Grain crops can be burned in the field by attackers, as well as stolen. The WP article seems to overlook the possibility of slavery in early cultures, which would put social choices about crops into a vastly different dynamic.
6
SwellJoe 34 minutes ago 0 replies      
Usually, I avoid the comment sections on most websites, due to the alarmingly low quality...but the comments on this article are actually quite good, and provide some interesting counter arguments and additional historical context.
7
mcguire 3 hours ago 0 replies      
The article fails to mention the other downside to roots: a bumper crop last year and a drought this year means you still starve.
8
badloginagain 4 hours ago 0 replies      
It's an interesting theory, but as with all of these kind of theories, mostly conjecture.

Crops are a cornerstone of human society, so it stands to reason that different crop types would affect societies differently. I think this theory might be a factor in human development, but I think it would be a small one.

9
vinceguidry 3 hours ago 0 replies      
I can't take an article discussing the New World's technological inferiority seriously if it doesn't mention horses.
10
NoMoreNicksLeft 3 hours ago 1 reply      
From the article:

> But the fact that grains posed a security risk may have been a blessing in disguise. The economists believe that societies cultivating crops like wheat and barley may have experienced extra pressure to protect their harvests, galvanizing the creation of warrior classes and the development of complex hierarchies and taxation schemes.

How is that a blessing? Sounds more like a curse.

11
ap22213 3 hours ago 0 replies      
I love subjects like this. But, I have a hard time believing its conclusions because it reveals more of our modern modes and assumptions than the prehistoric ones.
12
HyperDither tinrocket.com
29 points by shawndumas  5 hours ago   7 comments top 2
1
pervycreeper 33 minutes ago 0 replies      
More info here: https://en.wikipedia.org/wiki/Dither#Algorithms

This particular one seems to look pretty good when the pixel size is relatively large, but would give an inferior result compared to some of the others listed above for higher dpi. Also, it's too bad he didn't just disclose the specific details of the algorithm.
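(The post doesn't spell out the algorithm; HyperDither is generally described as using Bill Atkinson's error-diffusion variant, which for a grayscale image looks roughly like this sketch:)

    def atkinson_dither(gray):
        """gray: 2-D list of 0-255 values; returns a 2-D list of 0/255 pixels.
        Only 6/8 of the quantization error is diffused, to six nearby pixels."""
        h, w = len(gray), len(gray[0])
        img = [list(row) for row in gray]               # work on a copy
        out = [[0] * w for _ in range(h)]
        neighbors = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]  # (dx, dy)
        for y in range(h):
            for x in range(w):
                old = img[y][x]
                new = 255 if old > 127 else 0
                out[y][x] = new
                err = (old - new) / 8.0
                for dx, dy in neighbors:
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h:
                        img[ny][nx] += err
        return out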

2
lobo_tuerto 1 hour ago 1 reply      
Is there anything like this for Linux?
13
Microsoft Flow microsoft.com
160 points by imarihantnahata  11 hours ago   87 comments top 24
1
justsaysmthng 4 hours ago 6 replies      
Just watched the "how it works" video and I'm less than impressed (can anything impress me these days?).

"Why constantly check e-mail when you can get a text message when anyone important e-mails you..."

Actually, I receive a push notification whenever I receive an email. I'd hate to receive SMS messages instead of e-mails.

---

"Say someone tweets something about your company. Set up a flow that follows them, sends a nice reply, adds him to a spreadsheet which then gets sent to Salesforce".

Yeah, so someone tweets "YourCompany fucking sucks!" and now the flow automatically follows him, sends a ridiculous "nice reply" and adds an obviously unsatisfied customer (or whatever) to the CRM..

---

"Working smarter, so you can work less and do more",

I think I've heard this promise a thousand times before.

I don't know about the Flow service, but the ad video is quite dumb and uninspired, just like the background music... who composes all these identical tech ad songs?

3
skocznymroczny 5 hours ago 3 replies      
Good thing they picked up a unique name that won't collide with any other similarly named projects.
4
stonedge 3 hours ago 3 replies      
To me, this looks like their own implementation of the Azure Logic Apps product. (https://azure.microsoft.com/en-us/documentation/articles/app...).

I don't believe this is intended to be a consumer level product at all. In that sense, it's not an IFTTT competitor. Given it's got implementation points to Sql Azure, Azure Blob storage, swagger, etc. this is likely meant as a product for devs to use to hook-up integrations.

5
hadrien01 6 hours ago 2 replies      
Microsoft already created (and abandoned) an IFTTT competitor in 2012: http://onx.ms
6
dbarlett 1 hour ago 0 replies      
It's not mentioned on the landing page, but Flow integrates with PowerApps [1]. The PowerApps console shows Flows [2], and PowerApps can trigger Flows [3].

[1] https://powerapps.microsoft.com/en-us/

[2] http://i.imgur.com/dKxw9Tz.png

[3] http://i.imgur.com/UjJ7Bcg.png

7
blazespin 2 hours ago 0 replies      
I really hate how MSFT doesn't always respect its own SSO. Jeebus. I can't login with my live account. It's free, but you have to use a work/school account. Really?
8
g051051 6 hours ago 2 replies      
It says it's a free service and doesn't directly mention any restrictions, but when you try to sign up it will only accept something it thinks is a work or school email address. So I guess self-employed people who use Gmail don't count?
9
tyingq 5 hours ago 1 reply      
They have more integrations than are showing on the home page, but the only way to see the full list is to sign up.

I uploaded some screen shots showing the available services/integrations:

http://imgur.com/a/NR7Af

10
meesterdude 2 hours ago 1 reply      
yet another stupid and uninformative $company_name + $company_product title.

Better title: "Microsoft Flow: Automation Workflow and Task automation"

11
jszymborski 3 hours ago 0 replies      
Would love to see an IFTT.com integration :P
12
daw___ 8 hours ago 1 reply      
Any clues on why the site asks to uniquely identify my phone?
13
piyushco 3 hours ago 0 replies      
Looks like Microsoft is making zapier / ifttt.
15
sdfjkl 2 hours ago 0 replies      
Microsoft Automator?
16
Touche 6 hours ago 0 replies      
If the developers are reading this, when I watch the video in Opera and make it full screen the video doesn't stay centered and I can only see half of it. It's falling off the left side of the page.
17
slantaclaus 2 hours ago 0 replies      
Pretty cool feature macs have had for over 10 years
18
chris_wot 5 hours ago 0 replies      
There used to be a tool called conduit, part of Gnome. It was pretty amazing, in fact pretty much did DropBox before Dropbox but also synced everything and allowed for custom actions.

Could have been a killer app for Gnome, but Gnome decided that redesigning the notifications area and the clock was more important.

20
blue_dinner 2 hours ago 1 reply      
I wonder what this means for IFTTT?
21
jhwhite 5 hours ago 1 reply      
Why would I use Flow over IFTTT?
22
chinathrow 7 hours ago 0 replies      
A new Zapier competitor (happy paying customer here).
23
tacos 6 hours ago 1 reply      
If this Microsoft thing then that Microsoft thing.
24
arsalanb 5 hours ago 6 replies      
Microsoft always gives the vibe of a company that is desperate to salvage some pride from its glory days. It was once a giant, but now they're on the brink of redundancy, in terms of being viewed as "innovative".

This opinion may be controversial to many, but it is what a lot of people are thinking. They experiment, which is phenomenal. But none of those experiments have been a major hit.

Apple has flagship products like the iPhone, etc. Google has search, Android, and even Youtube at the center of its existence. A fallback option, if you please.

Mircosoft "somewhat" has Windows, but hasn't really nailed it yet. These "experiments" will not bring back the glory days.

14
Who's downloading pirated papers? Everyone sciencemag.org
938 points by nkurz  1 day ago   361 comments top 61
1
leot 1 day ago 6 replies      
Elsevier and others can go something themselves.

Their behavior over the last two decades has been little more than reprehensible rent-seeking. Whatever goodwill they had disappeared as they sapped with increasing ruthlessness the dollars of students and non-profits. See, e.g., one of many such figures: http://www.lib.washington.edu/scholpub/images/economics_grap...

It is absurd and dishonest to call Sci-Hub "piracy", given that all of its contents were originally created and given away with the express goal of wide dissemination.

2
daveguy 1 day ago 3 replies      
If anyone is having trouble accessing https://scihub.io (the site providing the papers) you can find the site directly at the IP address: https://31.184.194.81/ ... Apparently the domain name was seized. The certificate is for sci-hub.io (safe to accept). Or you can just connect to http://31.184.194.81/ if you don't want to bother with the warnings (and are ok with DOIs and papers being transmitted without encryption).

EDIT:

Their other domains:

https://sci-hub.cc (uses sci-hub.io certificate)

https://sci-hub.bz (uses a separate certificate and ip address -- 104.28.20.155)

And a tor site: scihub22266oqcxt.onion

3
taneq 1 day ago 6 replies      
> "I'm all for universal access, but not theft!" tweeted Elsevier's director of universal access, Alicia Wise

You want everybody to have access, but you don't want them to get it for free.

Wow, so you want the entire world to all pay for the material you were given for free. Hmmmm.

4
hyperion2010 1 day ago 1 reply      
> Graduate students who want to access an article from the Elsevier system should work with their department chair, professor of the class, or their faculty thesis adviser for assistance.

Now THAT is chuckle worthy.

5
jimrandomh 1 day ago 2 replies      
Elsevier believes they have United States law on their side. And they're right; they do have US law on their side. That just doesn't mean much anymore; it's been worn away by decades of conspicuous corruption, and lost most of its respect. In principle, this should be addressed by the US legislature. In practice, academia has effectively voted no-confidence and bypassed the legal system entirely.
6
bendykstra 1 day ago 1 reply      
I recently wanted to read a five page paper on graph theory from 1977. The company entrusted with it 40 years ago is charging $38 for it. It is just absurd. I can't imagine that the author, now long dead, would have wanted his work to be so difficult to read.
7
kken 1 day ago 0 replies      
It looks like this is EXACTLY what is needed to disrupt this abusive industry. There have been numerous attempts at forcing a change in positive ways - open access journals, campaigns by researchers and so on. But none of these had any effect.

Let Elsevier go down in flames. I have published more than 50 academic papers and have actively avoided Elsevier. To be honest, this was not too difficult, as they have a lot of journals addressing specialized subtopics that rather seem to appeal to manuscripts that were rejected in first tier journals.

8
Artoemius 1 day ago 3 replies      
I'm sad Aaron Swartz did not live to see this unfolding. He might have been in prison now, but he would still be a world-class hero.
9
Zelmor 1 day ago 1 reply      
It is in the interest of the general public to mirror these services as many times as we can. Paywalls for scientific articles are holding back the whole species. Fusion reactor when? Maybe when the patents for technological advances are no longer held by oil companies around the world, whose short-term interests cap our technological progress to make the world a more livable place.

I have a couple of PhD student friends, who give me first-hand accounts of biology articles being inaccessible in 2nd world countries due to paywalls and the financial limitations of research facilities. It is holding back research in just about every field, I suppose.

10
sachkris 1 day ago 1 reply      
The first three paragraphs of the article clearly tell what is wrong with the system - "Publishers are overcharging for content". Basically, they just continued their business model from the printed-book era to the e-book era without much change. The publishers should think of allowing individuals to subscribe to the content and charge them (nominally) for what they use, rather than putting the load on the Universities and making them subscribe to the entire spectrum of journals. The Pay-per-view model of Elsevier currently charges an individual researcher (a staggering) "$31.50 per article or chapter for most Elsevier content. Select titles are priced between $19.95 and $41.95 (subject to change)." [0]

[0] -https://www.elsevier.com/solutions/sciencedirect/content/pay...

11
dredmorbius 1 day ago 2 replies      
I was interviewed for this article though not mentioned in it. My use case isn't mentioned: unaffiliated researchers with limited access to journals doing our own exploration of areas. I've compiled a library of several thousand articles (and via other sources, books), of which, for both access and portability, I prefer electronic versions. My 10" tablet is almost perfect for reading printed material, and functions as a small research library on its own. (Organising this content is another headache -- Android and apps are sadly lacking in this area, one of the few options being Mendeley, owned by, you guessed it, Elsevier. Burn it with fire.)

While I can and do access materials from libraries, including online access, Sci-Hub is both more complete and far more reliable and convenient. Find a resource, plug in the URL or DOI, and I've got it. Versus locating the same reference independently through one of several distinct libraries, each with their own multiple subsystems, authenticating, and sometimes, sometimes not, securing the material.

Another point Bohannon failed to address, which is covered in the discussion here, is the role of journal publishers as gatekeepers not only to content but to careers. Academics, increasingly squeezed by budget retrenchments and awful working conditions[1], must publish through prestige journals in order to establish and advance their careers.

Journal publishers are rent-seeking at both ends of this channel.

Sci-Hub, or as I like to call it, the Library of Alexandra, is a tour de force demonstration that information is a public good, and that information access wants, and needs, to be free. Sci-Hub isn't a complete answer to the problems of current academic publishing (again: publish or perish), but it's a relief valve for many, and an absolute and irrefutable proof of the pressing demands for access.

________________________________

Notes:

1. See the amazingly awful story of a young newlywed biology postdoc who lost her arm in a lab explosion involving an improperly instrumented gas cylinder in which oxygen and hydrogen were being mixed under pressure. This after repeatedly reporting short circuits and electric shocks from the equipment.

http://chemjobber.blogspot.com/2016/03/postdoc-loses-arm-in-...

12
blaze33 1 day ago 2 replies      
So one commenter said that "Journals are used as a proxy for quality", another that they lose so much time browsing through low-quality papers. Isn't the root issue that we need an open and standard way to review, sort and rate all those academic papers?

Developing voting, flagging, moderating mechanisms, that's what many developers have done for years now on the web. Obviously you wouldn't rate papers like reddit comments but plug arxiv/sci-hub to a system allowing researchers to say what papers they reviewed, what their degree of approval is, eventually where it's been published, who references this paper etc. Seems like Arxiv has an endorsement system but as they say "The endorsement process is not peer review", just a way to reduce spam. Isn't there anything done on this subject ?

13
yagyu 1 day ago 0 replies      
What can you do as a researcher without violating your contract? I decided to not referee papers to be paywalled. You can, too!

http://www.jonaseinarsson.se/2016/only-open-access-peer-revi...

14
jnsaff2 1 day ago 1 reply      
"The numbers for Ashburn, Virginia, the top U.S. city with nearly 100,000 Sci-Hub requests, are harder to interpret."

Am I the only one who thinks this is just the location of AWS us-east-1? People might be using proxies located there or have bulk download jobs.

15
cameldrv 1 day ago 1 reply      
One of the reasons the deep learning field is moving so fast is that everything is open access. Generally, it's considered prestigious to present at a conference, and not much gets published in journals. The major conferences have been adopting a model of posting first on arxiv and then submitting to the conference, so the reviewers see the paper at the same time as the general public.

The amazing thing is that weeks after the paper hits arxiv, new papers are coming out, improving on the previous one. By the time the paper is accepted and the conference rolls around, it's actually almost old-hat.

16
darawk 10 hours ago 0 replies      
These journals contribute precisely nothing to the world. They don't pay the authors, and they pay the reviewers a trivial amount (if at all). They provide exactly nothing, and they extract tremendous amounts of capital from what would have been some of its most productive uses.

There are only a few things that I can truly say I will watch wither and die with unconditional, unreserved glee, and the academic publishing industry is one of them.

17
jmcgough 1 day ago 2 replies      
Researchers have been (illegally) helping each other out with paper access for years. I used to hang out in the neuroscience group on livejournal years ago, and about half the posts were people asking "Does anyone have access to Foo et al 2012?" and then having it passed along to them by email.
18
eggy 1 day ago 1 reply      
They may have started out servicing a need, simply setting up some servers and serving as a central repository (though I do not understand why this wasn't just set up by some other party, like a university or research foundation, for public use), but Elsevier have turned that into holding publicly-funded research hostage. You know when you are debating with somebody, and their logic runs out, they start floundering and emoting? That's what the comments from Elsevier are starting to sound like. I have no problem with them making some money from ads to help their monthly server hosting and maintenance costs, but they have a weird, self-strangulating business model that is headed nowhere real fast.
19
thirdsun 1 day ago 5 replies      
I don't get it - some commenters are suggesting that the authors of those papers aren't paid, but the publisher is. Why would the author want to limit the spread and accessibility of his work in such a way?

As an outsider who doesn't have any experience with scientific papers and how to get them, it seems very obvious to me that there should be a huge demand for an open platform to publish and read those papers - from authors and readers alike. Why does this role need to be filled by an at best semi-legal party like SciHub?

The fact that users with legitimate access to those papers actually opt for SciHub to get them confirms that the current solutions just aren't working for their users. So why would authors rely on them?

20
Hondor 1 day ago 1 reply      
If the business model of Elsevier/etc somehow collapses, I wonder how universities will make hiring decisions? The high fees of their journals are effectively a recruitment or candidate selection service paid for by university libraries and serving departments when they hire faculty. Perhaps they'll have to revert to assessing applicants on their merits instead of such arbitrary metrics as the impact factor of a journal that they published their work in.
21
Dolores12 1 day ago 0 replies      
Papers that were meant to be free are now pirated. Nice paradigm shift.
22
EvgeniyZh 1 day ago 0 replies      
The system is broken. Rich universities are ready to pay money, so publishers can raise prices, basically making papers unavailable to poor universities that can't afford the subscriptions, or to individuals.

At the same time publishers hardly spend any money - most people access publications online, not in print, and tons of journals don't have even minimal review. So the publisher does nothing and wants a lot of money for it.

Scientists at the same time are stuck - if you want people to read your paper, you need a good journal or conference, or else you might not be heard. Let alone prestige and fame.

So, something has to be done. Maybe Sci-Hub is that something.

23
onetimePete 9 hours ago 0 replies      
The economy of disrupting ruptured by its shear forces - aka by itself? AIs crawling over the knowledge base? Good thing they can't publish papers with meaningful recombined results yet. Or can they? The irony is that the church of singularity is not relevant to the process - it's like declaring evolution some godlike principle or cataclysmic event - while it just is a glider gun going forth, not knowing, not wanting to know. Still interesting times. I guess in the end, the science journals just were roadblocks in Alphabet's way. So they have to go - so they will die, the usual way - with their resource supply systems stripped from them by a not "attackable" third party, condemned by those who benefit, as a barbaric, lawless act.

If they had foresight, they would release all the papers they have into the public domain, and have their true opponents wrestle with the GPL and thus the allmende that produced the wealth. Instead they are having a nap-at-the-steering-wheel moment. In the end it will help mankind. So can we drop the charade and get on with it?

24
davesque 1 day ago 1 reply      
It's just so funny how blatantly parasitic some of these publishing companies are.
25
ujjwalg 20 hours ago 0 replies      
I wrote about horrors of scientific publishing not just from the viewership side, but also from publishing side a few years ago. tl;dr - Publishers have just too much control on publishing process and distribution process when they bring almost nothing to the table in the digital world we are living today. The very fact that as a scientist you can only submit a paper to one journal at a time and wait for months and sometimes a year to get a response is insane and completely unacceptable. Imagine if you have to do it when you are applying for a job.

If someone wants to read it, here is a link: http://ujjwalg.com/blog/2013/5/3/the-problem-with-the-scient...

26
musha68k 1 day ago 1 reply      
Wow <3 Alexandra Elbakyan is my new favorite super hero:

http://www.sciencemag.org/news/2016/04/alexandra-elbakyan-fo...

27
abhi3 1 day ago 3 replies      
While pirating music or movies, one may try to justify one's actions, but deep down it feels wrong.

This doesn't even feel wrong. These parasites had it coming.

28
fferen 1 day ago 0 replies      
As a grad student at a university with ample access to journals, I still use sci-hub occasionally, as many journal sites are surprisingly unreliable (cough IEEE, cough Nature).
29
wsfull 1 day ago 0 replies      
If the cost to students attending universities that have adequate subscriptions is small -- previously, commenters suggested this portion of their tuition amounted to only a small annual fee -- then what would be the reasons the publishers would not offer individual subscriptions at similar cost to the public? As any serious student knows, the "a la carte" prices are absurd -- how successful have they been with this idea?

One silly idea deserves another:

Generally, journals manage their subscriptions through filtering on IP address ranges.

Imagine a customer-ISP agreement that had an option whereby a subscriber could "opt in" to academic journals for a few extra dollars per month.

These subscribers might then be assigned addresses in certain designated ranges by their ISP.

30
tibbon 23 hours ago 1 reply      
20 years ago, most people assumed that MP3 file sharing would never take us to a place where you can pay a company $9.99/month for unlimited streaming of a huge portion of published music. Surely, the Industry would never allow that to happen!

Yet, it did. I'm told that the journals will never allow themselves to become antiquated and that they play a vital role in the review process. I'm seeing this being chipped away at slowly, and have to wonder if soon that small movement will accelerate greatly.

31
throwaway3523 1 day ago 0 replies      
Just as a note about who gets denied access.... I work in a US research lab but my department got spun off into a company. With the loss of my university credentials came my loss of library access.... I still need to read papers in random journals to do my job though.

EDIT: A cash poor company.

32
kriro 1 day ago 2 replies      
I'm ideologically primed to favor free access to information (especially if any tax money was involved, which is true for pretty much all research), but I think a middle ground of moving scientific publishing to a Netflix/iTunes/Kindle model would solve a lot of issues. Access a paper -> (micro) payment to the journal (which could in return set up another (micro) payment to the actual author). The biggest issue is the pricing model (35$/paper when a researcher typically needs to read 100+ papers per paper he/she writes) and the lack of a single point of access.

Like iTunes for science. Go do it someone. I won't envy you, I'll expect you to fail but if you pull it off I'll be forever grateful.

34
topstriker515 1 day ago 1 reply      
I'm not familiar with the procedure of publishing a paper, so please excuse my ignorance. What's preventing someone from submitting their work to publishers AND uploading it to an open access platform? They can rely on the journal for quality-review/validation/etc. while still allowing for wide dissemination.
36
guelo 1 day ago 0 replies      
As in so many other endeavors the law works to protect the true criminals against humanity.
37
lottin 1 day ago 0 replies      
This isn't just about the academia. As soon as you want to write about any technical subject, even if it's only for your own personal use, you need to study the relevant literature. Often this means reading dozens of papers. There's no way a normal person who isn't working at a university or research institution can afford that.
38
foobarbecue 23 hours ago 0 replies      
The author refers to Napster as a "pirate site." I stopped reading at that point because I realized the technical info in the document is likely to be incorrect.
39
ylem 23 hours ago 0 replies      
Just curious--where does ResearchGate fit into all of this? I see constant requests where people ask for papers, but is it actually legal to give them out? There is an interesting discussion here: https://www.researchgate.net/post/How_is_ResearchGate_dealin...
40
rdl 1 day ago 0 replies      
Journals becoming free would probably contribute vastly more to the economy than the rents extracted by the journal publishing industry. Sci-Hub is one way, but perhaps another way would be either a philanthropist or government buying the companies out, or applying eminent domain to the research.
41
cant_kant 1 day ago 0 replies      
If the paper is retracted, do the publishers give a refund? See http://retractionwatch.com/ to see the massive number of retractions that happen.
42
kcole16 1 day ago 1 reply      
What exactly is the benefit of Elsevier for researchers? Is there no free way to publish papers?
43
gregw134 1 day ago 2 replies      
Have any of the mirror sites uploaded all 50M papers as a torrent yet? They really should...
44
santialbo 1 day ago 0 replies      
Before sci-hub, whenever I needed a paper I would email the author kindly asking for a copy. Almost everytime I would have the PDF by the end of the day. Researchers are happy to share their knowledge.
45
sumanthvepa 1 day ago 2 replies      
I may be naive here, but why not search and download the article's preprint from arXiv? Would it be that much different from the final paper? Why would you need to pirate at all?
46
Fiahil 1 day ago 0 replies      
One of the many problems we have as humans and citizens of the world is our dependence on US laws. It's not like it's cataclysmic-bad, but there are some issues that, for the rest of the world, would greatly benefit from a little push by US citizens on US lawmakers (it's not like we can do something about it ourselves; we're not invited to the conversation).

This is one of them.

47
KKKKkkkk1 1 day ago 0 replies      
This discussion thread feels like a deja vu from the Napster and Kazaa days. The wheels of the US legal system have been set in motion, and even if we think that information wants to be free, the days of sci-hub are numbered.
48
chris_wot 1 day ago 0 replies      
They compare SciHub to Napster. I have news for U.S. Corporate giants - this is no Napster. For one, this isn't downloading what is essentially entertainment. This is downloading serious content without which researchers and students couldn't help the world progress in positive and vital ways. Secondly, a lot of this content wasn't funded by the ones publishing it, and it shows in the comments every time this gets broached because almost every writer whose work is published tells the same story - they didn't get paid by the publisher and all they are doing is preventing their work from being distributed to the widest audience possible.

Thirdly, this time the bullies can't touch those who are distributing the work. One of the consequences of the pervasive reach of the Internet is that if pirated material falls outside of the jurisdiction of a nation that strictly endorses its copyright law then there is virtually nothing that nation can do about it.

When Carmen Ortiz and Stephen Heymann prosecuted Aaron Swartz they thought they had struck a blow for copyright holders everywhere. And they did, for a few years. Less than three years later, however, the rules of the game have changed, and this time there is no one they can easily prosecute, because the people distributing the work are outside their grasp and there is no way to persecute them or make an example of them.

The established order has changed. Those who should have known better, who should have put conditions on publicly financed research to be open to all instead allowed greedy and amoral companies like Elsevier to take the hard work of others and sell it for a profit, giving almost nothing back to the system they are pillaging. Too late they choose to open the door slightly ajar to make those on the outside think they will be granted access, only to slam that door shut before they can get inside. Too late do they realise that a gentleman thief has broken in and distributed their ill-gotten gains to the ones they stole from.

Those who ruthlessly pursued the Aaron Swartzes of the world have finally been undone. Their arrogance and rapacity blinded them to the reality that they cannot deny information to the world. They did not heed the words of those who enabled the digital revolution. This struggle was predicted by Stewart Brand in 1984, who said to Steve Wozniak at the first Hackers Conference in Marin County, near San Francisco:

"On the one hand, information wants to be expensive because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is lower and lower all the time. So you have these two things fighting against each other." [1]

That tension continues, but the old guard is now fighting a rear guard action. They will fight back, but they will remain in retreat until one day they give up, or are forced to and information becomes free. On that day be thankful, because the bindings of the ones who wish to shackle your mind and creativity will have lost the power to do so, to the incalculable good of humanity.

1. https://www.edge.org/documents/archive/edge338.html

49
jerryhuang100 1 day ago 0 replies      
Some papers on that top-10 list are even free on the original publisher sites or NCBI (e.g. NEJM). So why get them from Sci-Hub rather than from the original? Do people not read supplements any more?
50
jbmorgado 1 day ago 0 replies      
It baffles me how a publisher that neither creates the content nor pays for it was deemed by the law to be the copyright holder of works it didn't create and didn't pay for.
51
amelius 1 day ago 0 replies      
Sci-hub is nice, but it would be nicer if these papers were available in torrent form, so we wouldn't have to depend on a single source (which can fail at some point).
52
ericjang 1 day ago 1 reply      
Does this suggest that publishing companies like Reed Elsevier plc (RELX group) will eventually be starved of revenue from papers? Or have they figured out an alternative business model to adapt to the changing times?
53
jcoffland 1 day ago 0 replies      
I find a login at my alma mater and an HTTP proxy over SSH quite useful for accessing research papers.
54
vbezhenar 1 day ago 0 replies      
There's an easy way for Elsevier to find compromised accounts used to download articles. I wonder if they will pursue account owners some day.
55
dewiz 1 day ago 1 reply      
Is sci hub dead then? DNS doesn't resolve the host anymore
56
mrkgnao 1 day ago 0 replies      
Can someone explain the regular (weekly?) dips in the graph of Sci-Hub usage? Is it people not working on Sundays?
57
nxzero 1 day ago 0 replies      
Access to pubs is just a symptom of a huge issue: calling science "science". Science as it is practiced is at best a "soft" science and at worst, in many cases, whack quackery that preys on the weak.

As an example of what I mean: as of 2010, 91% of published papers in the field of psychology supported their original hypothesis; if this isn't an obvious red flag, I'm not sure what would be.

Intellectual fraud needs to be criminalized, and claims based on science must require not only reproducibility but also solid means of denying authors any form of plausible deniability, so they cannot escape responsibility for what in many cases is fraud, or worse, blindly seeding their own bias as reality.

_______

[1] Quackery https://en.m.wikipedia.org/wiki/Quackery

[2] Positive Results Increase Down the Hierarchy of the Sciences. http://journals.plos.org/plosone/article?id=10.1371/journal....

[3] Why Most Published Research Findings Are False. http://journals.plos.org/plosmedicine/article?id=10.1371/jou...

58
gambiting 1 day ago 1 reply      
I guess that soon governments will start forcing ISPs to ban access to Sci-Hub, just like they did with Pirate Bay and Kickass Torrents. It's still trivial to type into google "pirate bay proxy" and access it quickly, but I can see some legislation happening against it soon.
59
dschiptsov 1 day ago 0 replies      
Knowledge shall be free and unbiased by authority. Free access to knowledge for everyone, plus healthy scepticism, produces miracles. In India, for example, the lack of restrictions and the tolerance of every opinion produced, besides thousands of sects and cults, the best philosophy this world has seen so far.

The site, it seems, is a part of natural social movement, similar to FOSS, rather than paid content piracy. It is against restrictions.

When some parasites try to construct a paywall, society will sooner or later find a way around it, be it knowledge or any other form of digital content - selling the output of the sendfile syscall by those who haven't produced anything will never be tolerated; it violates the hardwired notion of fairness. Especially when one assumes that these papers were written to spread knowledge and contribute to the scientific community (a la contributing to open source), not to make money by selling copies - it isn't a paperback.

60
tacos 19 hours ago 0 replies      
The least secure format on the web (PDF) + sketchy (ex-)Soviet servers + high-end researchers in aerospace and materials science. What could possibly go wrong?
61
wutf 1 day ago 0 replies      
Torrent the papers. Why hasn't this been done?
15
Strategic Scala Style: Practical Type Safety lihaoyi.com
63 points by lihaoyi  9 hours ago   12 comments top 2
1
azernik 2 hours ago 1 reply      
A great post!

A suggestion for change: Try[V] is, for most purposes, a more intuitive alternative to Either[V, T] for exception handling. It can be passed a block that may throw a Throwable and Do the Right Thing, it has functions named in ways that make it more clear which alternative they operate on (recover, isFailure, isSuccess), and works nicely in for comprehensions:

 val tryManyThingsSequentially = for {
   result <- someDangerousComputation
   other <- otherDangerousComputation(result)
 } yield other
(As far as I understand, it's implemented more or less as an Either[V, Throwable] under the hood.)

2
aji 4 hours ago 1 reply      
My team recently started boxing both integer and string IDs in a service of ours and, though the transition was a little painful, it has paid off immensely in both readability and safety. These kinds of tips aren't just pedantic programming language theory, they're improvements that will actually make your code nicer!
16
Why the NRA hates smart guns techcrunch.com
20 points by jonstokes  4 hours ago   6 comments top 2
1
hirundo 53 minutes ago 1 reply      
The article makes it sound theoretical, but New Jersey already has the Childproof Handgun Law that makes it illegal to sell a non-smart gun "three years after it is determined that personalized handguns are available for retail purposes." https://en.wikipedia.org/wiki/New_Jersey_Childproof_Handgun_...
2
sanj 17 minutes ago 2 replies      
If the better approach is to focus on the "who", why does the NRA oppose background checks?
17
Linux greybeards release beta of systemd-free Debian fork theregister.co.uk
125 points by based2  7 hours ago   89 comments top 12
1
cm3 2 hours ago 1 reply      
One major problem with systemd is that beta software is released as stable and incorporated into distros. I've just realized that systemd-coredump is used to save crashes in the journal and write the files somewhere under /var.

This has the problem that I have to vacuum or nuke the journal if I want to remove knowledge of past crashes, because the coredumpctl tool has no delete command. And coredumpctl hard-codes GDB as the debugger.

To fix all of this I've overridden the sysctl.d file and made it write somewhere I know and can find without the use of an unfinished tool.

I have also regularly had bootup or shutdown hang intermittently. There are also unfixed warnings and errors during shutdown on multiple machines.

My conclusion is that systemd uses the broader population for beta testing, even if you don't follow Fedora Rawhide or Arch rolling.

2
INTPenis 2 hours ago 7 replies      
I can't believe anyone would like to keep using sysvinit-scripts or RC-scripts.

I love bash, I love nitpicking about how to use bash along with the other bash nazis in #bash@freenode. But I would never put myself through sysvinit again. I thoroughly enjoy using distros like CentOS 7, Fedora 23, RHEL7. They're a joy to manage and work with, in part thanks to Systemd.

I never want to debug, troubleshoot, or even look at another sysvinit-script again. :)

3
jcoffland 2 hours ago 4 replies      
One giant program to rule them all was never the Unix way. Systemd's monolithic architecture will be its downfall. Rather than stabilize, I predict that systemd will get more and more complicated and eventually fall due to unwieldy maintenance problems, at which point someone will come up with a better, more modular system.
4
mwcampbell 3 hours ago 1 reply      
Counterpoint from a former systemd skeptic who now likes it: http://changelog.complete.org/archives/9655-count-me-as-a-sy...
5
CSDude 4 hours ago 2 replies      
Init should be just this: http://git.suckless.org/sinit/tree/sinit.c . systemd incorporates too much functionality IMO; I can't get used to it. Well, it's a free world: I have even written a Go init, some guy did a Rust-based one, and Upstart is still being used by Ubuntu 14.04. I like choices, and I'm happy to see this alternative as well.
6
meddlepal 1 hour ago 0 replies      
Meh. Systemd is pretty great. The *Nix purists hate it, but it solves a lot of problems and does it fairly consistently. I love using it in Fedora now where journald has basically replaced syslog completely.
7
Nursie 3 hours ago 1 reply      
I like systemd from the perspective of a service-writer/maintainer, the service files are nice and simple and don't have the repeated boilerplate of old init scripts.

However I still dislike its monolithic nature, its folding in of journaling and udev, and all the rest of the creep.

8
phantom_oracle 53 minutes ago 0 replies      
Everybody mentions the Unix philosophy and that is a very valid point, but I would like to mention the freedom philosophy that open-source/Linux gives (or used to).

These guys who wanted systemd muscled their way in to get this software into the Debian system. They also adopted an "all or nothing" approach, depriving the greater community of something bigger than "do 1 thing and do it well", which is "freedom to do as you wish and choose".

Debian wouldn't have split had they made systemd opt-in, or had Poettering and co. maintained their own Debian-systemd version. That didn't gel well with Red Hat's version of what Open Source is, and so many were deprived of their freedom to NOT CHOOSE systemd BUT still use Debian.

For all the talk of how good/bad systemd is, the proponents of systemd should remember how they deprived others of the freedom to choose, no matter how "ignorant" or "backwards" the detractors are (or how shit sysvinit is).

9
ausjke 2 hours ago 1 reply      
Glad to see this coming out.

https://www.devuan.org

Downloading via torrent now, it's 10G via torrent? Anyways it seems the file server there is under pressure now.

I for one don't want to deal with systemd from the start. Never liked its intrusive, all-in-one, hard to debug design approach.

10
eb0la 2 hours ago 1 reply      
I understand this, because I can't get used to systemd. For my taste, the learning curve needs to be closer to rc.local's.

Problem is, systemd helps you boot fast, and this is important both in the cloud (where every minute is billed) and in telecom (where you have a stringent SLA to cover).

11
lottin 3 hours ago 2 replies      
Nice, but from the users' standpoint it would've been better if they had worked with Debian rather than rolling out their own distro.
12
cisstrd 2 hours ago 2 replies      
No point in discussing systemd anymore quite frankly, it won, doubt that it's good it did, doubt the methods by which it won - I do both - but it won. Many of the major distributions are not really following the Unix philosophy anymore, heck, some stopped quite a long time ago to do so.

I run server operating systems because I like to have the control, the minimalism, the elegance, the security. By trying to be more of a desktop-oriented system, Linux actually loses what attracts people like myself to using it. I doubt they care; I don't think they have to, just as I don't have to use it.

"This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."[1]

"Much of the power of the UNIX operating system comes from a style of program design that makes programs easy to use and, more important, easy to combine with other programs. This style has been called the use of software tools , and depends more on how the programs fit into the programming environment - how they can be used with other programs - than on how they are designed internally. But as the system has become commercially successful and has spread widely, this style has often been compromised, to the detriment of all users. Old programs have become encrusted with dubious features. Newer programs are not always written with attention to proper separation of function and design for interconnection. [...]

This style was based on the use of tools : using programs separately or in combination to get a job done, rather than doing it by hand, by monolithic self-sufficient subsystems, or by special-purpose, one-time programs. [...]

One thing that UNIX does not need is more features. It is successful in part because it has a small number of good ideas that work well together. Merely adding features does not make it easier for users to do things - it just makes the manual thicker. The right solution in the right place is always more effective than haphazard hacking."[2]

Still reading at this point? Great systemd free Linux distributions: Gentoo, Funtoo, Void Linux, among many others and there is also something called BSD (FreeBSD, OpenBSD, DragonflyBSD)... you should check it out. Also see www.suckless.org

[1] http://www.amazon.com/Quarter-Century-UNIX-Peter-Salus/dp/02...

[2] http://harmful.cat-v.org/cat-v/unix_prog_design.pdf

18
Meet SpaceXs SuperDraco Thruster, the Key to Landing a Dragon on Mars arstechnica.com
5 points by shawndumas  35 minutes ago   1 comment top
1
hoorayimhelping 2 minutes ago 0 replies      
Video of 8 SuperDracos performing a pad abort test a year ago: https://www.youtube.com/watch?v=1_FXVjf46T8
19
Watch SpaceXs Falcon 9 Rocket Land on Barge in New 360-Degree Video ir.net
54 points by Ferver777  5 hours ago   13 comments top 3
1
andrenotgiant 4 hours ago 1 reply      
Can we get this link updated to point directly to the YouTube video? This is blogspam.

https://www.youtube.com/watch?v=KDK5TF2BOhQ

2
teleclimber 1 hour ago 1 reply      
This made my Google Cardboard purchase so worth the $20!

I liked hearing the audio too: the hum of the barge, the double-crack of the sonic boom followed by the roar of the engines. Epic!

Hopefully in the near future there will be landings shot in higher resolution, and with better audio.

3
vanattab 4 hours ago 4 replies      
How do they secure the rocket remotely after landing?
20
Review: Japanese Hologram Pop Star Hatsune Miku Tours North America arstechnica.com
36 points by Terretta  5 hours ago   29 comments top 10
1
greggman 24 minutes ago 0 replies      
It's a little disappointing they didn't bring up the fact that Hatsune Miku's songs are created by the fans. Anyone who wants to make a song just buys the software, makes a song, and posts it to YouTube or NicoNicoDouga, and if it takes off it becomes part of her hit list.
2
jacobwcarlson 3 hours ago 2 replies      
The absence of Idoru[1] references is surprising. Almost as surprising as learning that book is 20 years old. I may need to re-read it.

[1] https://en.wikipedia.org/wiki/Idoru

3
louprado 2 hours ago 0 replies      
I can never tell if the ticketing websites always report that tickets are almost sold-out. Regardless, that strategy just worked on me.

So today I will see John Kasich speak in San Jose and then I attend a J-Pop vocaloid concert in the evening. A most weird day even by SF Bay standards.

4
anoplus 35 minutes ago 1 reply      
I can't help but think about the singularity... but regardless, I believe that if done right this form of art can touch the most conservative and cynical.
5
mc32 3 hours ago 3 replies      
Imagine once VR/AR become mainstream. You'd have real artists holding simul-concerts in ten stadiums (or in homes for the homebodies) competing against anime performers who don't get exhausted, get sick, or want to retire... So the farce of the "artist" can be done away with entirely -- just corporate-manufactured music entertainment.
6
cs702 3 hours ago 1 reply      
Here we have large groups of people in the US paying to attend concerts given by a virtual pop star created and controlled by Japanese businessmen.

Reality truly is as strange as fiction!

7
magoghm 3 hours ago 0 replies      
I've been waiting a long time to be able to see this show live. Last December I bought tickets to see Hatsune Miku "live" this June in Mexico City.
8
20andup 4 hours ago 1 reply      
I am guessing you don't need 3d glasses to see the hologram. If so, I am all for it. They need to replace those very annoying glasses in 3d movies.
9
TD-Linux 3 hours ago 2 replies      
It's TODAY in SF, there are still tickets available if you act fast.
10
alexvoda 2 hours ago 1 reply      
There are already plenty of comments on Ars discussing the various inaccuracies of the article.

I'm not sure pointing them out here would be useful.

For starters, cs702's comment here ( https://news.ycombinator.com/reply?id=11602173&goto=item%3Fi... ) is plain wrong.

21
The A-10 Warthog May Be Kept Out of Retirement by Law jalopnik.com
51 points by ourmandave  7 hours ago   85 comments top 8
1
Pxtl 33 minutes ago 1 reply      
I think people are unfair to the Air Force on the A-10. Obviously discontinuing it should be reconsidered, but they've made a massive investment in standardizing on a handful of modern planes instead of a zillion overspecialized vehicles. The F-35, for all its frustrations, is a staggering piece of technology.

Plus, consider that this is a plane expected to take fire. Everyone loves the story of the A-10 that kept flying with half a wing blown off... But imagine being the crew. I'm sure they'd rather be in a vehicle that wasn't hit in the first place, or be controlling it remotely.

That said, the biggest failing has been ignoring the new role of NATO in the ME.

But yes, there should be a joint project for a fixed-wing CAS gunship drone to replace the A-10. The Avenger weapon is overkill; you could build a vehicle half its size around a Vulcan. It would do the same role as an A-10, but without risking crew, and in a smaller, modern, cheaper vehicle.

2
maxerickson 6 hours ago 7 replies      
This is something congress really shouldn't be doing.

For example, how much of the scope ambition and scope creep for the F-35 came from congressional meddling?

Someone recognized for their expertise in infantry strategy and tactics should be placed in charge of close air support procurement, given a preliminary budget to try to meet, and then given free rein to figure out what they can get from that budget.

3
GunboatDiplomat 5 hours ago 0 replies      
The Air Force has proven they hate doing CAS. The CAS role should be removed from the Air Force and given back to the Army.
4
Havoc 4 hours ago 0 replies      
They seem to be very popular with both pilots and ground crews so I say if the people risking their lives are comfortable with it then more power to them. I'm not going to arm-chair-general that away and neither should the law makers.

(They should be planning ahead though for replacement because that comfort won't last forever).

5
cyberpanther 4 hours ago 0 replies      
And now we have two problems! Yes, it's a problem to keep an old plane flying, but it does the job. The bigger problem is the turd costing us an arm and a leg to build that doesn't do the job. It's so idiotic that things have to be done in such a roundabout way. Just cancel the F-35 and spend the money on something useful to society.
6
london888 4 hours ago 0 replies      
There should be parts of the armed services that are stuck in various decades. So you could join the 1980's air force for example, but you weren't allowed to use any equipment made after 1990. For each conflict the Pentagon would decide which decade's forces would be most effective.
7
onetimePete 4 hours ago 0 replies      
Why not scrap the USAF instead? It all breaks down into mission support and drones in the long run, so why not scrap the organization that warps the strategic decisions? Give everything ground-mission related to the Army, and break out everything that is strategic, such as supply and the useless ICBMs of the sky, into the new drone department.
8
discardorama 5 hours ago 1 reply      
How is it that a bunch of elected people without much military knowledge "know" more about these things than the men/women in uniform, with decades of experience?
22
Berkshire Hathaway 2016 Annual Shareholders Meeting yahoo.com
46 points by roschdal  4 hours ago   discuss
23
Intel's Changing Future: Smartphone SoCs Broxton and SoFIA Officially Cancelled anandtech.com
203 points by dbcooper  18 hours ago   110 comments top 19
1
ChuckMcM 12 hours ago 6 replies      
Wow, and ouch. I give them credit for ceding the ground for now but this is another sign of how much ARM has been encroaching on Intel's space.

Scott McNealy said early on that retreating to the data center was where companies went to die. And at the time Sun was selling workstations on desks and crushing DEC; then, when that business got eaten away by Windows, the data center was where Sun was going to make its stand. And then it died.

I think a lot about what the fundamental principle is here. How do seemingly invincible forces in the market get killed by smaller foes? Christensen's Dilemma doesn't really explain it, it describes it, but it doesn't tease out the first principles behind it.

At this point I think it is a sort of Enterprise "blindness", which was something Steve Kleiman at NetApp shared with me. A company can be so good at something that they focus all their energy on it, even when it is vanishing beneath them. Consider a fictional buggy whip company when automobiles came on the scene: right up until the last day they made buggy whips they could be the best whip you could buy, all the secrets to making a great buggy whip were mastered by this company, all the "special sauce" that made them last longer, work in a wide range of conditions, and yet the reality was that the entire reason for buggy whips existing was evaporating with the loss of carriages. By focusing on what the company was the undisputed leader in doing, they ride the wave right into the rocks.

When the company is so stuck on what used to work for them, even after the technology has moved on, they become blind to the dangers. Challenging to watch, even harder if you feel like you can see the train wreck coming. And of course soul crushing if nobody driving the bus will listen to the warnings.

The next sign I'm waiting for is Apple to ship a Macbook laptop with their own 64 bit ARM processor in it. Then it gets really really interesting if AMD can pull off an ARM server chip, going where Intel won't.

2
fallous 10 hours ago 0 replies      
Intel has a long and unfortunate history with promising initiatives into new markets outside their core, and then promptly cutting and running when there's obstacles to those efforts... and I say that as a former (long ago, circa 2000) Intel employee.

Intel saw the explosion of graphics chipsets and decided to try its hand with the i740. After initial teething pains, they designed the i752 and i754 to address these concerns but renewed competition from AMD started to cut their x86 margins and rather than continue on the broader product path, they ejected the graphics business and ran back to Mama x86.

In 1999 and 2000, Intel made several substantial acquisitions in the networking space regarding routers, load balancers, etc. They aggressively tried to move into these markets (I know, I was a sales engineer for that line at the time) but between AMD's Sledgehammer and the dotbomb they promptly fled those markets as well in order to run back to x86.

I can't argue that those were poor business decisions, but I can say that anyone depending on hardware initiatives from Intel that aren't directly x86-related are skating on some mighty thin ice.

3
StillBored 12 hours ago 3 replies      
As I mentioned elsewhere I have a baytrail tablet, and that thing is amazing. But this is the usual corp thinking that got Intel into the trouble they are now in. Ceding the market because they can't currently make billions from it pretty much guarantees they won't ever make any money from it (see POWER giving up the desktop too). Meanwhile as ARM struggles building servers they will have all the time in the world to figure it out, as they now have an uncontested market to fund that work.

I don't understand why Intel hasn't learned the history lessons from all the other processor manufacturers. As soon as you stop competing in low-end markets, the low-end guys build better and better products until they build a higher-end product that makes them the top dog.

I guess it's because Andy Grove isn't around to kick them in the pants.

4
hyperpallium 13 hours ago 4 replies      
Linked statement https://newsroom.intel.com/editorials/brian-krzanich-our-str...

tl;dr Moore's law remains important, not because of speed or power improvements, but cost improvements.

The cloud [servers] will grow; the internet of things [clients] will grow. So we'll do that.

He doesn't say this, but the smartphone soiree is over. Imagination is laying off workers, iPhone sales are down, Samsung Galaxy S7 sales are down. Flagship smartphones are obviously way more powerful than needed for common usage. A $40 smartphone is now so good, it's good enough.

What's the point in intel chasing a ship that has not only sailed, but sunk?

5
Everlag 17 hours ago 2 replies      
Given that they address it in their statement, I'm interested to see how their FPGA offerings develop in the next few years. FPGAs have a brutal learning curve but sit in a yet unscaled power efficiency niche. Could we see FPGA VPS in the future if Intel's backing them?

Then again, that mention is probably just to respond to concerns regarding layoffs ~immediately after the Altera purchase.

6
m_mueller 10 hours ago 2 replies      
What does this mean for Microsoft's idea for a unified Windows 10 platform? Will they have to bury their x86 Windows 10 Smartphone plans as well?
7
raverbashing 10 hours ago 0 replies      
I wonder how much of Intel's pain in mobile is due to the pains of the x86 platform, especially all the overengineered and legacy stuff around it (BIOS, ACPI, chipset and bus logic, etc)

ARM has some equivalent components (bootloader, power management) but they're much, much simpler.

8
ksec 14 hours ago 3 replies      
But what will happen to Intel's modem business? I would love to see some competition for Qualcomm. Broadcom has disbanded its 4G modem and WiFi business as well, leaving the market with very few choices.
9
TazeTSchnitzel 5 hours ago 1 reply      
Oh dear, is this the death of the Windows tablet already?

Microsoft had to make a stripped-down ARM-only version of Windows, Windows RT, because Intel's CPUs just weren't there. Then they abandoned that in favour of full x86 Windows once Intel's CPUs got there. But now Intel is gone.

This means Windows will now only be on larger, more laptop-like devices, I would assume? No more 7" tablets.

11
mariusmg 9 hours ago 1 reply      
Doesn't that mean that Core M is finally ready to replace Atom? Why all the gloom?
12
watersb 16 hours ago 1 reply      
Oh man, the Austin TX design group...
13
beagle3 9 hours ago 0 replies      
Intel has been selling mobile chipsets at a loss for the last few years, because no one would touch them at a break-even price (and don't even think of a profit). And still hardly anyone touched them.

So they basically cut the subsidy (which is understandable) and didn't wait for their market segment to die as it surely would - they just killed it immediately.

14
shrewduser 15 hours ago 2 replies      
A shame; I own a ZenFone 2 and it's a great device. I was looking for more from Intel.
15
educar 17 hours ago 1 reply      
16
nimish 8 hours ago 0 replies      
Interesting that this brings up Aicha Evans, who recently left Intel after heading up their mobile comms chip division.
17
wslh 4 hours ago 2 replies      
Why doesn't Intel just buy ARM?
18
int0x80 9 hours ago 1 reply      
I really think they are going to come up with something to get into the mobile/embedded market. It is a very important market. With their experience, fabrication process, and tech they could totally rip apart the competition.
19
mentos 6 hours ago 0 replies      
Is the future of computing in qubits?
24
You probably don't need a JavaScript framework slack-files.com
392 points by Svenskunganka  18 hours ago   316 comments top 59
1
edejong 17 hours ago 13 replies      
I still remember doing a PoC for an end-to-end secure messaging app with web support, around 3 years back. It was written in plain JS with jQuery and one or two libs for crypto bolted on. Simple, easy, but not written in a very maintenance-friendly way. Took around 2000 lines all in all. One of four clients (Android, iOS and a bot framework in Scala/Java).

Then the web-boys came in to rewrite my... well, contraption. In came Grunt, NPM, Angular, some CSS framework, unit testing. Much more, but I forgot the names of all of it, you know the drill. I have to admit, looking at each of these components independently, one could hardly argue with their usefulness, but together they buried a relatively simple and elegant messaging system in tons and tons of incoherent, unmaintained, inextensible and incompatible (with websockets at the time) stuff.

The line count went up, of course, easily to 15000 lines. I couldn't understand my own designs anymore, since they got spread out over dozens of different files. Refactoring became almost impossible. The web-boys, however, didn't understand asynchronous messaging, cryptography and eventual consistency very well, and we lost each other, building a Babylonian tower far away from our original goals.

The quality of your solution is not in your libraries, or your frameworks, instead it's in being fluent bottom-up before grabbing a library or two. You'll see you often don't even need them.

(edit: small typos, edit2: please explain down-votes, I'd like to know and be happy to answer any questions)

2
sod 4 hours ago 3 replies      
A) There are people that love frameworks. B) There are the ones that don't.

The type A ones have just given up in the face of the complexity and bury it under foreign frameworks.

The latter ones invent their own ecosystem. And are very productive with it. It's fast and beautiful. And you know every screw and bolt. There is a feature request? No problem, you know immediately how to solve it. I know many of them. And nearly all of them are the best I have ever known.

But here is the big "but". Would a type B programmer want to work with a 2-year-old project from another type B programmer? And there you have your answer to why there is such a huge appreciation for frameworks. It's easier to throw 10 programmers of type A onto the same project.

3
dwg 16 hours ago 2 replies      
To begin, React isn't a framework. It's a tool. Would you compare a table saw to a workshop? That doesn't make sense, and neither does comparing React to a framework, especially if you're saying you don't need one.

Furthermore, using a virtual DOM is not the purpose of React. Virtual DOM is merely a part of how React works. People don't buy cars to get an engine. They buy cars to get around places. People don't use React to get a virtual DOM engine. People use React to make building and maintaining apps easier.

Let's say you decide you're smarter than everyone who worked on React or some other library, and you can build your app without them. Great, if you're building a small app this is probably just fine. Scale your app up, however, and you're going to end up with a framework anyway. The only difference is that it will be your own framework, and chances are it will be difficult to understand, hard to maintain, and full of bugs, and you may even be stuck with it because of how much it would cost to change.

Using libraries, or even frameworks, is how you leverage the collective intelligence of dozens, hundreds, or even thousands of very smart minds (a few of whom might even be smarter than you). Especially proven ones with many successful projects using them.

You're welcome to give this up to be a cowboy, but me: I've been there, thought I was that smart, and realized how much better off I am by not trying to reinvent the wheel every time.

The rest of the article mentions other points which don't seem particularly related to framework decisions to me at all, so I'll let them be.

4
gjolund 17 hours ago 7 replies      
I'm really glad I don't have to work with people who think like this.

You don't NEED a framework, just like you could theoretically build a car from scratch.

If your app isn't complex enough to merit using a framework, then you aren't really building something worth talking about.

Frameworks do exactly what they say, provide a common framework for your team to work off of while they build a complex application.

I feel like most of the FUD around JS frameworks comes from simple people building simple apps and wondering why everything needs to be so complicated. Frameworks enforce structure, which becomes more and more important as your app grows in complexity.

Frankly it would be great for me if this train of thought picked up steam; plenty of contract work from companies who can't maintain their "homegrown" frameworks anymore.

5
keeganmccallum 3 hours ago 0 replies      
I think the flame war that has erupted has a pretty simple explanation: some people forget the state of javascript just a few years ago. I am currently in the process of moving my team away from the monolithic dojo framework towards using more of the vanilla web standards, but I think dojo was a great choice at the time. The pains of cross-browser support, terrible vanilla APIs that were completely different in IE8 vs the rest of the web, etc. made frameworks "the way to go" in the past, and that's set us off in the direction we are going today of immediately reaching for one. Articles like this one are great in that they are helping to educate people on the power and ease of use that the modern web has to offer, but I can 100% see why people are going to be skeptical for a while longer when it comes to using only vanilla javascript.
6
darawk 17 hours ago 3 replies      
The point of React isn't its performance (though that is nice). The point of React is the simplicity and composability of functional components.

You can create this on your own of course (and I have), but the solution of "just use the DOM" ignores an enormous amount of progress that React made with component design, not performance.

7
WalterSear 18 hours ago 5 replies      
You don't need one, but if you don't eventually adopt one, you'll end up writing one.
8
natrius 18 hours ago 3 replies      
I don't use React for speed. I use it because manipulating the DOM to make it reflect the application state is hard. It's something a program should do for me. React is that program.
9
etatoby 10 hours ago 1 reply      
The article's author does not seem to understand what React is for and why people use it.

The article is full of sentences like: it's true React is fast, but basic Javascript is even faster. Duh? It's true React is quite small, but if you don't use it, your site will be even smaller. No sh*t Sherlock?

React was invented to simplify the program structure of medium-to-large applications by replicating the successful data flow of the Web itself: namely, the state of the application is in one place only (the URL in the Web, the state object in React); HTML is generated from that state using one set of reusable and composable templates; and crucially, user events only affect the state, not the HTML itself.

Personally, I'm going back to basic HTML + progressive enhancement, for a variety of reasons. But React is the sanest of all the client-side architectures I've seen so far, including bare-bones web api.

10
Nitramp 17 hours ago 2 replies      
One thing many people underestimate is how easy it is to run into XSS, XSRF, XSSI when not using a framework, in particular XSS when using native DOM APIs ("location.href = ..." - pwned. "e.innerHTML = ..." - pwned. "a.href = ..." - pwned. "*.src = ..." - pwned.).

You might not need a framework, but you'll need a structured approach to avoid those problems, and frameworks can help a lot with security.
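To make the innerHTML case concrete, here is a minimal sketch in plain DOM code (the function names and the idea of rendering a user comment are made up for illustration):

  // Unsafe: untrusted text is parsed as markup, so a payload like
  // <img src=x onerror=alert(1)> will execute.
  function renderCommentUnsafe(el: HTMLElement, comment: string) {
    el.innerHTML = '<p>' + comment + '</p>';
  }

  // Safer: build nodes and assign untrusted data as text, never as markup.
  function renderComment(el: HTMLElement, comment: string) {
    const p = document.createElement('p');
    p.textContent = comment; // rendered literally, not parsed as HTML
    el.appendChild(p);
  }

Template layers in the bigger frameworks do this kind of escaping by default, which is a large part of the security benefit being described.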

11
vhiremath4 17 hours ago 2 replies      
"Another example is the Fetch API"... That API has very limited cross-browser support. One of the largest reasons web frameworks exist is to bridge the gaps between browser incompatibilities/offering legacy support.

I don't know if this person has even really scaled and deployed a web application that must work across several browsers - especially older browser versions. I always wonder that whenever I see people bashing frameworks. Like the author said, start with asking why the framework was created. Just make sure you come up with the right answer to that question next time.

12
statictype 18 hours ago 4 replies      
React doesn't exist to handle the massive amounts of data that facebook has.

React exists to make complex UIs easier to build and maintain.

Being able to just write the code once to render a UI from some state and then just reloading the entire UI when the state changes is an incredible simplification of your code.

That the author kind of misses this point makes me pay less attention to the rest of the post.

13
teen 18 hours ago 2 replies      
The cycle continues...

No framework -> Backbone -> Angular -> React -> No framework

We should be back towards using a simple MVC skeleton by the end of 2016.

14
educar 17 hours ago 1 reply      
The main point of frameworks/libraries is to set coding guidelines for the team. It's fine to go off and invent your own framework but this causes maintenance issues for the people coming in next. Most likely, there are no docs or examples for the code you just wrote (especially the framework bits). When a framework which has a community is chosen, these boring things like docs/examples/references/blog posts get fixed over time. So it's best to pick one so that the code is easier to maintain in the long run. Besides, the focus should be on the app and not the framework/library and writing your own is just distraction.
15
scope 16 hours ago 0 replies      
- shows an example using the MutationObserver API and fails to mention it only works in IE 11+

- follows the widely accepted format of "rant about some popular framework/library/language by mimicking a tiny portion of it"

- uses shock factor to get the reader's attention

> React, Virtual DOM, Webpack, TypeScript, JSX... HOLD ON! Let's pause for a second here and think.

YES, you don't always need a framework/library, BUT if you want to build something maintainable & scalable, it's better to opt for battle-tested approaches (which is especially true when it comes to the web, with its browser quirks)

16
Svenskunganka 16 hours ago 1 reply      
I apologize for not addressing your concerns with the post earlier; I've just added an update to the bottom of the post with some replies to some comments I've read.

Please keep in mind that my intent is not to mock you or your framework of choice. My goal with this post is to try to inspire you to try building something with the native DOM and Web API and see for yourself. Some may enjoy it, others may not. And if you feel comfortable with your current way of building your web applications with a framework, that's completely fine.

Also, please keep in mind that I cannot address every use-case in a single post. As much as I'd like to, that's going to take way too much time.

17
seanwilson 3 hours ago 0 replies      
So instead of using an established framework the post is suggesting possibly using these things?

MutationObserver for UI updates + Fetch API for networking + History API and a custom router for routing + CSS for animations + HTTP/2 instead of Webpack + npm for building + JSPM for package management + multiple polyfills for browser support for all this

Besides not having a name and a dedicated site, how is the above any different from using a framework? Once you've worked out how to get all these things to play nice together, are you really saving any time from just using an established framework that's done the work for you in the first place (i.e. they'll use some of this stuff under the hood already)?

Just because you're leaning on native features instead of library features doesn't make it any different from using a framework. At least with an existing framework you know it's battle tested, has good documentation, has an active community and is easier to maintain for new programmers on the team.
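To make the comparison concrete, the no-framework stack being described boils down to glue code along these lines (a rough sketch only; the routes, views and endpoint are made up):

  // Stub views; a real app would render proper markup here.
  const view = (text: string) => () => { document.body.textContent = text; };
  const routes: { [path: string]: () => void } = { '/': view('home'), '/about': view('about') };
  const notFound = view('not found');

  // Routing via the History API instead of a router library.
  function navigate(path: string) {
    history.pushState(null, '', path);
    (routes[path] || notFound)();
  }
  window.addEventListener('popstate', () => (routes[location.pathname] || notFound)());

  // Networking via the Fetch API instead of a wrapper library.
  fetch('/api/items')                   // made-up endpoint
    .then(res => res.json())
    .then(items => console.log(items));

Whether maintaining that glue yourself really beats adopting a framework that already ships (and documents) it is exactly the trade-off being questioned here.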

18
zwily 15 hours ago 0 replies      
I wrote dbmon at Instructure (first with jquery, then in ember, then in react, where it has been for a couple years) to give us a birds eye view of a bunch of databases. It's been kind of surreal to watch it blow up into what it has.
19
mortenjorck 17 hours ago 3 replies      
This might be the post that convinces me to write a companion piece on why you probably don't need a CSS framework, either. Browser renderers have come so far since Bootstrap's peak: Flexbox has all but obviated its primary value proposition, and when CSS grids make it out of RFC, using Bootstrap for layout will be like using Backbone to do rollover styling.
20
blue_dinner 17 hours ago 0 replies      
If you don't use one and have a complex app, it will eventually turn into a spaghetti mess of callbacks. You will end up writing your own framework that hasn't been battle tested on thousands of sites.

For simple apps, it's probably not needed. The issue also is that your app could be really simple now, but have features added to it regularly, making it complex.

21
mhd 11 hours ago 0 replies      
It's interesting that by comparison, a lot of these modern JS frameworks are positively tiny. No comparison to e.g. Cocoa, QT, MFC, Rails, Spring and all the things usually called the f-word. React itself barely qualifies for "library", more like "helper functions".

So, having started outside the web world, I really don't understand the hullabaloo. It reminds me of a certain point in Windows development, when shareware etc. was a thing, but download speeds, RAM and disk space were still low. Creating smaller programs was a thing for some developers, both for distribution and an alleged "bare metal" feel (whether they were that much better than e.g. a big package including all of Tcl/Tk is another matter). Programming Win32 with assembly. The ATL. Compressed executables. People recommending a new weird command-line oriented operating system made by some crazy Finn.

Both ATL and Win32ASM bring in yet another feature: Programming in my favorite, possibly new idiom. Another reason for creating these mini-frameworks in the JS world. All glued together by slapdash micro-libs and utilities, that might be there one day, might be gone tomorrow.

I might finally be getting old and cranky, but I actually feel that some bigger frameworks wouldn't hurt. Some kind of standard library included, given that this won't come out of standards or the global community. More opinions. A set of standard components/widgets, not just three score methods of displaying them and two score of connecting them to a score of different backends.

Go all in: If you really want to just use the web browser as a delivery system for all kinds of GUIs, give me a GUI framework. I think that strangely enough ExtJS used to have more competition there...

Either that, or make more use out of the old-fashioned tenets of HTML, do more with text and links and stick to progressive enhancement. The middle ground of shiny but tiny GUIs that seems to be the rule for modern "SPAs" is neither fish nor flesh. (And I don't think that "isomorphic apps" is the answer here, either.)

22
TheCoelacanth 17 hours ago 0 replies      
You don't need a JavaScript framework. You just need a bunch of stuff that you can't use because it isn't supported in IE9.

> HTTP/2 is widely supported by web browsers already. At the time of writing, 70.15% of visitors has support for the updated protocol.

Oh yay, so only another 5-10 years until I can use it.

23
carsongross 17 hours ago 0 replies      
Most people probably don't need to write much app-land javascript at all to have reasonable UX:

http://intercoolerjs.org

Bonus: you get HATEOAS without knowing what HATEOAS means.

24
tomc1985 17 hours ago 2 replies      
This! This times a million times!!~!

Seriously. The javascript world is INSANE. All these frameworks and few truly need them... node.js package management is a mess... yet how many of these coders will take the time to learn real ES5? It really isn't that hard of a language.

I feel like calling oneself a "javascript" expert practically requires knowledge of a framework or two. This is saddening, because Javascript itself is a rich language with a ridiculously fast runtime, yet everyone wants to work with a framework when a head full of sound theory and experience will do :/

25
CameronBanga 17 hours ago 2 replies      
Are Slack Files an official blog from Slack? For a second, I thought looking at the URL that this was just a URL for a Slack snippet or something similar.
26
frik 12 hours ago 0 replies      
I wrote about this approach about 4 years ago; it's often called vanilla JS.

Back then everyone on StackOverflow was still completely bought into the jQuery mission. I often see that when people don't fully understand something, they choose a framework.

27
calsy 12 hours ago 0 replies      
Is it possible for someone to point me in the direction of a nice JS project template that does not use a framework?

I would still like all the core elements of a decent JS project such as package manager, bundler, templating BUT without the overkill of a new framework. A project template that allows you to simply write HTML, CSS and JS modules in an organised fashion.

A reference to a Github repo or online doc would be awesome. With so much JS news and information these days its really hard to find any info without the words react or angular or whatever. Appreciate any help from you knowledgable folks.

28
zer00eyz 18 hours ago 1 reply      
I have been avoiding JS for years (not out of hate, just the job) and every time I dabble in the node/js framework ecosystem I end up turning away frustrated.

I have a few small personal projects that I haven't started due to lack of desire in building the front end I want, after reading this I'm going to give them a shot and see how far I can go with out all the goodies!

29
hawkice 15 hours ago 0 replies      
For those who read the comment about Object.observe: Don't use that. It's being removed from everywhere it exists, and pulled from the standards recommendation. Just have a set(state) function that calls your callback instead, it's like three lines of code.
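Roughly, the handful of lines being described (a sketch only; the names are arbitrary):

  let state: object = {};
  const listeners: Array<(s: object) => void> = [];

  function setState(patch: object) {
    state = Object.assign({}, state, patch); // replace rather than mutate
    listeners.forEach(fn => fn(state));      // notify whoever is watching
  }

  // Usage: subscribe, then update.
  listeners.push(s => console.log('state is now', s));
  setState({ count: 1 });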
30
dmix 15 hours ago 0 replies      
One of the big ones is build tools.

You don't need Gulp/Grunt/etc for 90% of the stuff I've seen it used for. They could be replaced with a Makefile plus a bunch of standard unix utilities.

You shouldn't have to install 90 libraries in order to build/install/run your program.

31
bikamonki 16 hours ago 0 replies      
This eternal argument of js framework/non-framework reminds me of them old days of static html when Dreamweaver will just munch a PSD file and spit out working HTML + Images + CSS. Some of us hated it b/c the output was bloated, super hard to read/maintain, etc. So, some of us just went ahead and coded from scratch. I guess the preference has to do with a need to 'understand' what's going on. Tools like React or frameworks like Angular are just the modern Dreamweavers: don't mind the inner-workings, just follow the rules and get the desired output.

If I can recommend a js framework so flexible it feels like a non-framework, I'd say go and learn Backbone.js. It's like a flat green Lego table: no assumptions, just structure.

32
__s 6 hours ago 0 replies      
openEtG doesn't even use jQuery, it's been alright. Only library it uses is pixijs, & a slimmed down version at that. Originally everything went through canvas/pixijs but over time only a few views aren't DOM

http://etg.dek.im for site

http://github.com/serprex/openEtG for source

It does a lot of dynamic content generation, which dom.js has proved sufficient for

33
pfooti 17 hours ago 0 replies      
So, I've been working with rich text editors in particular recently, and I can tell you one thing: contenteditable is a gigantic pain in the rear. It's uneven across platforms what it transforms user intent into (control-B: do you get a <b> or a <strong>?), it handles paste events sloppily, and marking inner fragments as not contenteditable (say an image embed, atmention or some other block) is weird, especially if that item is the last thing in the list.

Contenteditable is so annoying that the first thing most rich text editors seem to do is make their own selection state manager, because the native range implementation in the DOM is crap. Medium in particular pretty loudly got rid of it entirely, and that's the direction most modern web-based rich text editors are going.

So. The best thing to do here is find some kind of framework that helps you map from user intent into mutating a data model, and a separate way to translate that data model into the DOM. This sounds like a framework to me, and it especially sounds like one if you want it to work on lots of platforms.

Similarly, sure: I don't need a framework to do DOM manipulation. But I've been using angular (1.x) for a few years now, and I can say: it makes life a lot easier. If I don't have to debug lines of javascript looking for where different elements get injected into the DOM, I'm happy. I can just look at the template file for the html code I'm editing and see where classes get programatically changed based on state, not because I'm deep inside some MutationObserver, but because I can read an ng-class directive. Easy pie.

I sure don't need a framework. I could also program in ones and zeroes. Speaking as someone who's been doing web stuff since the 90s, where the first framework I ever wrote was a bunch of DIY C libraries to make writing cgi-scripts easier, I can say with all honesty: using a framework makes life easier.

Sure, it's annoying to learn, and sometimes the opinions of the framework are different from your opinions. That's why I use angular instead of ember (because I'm not a rails person, for the most part). Take your pick, learn it well, and it will make your life easier.

And if you're that worried about the weight of your library, stop loading so many hero images uncompressed.

(I will grant edge cases where library size is a big deal, but it's more of a premature optimization thing in most places, especially now that tree shaking is starting to make its way back into our build tooling).

34
kup0 16 hours ago 1 reply      
I don't think frameworks and tools are the problem. I have dealt with the same frustrations about these huge tool-chains but I've always seen it all boil down to one key point: the problem is not the tools- it is not choosing the right tools for the project at hand. Then you get websites and apps that really bloat up because of the overkill or poor choices.

There's the whole side issue of progressive enhancement and/or making a non-JS version of a site, but I feel like those ideals are now dead- whether it's due to it being unfeasible, unaffordable, or something else.

35
_alexander_ 16 hours ago 1 reply      
OK, we don't need frameworks; but then what should we use to build complex UI applications? Our own solutions? And now imagine that in every project you build your own skeleton and architecture...
36
matchagaucho 11 hours ago 0 replies      
Brilliant. I get burned by cross browser issues when going without frameworks. Would love to work exclusively with Web API.
37
sshbio 18 hours ago 0 replies      
I like this way to do, preventing complexity.

There may still be things that cannot be achieved without JavaScript, but aside from those cases, I would even add "you (probably) don't need JavaScript".

38
namuol 14 hours ago 0 replies      
React is popular because it re-frames the meat and potatoes of frontend dev as a simple input -> output problem. VDOM merely helps make this approach performant at scale - it has little to do with why people actually use React.

Abusing MutationObserver to get "easy" 2-way data binding is precisely the sort of antipattern that React & Friends would have discouraged.

39
moron4hire 2 hours ago 0 replies      
I'm getting so tired of this conversation. Use a framework. Don't use a framework. Just build something already. Who cares how you made it? I'm so sick of getting asked "what framework did you use?" I'm so sick of seeing people pitch their projects as "ProjectName: built with FrameworkX and LibraryY", with zero indication what the project does. If the most interesting part of the project to you is the tools with which it's built, you're not an entrepreneur, you're a fetishist.
40
evan_ 16 hours ago 0 replies      
This looks like a great blueprint to spend way too much time writing an app that is un-maintainable and only works in a few browsers.
41
codeisawesome 14 hours ago 0 replies      
I don't agree with the sum of its parts, but I like some of the... components... in this article.
42
hellofunk 7 hours ago 2 replies      
>Virtual DOM is efficient and has good performance, but native DOM interaction is insanely fast and your users will not notice any difference.

As soon as I saw this, I knew there were limits to this author's knowledge on the subject. React was a requirement for the low-latency UI we built for a major commercial app and without it (or a similar library) the user experience we shot for would have been impossible.

43
callesgg 9 hours ago 0 replies      
I might not need a framework but that does not mean that there is no value to it.

I don't really need a smartphone, but it is quite useful.

44
CommanderData 11 hours ago 0 replies      
You 'probably' don't need one, but you 'should' use one to save yourself the countless hours of reinventing wheels that frameworks have already solved.

The best thing is to try and use minimal frameworks and libraries and justify every single one - Which I'm sure most devs already do. jQuery is still rather justifiable. Yes, you can replace functions with JavaScript today but you'll end up with similar looking functions.

I recently added Transit to animate using CSS3; the alternative would be to write out each animation effect I use in CSS3 by hand, down to the duration! If I need to change the easing I spend 5x the time doing so - from a dev's perspective: no, just no.

45
intrasight 17 hours ago 0 replies      
Back in the late 90s a bunch of us created an architecture we named AJAX. Now in 2016 a bunch of us are still using AJAX to build web apps that work just fine and are probably faster than these "frameworks".
46
hmsync 16 hours ago 0 replies      
In some special environments, such as Electron (Atom shell), where the cross-browser problem can be ignored, writing applications using the native Web API, DOM API, and ES6 is elegant and effective.
47
jrochkind1 14 hours ago 0 replies      
I really wish I didn't need a JavaScript framework, but I find this essay unpersuasive. He doesn't seem to deal with the same kinds of problems I do. But I like that he's making the argument, and trying to pull out the parts where the web API has gotten a lot better. It's approaching, maybe. I really do wish.

If we're lucky, maybe we can come up with something lighter weight (conceptually, and in terms of build tools needed, as much as in byte size) that is closer to the built-in browser API. I definitely don't feel like I can do what I need in a sane way with nothing but the browser API though.

48
apeace 17 hours ago 0 replies      
contenteditable and MutationObserver? What about <input> and onchange?
49
bitwize 17 hours ago 0 replies      
If you're looking for work and the hiring manager fishes for names of JavaScript frameworks you've used, you need a JavaScript framework.
50
nbevans 7 hours ago 0 replies      
Whilst predictably this HN thread has turned into a bit of a flame war, I would like to point out that the article certainly made one indisputable point: HTTP/2 is going to change how modern web apps are written all over again. So then we'll all need another new framework.
51
sorpaas 17 hours ago 1 reply      
Greenspun's tenth rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Now let's add Javascript to the list.

52
bricss 18 hours ago 0 replies      
Absolutely agree. I also want to note that React is only the V in MVC, yet it now weighs more than Angular. Backbone can do everything React can, and twice as much on top.
53
dandare 11 hours ago 0 replies      
If the author thinks TypeScript is a framework there is no need to waste time reading the article.
54
pier25 16 hours ago 0 replies      
Yeah why not? Let's reinvent the wheel every fucking time.

I'm sure your clients and your family will love that.

55
AlwaysBCoding 17 hours ago 0 replies      
You probably don't need Redux* but React is awesome
56
proc0 16 hours ago 0 replies      
npm install --save-dev vanilla-js-loader
57
petecox 16 hours ago 1 reply      
vanilla-js.com
59
joesmo 15 hours ago 0 replies      
While we're at it, let's just write everything in spaghetti ASP because you don't need a [insert programming language here] framework.
25
DeepMind moves to TensorFlow googleresearch.blogspot.com
393 points by hektik  1 day ago   75 comments top 18
1
aab0 1 day ago 2 replies      
This is great news! One of the most intimidating things about getting started with deep learning if you want to understand and extend cutting-edge work is the Tower of Babel situation: aside from mastering some quite difficult and opaque concepts, you need to learn multiple frameworks in multiple languages, some of which are quite uncommon. (Want to use Torch? You have to learn Lua. Want to use Theano or Caffe? Need to learn Python too. Need to implement optimizations? Hope you've mastered C++.)

And DeepMind's research output was a major reason to need to use Torch, and hence have to learn Lua.

But by switching over to TensorFlow, this means you now have one language to learn which is supported well by all the major frameworks - Python - and you can benefit from several frameworks (Theano, Keras, TensorFlow). So the language barrier is reduced and you can focus on the framework and actual NN stuff. Further, this will also drive consolidation onto TensorFlow, reducing the framework mental overhead. As long as TF is up to the job, and it reportedly is, this will benefit the deep learning community considerably.

I'd been wondering myself what language and framework I should focus on when I start studying NNs, and this settles it for me: Python and TensorFlow.

2
fchollet 1 day ago 5 replies      
If anyone wants to switch to TensorFlow but misses the Torch interface, you will always have Keras: https://github.com/fchollet/keras
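For anyone who hasn't seen it: Keras sits on top of TensorFlow (or Theano) and keeps the layer-by-layer feel that Torch users are used to. A minimal sketch, assuming the Keras 1.x API of the time and made-up layer sizes:

    from keras.models import Sequential
    from keras.layers import Dense

    # Define a small feed-forward classifier layer by layer.
    model = Sequential()
    model.add(Dense(64, input_dim=100, activation='relu'))   # hidden layer
    model.add(Dense(10, activation='softmax'))                # output layer
    model.compile(optimizer='sgd', loss='categorical_crossentropy')
    # model.fit(X_train, y_train, nb_epoch=10)  # train on your own data
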
3
argonaut 1 day ago 0 replies      
I like these comments on the Reddit discussion: it's not like DeepMind ever really open sourced anything (other than their Atari code from years ago).

A Google team switching over to a product maintained by another Google team makes a lot of sense for the team. They get instant development/deployment/infra support and huge control over the development roadmap.

Hopefully this motivates them to open source much more...

4
vonnik 1 day ago 1 reply      
To be clear, TensorFlow is about a lot more than deep learning. It's a distributed math library, a bit like Theano. Its ultimate rivals in the Python ecosystem are NumPy and SciPy and even scikit-learn. You'll see the TF team implement a lot more algorithms on top of their numerical computing eventually. (In the JVM world, I work on ND4J -- http://ND4J.org -- and we see a lot of similarities, which is why I bring this up.)
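To make that point concrete, here's a tiny sketch of TensorFlow used as nothing more than a numerical library (graph/Session API of the TF 0.x era, no neural nets involved):

    import tensorflow as tf

    a = tf.constant([[1.0, 2.0],
                     [3.0, 4.0]])
    b = tf.matmul(a, tf.transpose(a))   # build a graph node computing A * A^T

    with tf.Session() as sess:
        print(sess.run(b))              # [[ 5. 11.] [11. 25.]]
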
5
Smerity 1 day ago 0 replies      
This is a pleasant surprise. The more people that work on TensorFlow the better, especially as the DeepMind team will be more aligned with extending TensorFlow's research potential.

I am curious how well TensorFlow fits for many of DeepMind's tasks though. Much of their recent work has been in reinforcement algorithms and hard stochastic decision tasks (think gradient approximation via Monte Carlo simulations rather than exactly computed gradients) which TensorFlow hasn't traditionally been used for.

Has anyone seen TensorFlow efficiently used for such tasks? I'm hoping that DeepMind will release models showing me what I've been doing wrong! =]

(note: I produce novel models in TensorFlow for research but they're mostly fully differentiable end-to-end backpropagation tasks - I might have just missed how to apply it efficiently to these other domains)

6
eoinmurray92 1 day ago 3 replies      
TensorFlow is the machine learning codebase, but typically how do machine learning research teams manage their training sets, dataset metadata and collaboration on these large datasets?
7
Ferver777 4 hours ago 0 replies      
This is huge news for the AI space. May move things forward a couple of years.
8
tdaltonc 1 day ago 1 reply      
This is a very money-where-their-mouth-is move. Like they said, moving away from Torch is a big deal.

I know that google has been criticized for not dog-fooding GCS, does anyone know if that has changed? For example, does DeepMind use it?

9
sandGorgon 1 day ago 1 reply      
Anyone know whether they are primarily working on Python 2 or 3?
10
ya3r 23 hours ago 0 replies      
I guess this (switching from Torch to other deep learning libraries) will become a trend as deep learning becomes more mainstream in tech companies. I'd say Facebook, Twitter, and others who use Torch (I don't know of any others, actually) will move away from Torch gradually, unless the Torch community steps its game up.
11
SixSigma 1 day ago 2 replies      
Stanford's CS224d: Deep Learning for Natural Language Processing uses TensorFlow. Although they have only just got up to the part where they are beginning to use it.

Here's the "Introduction to TensorFlow" lecture.

https://www.youtube.com/watch?v=L8Y2_Cq2X5s

You don't need to watch the previous 6 lectures to make sense of it but it would help if you knew a bit (but not super detail) about neural nets e.g. the terms forward propagation, backward propagation and gradient descent of neural networks mean something to you.

http://cs224d.stanford.edu/syllabus.html
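For readers wondering what those terms look like in TensorFlow code, the classic toy example is fitting a line with gradient descent. A rough sketch using the 2016-era API (tf.initialize_all_variables was later renamed):

    import numpy as np
    import tensorflow as tf

    x_data = np.random.rand(100).astype(np.float32)
    y_data = 3.0 * x_data + 2.0                     # ground truth: w=3, b=2

    w = tf.Variable(0.0)
    b = tf.Variable(0.0)
    y = w * x_data + b                              # forward propagation
    loss = tf.reduce_mean(tf.square(y - y_data))    # mean squared error

    train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)  # backprop + update step

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for step in range(200):
            sess.run(train)
        print(sess.run([w, b]))                     # approaches [3.0, 2.0]
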

12
swah 1 day ago 5 replies      
I'm a layman but I find it quite interesting that a big release such as TensorFlow doesn't affect more people outside Google - or at least that's my impression. One would think, at least, that online store recommendations would become better or something like that.
13
cft 23 hours ago 6 replies      
Why is it called Tensor flow? Do the multi-dimensional matrices that exchange data between the nodes transform like tensors? If so, when does the need arise to transform them?
14
jonbarker 19 hours ago 0 replies      
So when do we get to see the alphago code?
15
bawana 1 day ago 0 replies      
I guess they don't want to be under Facebook's thumb (didn't they invent Torch?)
16
deepnet 5 hours ago 0 replies      
NVidia's NVCC has performance & compile time issues with Tensorflow.[1]

NVCC vs GPUCC benchmarks 8% - 250% slower compilation & 3.7% - 51% slower runtimes.[2]

Google use GPUCC internally so weren't optimising for NVCC.

LLVM based GPUCC is the 1st fully open source toolchain for CUDA.

Google announced that the guts of GPUCC will make their way into CLANG.

[1] https://plus.google.com/+VincentVanhoucke/posts/6RQmgqcmx2d
[2] http://research.google.com/pubs/pub45226.html

17
yarou 1 day ago 0 replies      
I think the neat thing about Google is the high degree of cross-fertilization between teams. In many organizations, teams rarely share information, either due to political reasons or a lack of sharing culture in the company as a whole. That being said, this framework/API change doesn't really surprise me; DeepMind was more a proof-of-concept than an actual battle-tested framework, unlike TensorFlow. So in that sense this news isn't surprising at all.
18
mtgx 1 day ago 0 replies      
Should we be worried or glad that a potential future Skynet is written in C++?
26
Bitcoin's $137k Jackpot hackingdistributed.com
289 points by cynthiar  23 hours ago   135 comments top 17
1
ryan-c 22 hours ago 3 replies      
> Remember that time when you tried to transfer your life savings from one bank account to another for a small fee, but swapped the fee field with the total transfer amount field, and ended up losing all your life savings? Of course you don't. There are safeguards to catch and prevent these kinds of errors.>> But this is a common occurrence in Bitcoin-land.

This mistake is difficult, if not impossible, to make with standard Bitcoin software. The fee field is usually pre-filled and somewhat tricky to change, and newer versions of bitcoin-core will block transactions like this as obviously wrong. It should also be noted the bitcoin fees are implicit in transactions - a transaction will have some amount of funds going in, and some amount going out, and anything left over is the fee. Usually these types of mistakes actually happen when someone manually crafts a transaction and forgets that they need to make a change output for themselves.

It is obviously still possible to create transactions like this, but it currently requires using advanced interfaces and deliberately bypassing safety measures.
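To illustrate the implicit-fee point: there is no fee field at all in a raw transaction; the fee is simply whatever the outputs fail to account for. A rough sketch of the kind of sanity check wallet software can apply (values in satoshis, all numbers made up):

    def implicit_fee(input_values, output_values):
        # The fee is never stated explicitly; it's inputs minus outputs.
        return sum(input_values) - sum(output_values)

    inputs = [300000000]      # one 3 BTC input being spent
    outputs = [100000000]     # pay 1 BTC... but the change output was forgotten

    fee = implicit_fee(inputs, outputs)
    assert fee >= 0, "outputs exceed inputs: invalid transaction"
    if fee > 10000000:        # arbitrary 0.1 BTC threshold for this sketch
        print("Warning: fee of %d satoshis looks absurdly large" % fee)
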

2
hughes 22 hours ago 5 replies      
This reminds me of a similar error from 2005[1] where a trader mistook the "price" and "quantity" fields of the trading software. Instead of selling 1 share for 610,000 yen, 610,000 shares were sold for 1 yen.

The mistake cost around $225 million.

[1] http://www.foxnews.com/story/2005/12/09/typing-error-causes-...

3
drostie 19 hours ago 3 replies      
Can someone else confirm to me that I'm not crazy and the central distinction of this article is totally bogus?

Reasoning: BitCoin isn't, to my knowledge, a scheme where some private identifier is stored inside each "coin" whose ownership is revealed with a zero-knowledge proof; it's simply one where you have public and private keys and use those private keys to sign transactions saying "Take X1 out of my public key K1 and put X2 in public key K2, with X1 - X2 going to the miner." That is, BitCoins themselves, as I understand them, are just points in a big distributed videogame: they do not represent actual packets of data which are individually 'minted' in mining and stored on your computer until you spend them.

If that's correct, then there's no distinction if you tumble-via-miners versus tumble-via-tumblers -- either way the coins are 'freshly minted'; there's simply no other sort of coin.

The only real thing that you seem to be able to do here is to tumble via both the tumblers and the miners, which might create some binary tree of complexity if someone tries to "follow the money" -- but that doesn't seem to be what the author is saying.

Is there legitimately something in each coin which makes it easier to follow a given bitcoin via tumbling than through miner-money-laundering, or are they really just the same thing performed through different channels, with a much slower rate of success for the MML tumbling?

4
mrpopo 22 hours ago 0 replies      
You can see a half-dozen of "mega-fees" on this graph[0]. The public nature of bitcoin transactions is very interesting.

[0] https://blockchain.info/charts/transaction-fees?timespan=all...

5
xigency 22 hours ago 5 replies      
Sort of unrelated but,

"According to my calculation, a single Bitcoin transaction uses roughly enough electricity to power 1.57 American households for a day."

http://motherboard.vice.com/read/bitcoin-is-unsustainable

I find this interesting thinking about Bitcoin as a currency. The first cryptographic currency example that I had read, from a cryptography book, didn't involve active power. I'm really surprised that Bitcoin has been able to gain this much popularity with its design. Does this explain how you might be able to lose a household-sized amount of money in a transaction on this network? I don't know, but I think that the blockchain idea might not be the ideal for digital currencies.

6
Bedon292 22 hours ago 6 replies      
Obviously I am missing something on this. How do you direct the transaction fee to a specific miner? I thought it just went out there for anyone to process. If you can, why don't people just direct all their transaction fees at their own mining operation or a friend they trust?
7
ChuckMcM 21 hours ago 1 reply      
"exogeneous enforcement mechanisms." -- Chortle, I really need one of those :-). I had not been aware of how the miners could launder bitcoin, that seems like a pretty big hole you can drive a lot of bitcoin through.
8
vonklaus 21 hours ago 1 reply      
Is MML(miner money laundering) a viable concept? I thought that the transaction blocks were randomly distributed so it would be hard for a launderer to "target" a miner they trust. If someone has a bit more technical insight about whether this makes sense, or is viable I would be interested.

I would also contend that even if it were trivially easy to do, it would probably make more sense to just route transactions to various wallets and then to some exchanges and into other currencies, etc. I don't know if the miner thing makes sense.

9
seanalltogether 7 hours ago 0 replies      
Wouldn't sending a transaction directly to the miner be more reliable than trying to hide it in the fee? If the goal is to hide your money in the high volume of BTC that a big-name miner processes per day, why not just send it straight to their wallet?

From an investigators perspective there is no difference between the two scenarios.

1. Miner A has received 500 BTC from Suspect B over the past year and may be laundering it.

2. Miner A has processed 500 BTC in transaction fees from Suspect B over the past year and may be laundering it.

10
placeybordeaux 22 hours ago 0 replies      
Pretty bad article, but I couldn't find any better. Looks like a solid chunk of the money traces pretty cleanly back to tradeBTC.

I'd really like to find a good tool for exploring the graph of transactions. Most of the funding of the account that lost all the money has a similar pattern where a couple BTC is peeled off at each step and the majority goes forward in the chain till it all pools at 1QgTYzMYqStzZBQx8gguYaJQMjFRbagbh and gets spent. I wonder if that's just what TradeBTC transactions look like.

12
thevibesman 22 hours ago 1 reply      
Thanks for the post, I enjoyed the read (haven't read much about bitcoin laundering).

One small nitpick:

> My payment is going to be with newly mined coins, the Bitcoin equivalent of fresh, crisp dollar bills straight from the Mint.

Coins are minted, bills are printed, so only coins come from the U.S. Mint. Paper bank notes and stamps come straight from the Bureau of Engraving and Printing (way less fun to say though!).

13
LAMike 22 hours ago 0 replies      
Some people say it was an act of money laundering by the pool that mined it.
14
theandrewbailey 21 hours ago 0 replies      
> Remember that time when you tried to transfer your life savings from one bank account to another for a small fee, but swapped the fee field with the total transfer amount field, and ended up losing all your life savings? Of course you don't.

Of course I don't. I know banks can have terrible design, but I've never seen a 'fee' field on any transfer form. (maybe because I use credit unions?) That bank might as well tell everyone 'please let us eat even more of your money on top of the bogus fees we already charge you'. And don't real banks have regulations preventing mess ups like these?

15
maxerickson 21 hours ago 1 reply      
Wouldn't the MML leave lots of traces in the blockchain, a miner collecting unusually high fees over time?
16
taesu 22 hours ago 2 replies      
I'd definitely keep that amount if I won it through mining. It's really hard to swap `amount` with `fee` in code, so I bet this was a human mistake when sending the BTC, not a coding mistake...
17
known 14 hours ago 0 replies      
27
ES6, ES7, and beyond v8project.blogspot.com
229 points by gsathya_hn  23 hours ago   67 comments top 11
1
dherman 21 hours ago 2 replies      
(TC39 and Mozilla member here.)

FWIW, this:

 For these reasons, the V8 team, along with TC39 committee members from Mozilla and Microsoft, strongly support denoting proper tail calls by special syntax.
is misrepresentative of the state of consensus and frankly premature. In particular, while I won't try to speak for all of my colleagues (we do not pre-determine an official Mozilla position in advance of standards discussion) I don't think it's the case that any of the Mozilla representatives "strongly support" syntactic tail calls, and I personally have reservations. It hasn't gotten beyond being an area of open exploration.

All that said, I'm interested to see where the explorations go, and I'm looking forward to subsequent discussions.

2
spdionis 21 hours ago 2 replies      
> String.prototype.padStart() / String.prototype.padEnd()

Finally, the left-pad module's functionality will be integrated into core! Sounds awesome. /s

3
grayrest 21 hours ago 2 replies      
So in looking at the Kangax tables when the next Safari comes out, it looks like all major desktop browsers will have support of the big ticket ES6 features (arrow functions, destructuring, default params, Map/Set/WeakMap, Proxies). Unless you're doing JSX, type annotations, or upcoming stuff (async/await, decorators, object spread, class properties) you can drop transpiling from your dev workflow and only do it for prod.
4
grifter2000 21 hours ago 5 replies      
PLEASE don't do the awkward .mjs thing. I and prob most devs will simply not respect it.
5
faide 21 hours ago 3 replies      
> http://tc39.github.io/proposal-string-pad-start-end/

Not to beat a dead horse, but I find it hilarious that this is a proposal. Someone on TC39 has a sense of humor.

Edit: Apparently this was proposed long before left-pad broke the internet.

6
apo 19 hours ago 0 replies      
>Although additional standardization work is needed to specify advanced dynamic module-loading APIs, Chromium support for module script tags is already in development.

Of all the future developments in V8, this is what I'm looking forward to the most. It's the last key component tying most code to transpilers.

7
raarts 21 hours ago 0 replies      
Is there already a proposal for (optional) type safety? So we can assimilate the typescript fans?
8
lucasmullens 22 hours ago 6 replies      
What happened to the year-based naming convention? I thought we switched to ES2015 and ES2016?
9
eknkc 19 hours ago 2 replies      
Seems that there's no mention of async functions, and no browser support apart from Edge. Are there any concerns about its availability in ES7 at this point?
10
hajile 19 hours ago 2 replies      
The two reasons for tail calls are loops and CPS. Nobody is championing the idea that we need traces for each loop iteration. I would somewhat understand the CPS argument except that the event loop already destroys a huge amount of meaningful stack traces anyway (while not identical, it is somewhat similar). I don't see anyone insisting on stack traces there either.

Why do we need explicit tail calls with stack traces?

11
_mikz 22 hours ago 4 replies      
Unreadable in mobile safari. Way to go.
28
Harvard Institute of Technology thecrimson.com
6 points by badboyboyce  1 hour ago   discuss
29
Show HN: TeachCraft Learning Python Through Minecraft github.com
116 points by emeth  18 hours ago   24 comments top 8
1
jackhack 5 hours ago 0 replies      
Is this project related to the book "Learn to Program with Minecraft: Transform Your World with the Power of Python"? http://www.amazon.com/Learn-Program-Minecraft-Transform-Pyth... I ask because the concept appears to be substantially similar.

re: the approach -- My kids are crazy about Minecraft, and being able to build very simple python programs that modify the world (build block structures, control creatures, etc.) is much more gentle yet engaging than the typical programming 101 tasks.
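For anyone curious what those "very simple python programs that modify the world" look like, here's a small sketch against the mcpi-style API (the server address, port, and block IDs are assumptions; the project's own setup docs give the real values):

    from mcpi.minecraft import Minecraft

    mc = Minecraft.create("localhost", 4711)   # connect to the local game server
    pos = mc.player.getTilePos()               # the player's current block position

    # Build a 5x5 stone platform under and around the player.
    for dx in range(5):
        for dz in range(5):
            mc.setBlock(pos.x + dx, pos.y - 1, pos.z + dz, 1)   # block id 1 = stone

    mc.postToChat("Platform built!")
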

2
markdavis33 3 hours ago 0 replies      
Hey emeth...thanks for making this awesome project. As a father of 2 sons (aged 9 and 11, both Minecraft fanatics) this is a perfect platform to get them interested in coding. Starting with Python is a nice easy on-ramp for them, and they'll even start using GitHub with this project, which is great too. Keep up the cool work :)
3
bpchaps 13 hours ago 0 replies      
Neat, I'd also recommend something like opencomputers for learning. The learning curve and asinine amount of time it takes to get started is a bit of a hurdle or thirty, but it helps to have more than one person involved.

At the end of my adventure with it, I had some code that would read a png from github from minecraft, then use the 3d printer to create a series of blocks by using the 'pixels' from all server allowed blocks. The personal requirement was to not do any sort of color manipulation and only use what was available. The pixel position from block to block doesn't change, and transparency could be coded outside of the png, so it became pretty damn difficult really quickly.

In the process I got to learn about png, cv2, jit (to pull the available blocks' pngs and eventually for an attempt in finding consistent transparency logic. Flowers.....), minecraft's internals, voxels, some lua, more python and some interesting algorithm stuff.

Life got in the way and I never actually finished, but the block directly to the left of the cursor was the last block created. Despite its ugliness, I'm actually pretty pleased with it. https://imgur.com/dqwnnL2 which is part of https://imgur.com/gallery/giajLha (dickbutt baby, nsfw kinda)

4
hakcermani 16 hours ago 1 reply      
This is awesome. Much more engaging than finding the first 100 primes.
5
justifier 4 hours ago 0 replies      
A gateway to 3d rendering with scripts

once you've exhausted these lessons you can show them blender

6
soared 15 hours ago 0 replies      
Give me screenshots.
7
asimuvPR 15 hours ago 1 reply      
This is actually something I was looking for. Very nice! Do you mind adding a license to the work?
8
deepnet 5 hours ago 1 reply      
Is there a good GPLed substitute for Minecraft ?

I am required to use a Free license as an educator and the massive appeal of voxels is undisputable.

30
Google rolls out If This Then That support for its $200 OnHub router arstechnica.com
184 points by shawndumas  1 day ago   184 comments top 19
1
rdegges 1 day ago 11 replies      
OnHub is one of the best routers I've ever owned. I've been buying custom router hardware for years -- I even went as far as building my own OpenBSD / PFsense router way back when, but nothing I've ever built (or bought!) has given me the same reliability / stability / speed as the OnHub.

I'm seriously impressed with it.

Also: the OnHub management app for mobile is AMAZING. It is so cool.

Awesome product. Really great to see them put more effort into it and support integrations with IFTTT :D

2
ProAm 1 day ago 3 replies      
Until they discontinue support and turn the service off in 18 months. It's hard to put any faith in Google products anymore, especially with the new Alphabet revenue strategy put into place.
3
some-guy 23 hours ago 3 replies      
I tried OnHub, but unfortunately it didn't have the option to force 5Ghz over 2.4Ghz for devices without going through some hacks. Where I live, 2.4Ghz wireless has 10% packet loss on average, no matter which router I use.

I'm happy that routers are finally getting the UX treatment they deserve, and their target market isn't for people who know the difference between the two, but OnHub should keep the nitty-gritty details accessible to power users if they so wish. I eventually had to return the router and go back to forcing 5Ghz on my old one.

4
zyxley 1 day ago 6 replies      
I just wish more of these companies would add support for a locally runnable IFTTT equivalent, or at least something like MQTT support so you can bake your own.
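A minimal sketch of what "baking your own" with MQTT might look like, using the paho-mqtt client against a local broker (the topic names and the light-control topic are hypothetical placeholders):

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # "If this (a device came online) then that (turn on the porch light)."
        if msg.topic == "home/router/device_online":
            print("Device joined:", msg.payload.decode())
            client.publish("home/lights/porch/set", "ON")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)          # local broker, e.g. mosquitto
    client.subscribe("home/router/#")
    client.loop_forever()
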
5
morgante 20 hours ago 1 reply      
The gratuitous negativity of this article is entirely uncalled for.

I would much rather that a device integrate with IFTTT and thus gain connectivity with the hundreds of services which IFTTT supports than for it to attempt to hand-roll integrations with a much smaller subset of services.

In case anyone disputes my claim of gratuitous negativity:

> Some smart home features finally come to OnHub, but using a non-Google ecosystem.

> Its only real differentiators were the funky design, easy setup, and the promise of future updates.

> Now with the IFTTT update, the OnHub finally supports some smart home featuresbut it's using someone else's ecosystem.

> IFTTT is now the gateway for controlling other things in your house via the OnHub rather than using some kind of Google communication standard like we expected.

> This is all still happening over Wi-Fi, too, so the OnHub is still not using any of the smart-home antennas it shipped with.

6
deprave 1 day ago 3 replies      
I cannot understand why anyone would buy hardware from Google. Chromecast for $35 a pop is as much as I'd be willing to pay from something that they could arbitrarily shut down.
7
ericclemmons 20 hours ago 0 replies      
Shucks. I just bought eero over OnHub because the former seems to handle multi-floor wireless better than OnHub.

I'm in a 4-story town home with PoE between floors, which works great, but Apple's AirPort Extreme + Express CONSISTENTLY stops resolving DNS while the wired connections are fine.

Looks like I have another alternative if eero doesn't work...

8
robbiemitchell 15 hours ago 1 reply      
Every time I read a story about IFTTT support, I want to shout "Please look at Zapier, it's so much better!"

Really, it's so much better.

Zapier = AWS Lambda with hundreds of pre-built integrations, webhooks for the rest, multi-step jobs, arbitrary code execution (JS and Python) ... all in a UI with monitoring that makes all the trouble go away.

9
Animats 23 hours ago 4 replies      
The remote update "feature" means this has a built-in backdoor into your local network. So you don't want to install this in a law office, anywhere that has to be HIPAA compliant, or any environment requiring security.
10
PerfectElement 14 hours ago 0 replies      
Went in to look for a recipe and didn't find it: I want to be notified when any new device connects to my network.

It seems pretty obvious to me and more useful than most recipes I've seen there.
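In the absence of router support, one rough way to approximate this yourself is to poll the ARP table on a Linux machine on the LAN and report MAC addresses you haven't seen before. A sketch (assumes /proc/net/arp, i.e. Linux):

    import time

    known = set()

    def current_macs():
        with open("/proc/net/arp") as f:
            next(f)                                   # skip the header line
            return {line.split()[3] for line in f if line.strip()}

    while True:
        for mac in current_macs() - known:
            if mac != "00:00:00:00:00:00":            # skip incomplete entries
                print("New device on the network:", mac)
            known.add(mac)
        time.sleep(30)
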

11
jMyles 1 day ago 1 reply      
This relies on an internet connection, right?

So many of these sorts of products do, and it is infuriating. It makes them utterly inapplicable for many environments. I'm building a home in a school bus. There will most certainly be times when I don't have a good (or any) internet connection, but I still want local and customizable automation.

My plan is to just roll my own with Raspberry Pis.
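For the curious, the core of such a Pi-based setup really can be tiny. Here's a sketch of a local rules loop with no cloud dependency at all (get_connected_devices and the light toggling are hypothetical stand-ins for whatever local APIs you actually have):

    import time

    def get_connected_devices():
        # Placeholder: in a real setup, query the router via SNMP,
        # its status page, or an ARP scan.
        return 2

    state = {"lights_on": False}

    def set_lights(on):
        print("lights ->", "on" if on else "off")   # stand-in for a real actuator call
        state["lights_on"] = on

    while True:
        devices = get_connected_devices()
        if devices > 0 and not state["lights_on"]:   # someone is home: lights on
            set_lights(True)
        elif devices == 0 and state["lights_on"]:    # house is empty: lights off
            set_lights(False)
        time.sleep(10)
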

12
davidu 23 hours ago 1 reply      
How does IFTTT make money? I can't figure it out.
13
rocky1138 23 hours ago 1 reply      
It was super annoying to set up the OnHub. Instead of a standard web interface, they require installation of a mobile app.
14
djhn 9 hours ago 0 replies      
Is there a beginner guide for modern, home-use or prosumer-quality networking hardware in general, or device-specific purchase recommendations for otherwise technically savvy people? Could someone please share a link? I've only ever used the hardware that comes for free with the contract from my cable operator, and I guess it works most of the time... but recently blind spots and slow-speed spots have been a nuisance.
15
xd1936 1 day ago 0 replies      
There's also a contest going on to win one, if you can think of a creative way to use the new IFTTT integration.

http://blog.ifttt.com/post/143084444158/onhub-on-ifttt

16
alexkavon 23 hours ago 0 replies      
17
DonHopkins 13 hours ago 0 replies      
Where would "If This Then That" be if not without "Put That There"? [1]

[1] https://www.youtube.com/watch?v=RyBEUyEtxQo

18
Matthias247 23 hours ago 0 replies      
Rules that depend on the number of devices connected to a router? Doesn't seem like a very reliable concept.
19
venomsnake 1 day ago 2 replies      
> To create Recipes for things you would like to happen automatically, just register and login at IFTTT.com (its free) and connect to the OnHub channel. Then start cooking up the Recipes that serve you best. Because OnHub on IFTTT works with so many products and services, there are lots of options for Recipes you can create.

And why does this need to be a cloud service, instead of a locally running daemon on a Raspberry Pi?
