hacker news with inline top comments    24 Apr 2016
1
The Sad History of the Microsoft Posix Subsystem (2010) brianreiter.org
43 points by the_why_of_y  1 hour ago   3 comments top
1
grennis 7 minutes ago 2 replies      
It seems this article badly needs to be updated in light of Ubuntu on Windows. Here's a link from 2016 instead of 2010. https://insights.ubuntu.com/2016/03/30/ubuntu-on-windows-the...
2
Telomere lengthening via gene therapy in a human individual neuroscientistnews.com
32 points by mkagenius  2 hours ago   12 comments top 5
1
ozborn 29 minutes ago 0 replies      
A good article, but if you take a look at the 7 problems that SENS ("Strategies for Engineered Negligible Senescence") identifies as needing to be solved to address ageing, the telomere length issue is just one subset of a single issue (cellular senescence). Furthermore, they appear to have fixed this only for a single cell type - white blood cells, which are easy to obtain and do gene therapy on.

Nonetheless it is still encouraging progress...

2
zavi 25 minutes ago 0 replies      
Hats off to brave Elizabeth Parrish who put her life at risk to push humanity forward. People were dismissing her as a lunatic at the time of treatment administration. Now independently verified results speak for themselves.
3
charlesism 29 minutes ago 1 reply      
Thanks to mkagenius for using a reasonable title. Sadly, the actual article went with the tabloid-style "First Gene Therapy Successful Against Human Aging."
4
reasonattlm 53 minutes ago 2 replies      
From BioViva: http://bioviva-science.com/2016/04/21/first-gene-therapy-suc...

They are a startup in their funding-press-funding-doing-things-press-funding cycle, so expect a certain amount of positioning. This is all serious work, however. Deep Knowledge Life Sciences recently took a position, so BioViva is within the circle that includes the reputable In Silico Medicine, Biogerontology Research Foundation, and other for- and non-profit groups in life science and aging research and advocacy:

http://www.eurekalert.org/pub_releases/2016-04/brf-dkl041916...

Take a look at the BioViva board of advisors, and note the presence of George Church, who is essentially the center of the network of connections for all gene therapy activities these days, and a luminary in the field.

The nature of the BioViva press here is that they did a thing, and the thing worked as expected based on the only available way to take a short-term measurement. That seems quite a reasonable thing to announce when you are a young company working on traction. It doesn't really imply rejuvenation without more useful data, such as DNA methylation assays of biological age, for example, and the details on that front are complicated, see below.

Telomerase gene therapies to treat aging are heading for human trials one way or another. Look at this position paper for example, from one of the groups to have demonstrated improved health, stem cell function, and longevity in mice using telomerase gene therapies in past years:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4815611/

BioViva's principals are taking the stance that present regulation and talk of moratoriums are ridiculous given the fact that so much good might be done, and that new gene therapy technologies such as CRISPR are now making it cheap and easy to run these sorts of edits in humans. I agree with this.

So telomerase gene therapy in mice extends life modestly: there are years of pretty robust results to demonstrate that. Average telomere length appears to be a reflection of aging, most likely not a significant contributor to aging. It is a poor measure, as the trend downward with age is statistical over populations. Average telomere length is some function of stem cell activity (delivering new cells with long telomeres) and cell division rates (shortening telomeres with each division). Stem cell activity declines with aging. Average telomere length is presently measured in white blood cells, which are going to have a whole lot of influences on cell division and replacement rates that don't exist in other tissues, due to immune system reactions to circumstances. Telomerase is clearly doing a lot of other things beyond telomere lengthening. Look at the work suggesting it is acting as a mitochondrial antioxidant, for example, or other cryptic activities.

The present consensus is that telomerase therapies in mice extend life through increased stem cell and/or immune activity. Mice have very different telomere dynamics, however, and there are concerns regarding cancer risk in humans. Trying it in dogs or primates would be the next safe thing to do - move to a mammalian species with telomere/telomerase dynamics that are closer to ours.

There is an argument that runs along these lines: telomerase gene therapy is just (primarily) another way of triggering old stem cells to get back to work, and therefore vis-à-vis cancer and risk should fall in the same ballpark as stem cell therapies carried out over the past fifteen years, and therefore full steam ahead because all of that work produced far less cancer than was feared. Prudence would suggest trying it out in something other than mice first, but I suspect the sudden ease of gene therapy means that this will be bypassed by the adventurous.

I'm totally in favor of adventure when it comes to gene therapies for follistatin/myostatin - I think the risk situation there is pretty much as low as it can get prior to hundreds or thousands of enhanced human patients. I'm more cautious on the cancer and telomerase front; I think more data there would be desirable before I stepped up to try it out. Other people can have other viewpoints, and that is the point - it should be the patient's informed decision, not that of a bunch of self-interested bureaucrats.

5
meeper16 17 minutes ago 0 replies      
I wonder what Google's CalicoLabs http://calicolabs.com thinks about this...
3
New body armour promises to transform fighting sports economist.com
50 points by edward  3 hours ago   17 comments top 7
1
Rhapso 2 hours ago 1 reply      
Having done some SCA heavy, those open inner thighs make me twitch (armpits too, but less could be done about that).

One thing SCA has figured out that the sports industry seems to have failed to is how helms work. It turns out the best way to reduce the impact of hits to the head is NOT cushioning or crumpling to extend the impulse, but rather just having a lot of mass to move. I've been smashed over the head at full force (with a full-weight sword) while wearing a full helm and been more harmed by the gong-like ringing than by the impact itself.

If you have not had a chance to try or watch SCA heavy, it is essentially full-contact re-enactment (slightly less formalized than HEMA), only using blunt (but full-weight) weapons.

2
mjs 2 hours ago 0 replies      
A small amount of video from the test event:

http://www.stuff.co.nz/business/industries/78468512/Weaponis...

The umpire carries a shield and a hand axe!

There's also a few other videos on a YouTube channel:

https://www.youtube.com/user/UnifiedWeaponsMaster/videos

3
duncanawoods 3 hours ago 1 reply      
I like the idea of using real-time cgi to render shiny suits of armour and the consequence of "hits". As grim as watching decapitation might be, I expect the ratings would be huge.
4
programLyrique 1 hour ago 1 reply      
Maybe it calculates the fractures and other injuries, but if the goal is realism, as a wounded fighter is likely to be less efficient, they should also find a way of hampering movement for the wounded parts.
5
mjdude 3 hours ago 2 replies      
Interesting, 19 kilos is still a fairly significant weight to be carrying around. It'll have a material impact on the way a person fights.
6
Aardwolf 51 minutes ago 0 replies      
Cool, kind of a blend between gaming and real sports
7
Tharkun 1 hour ago 0 replies      
Why is there no "buy now"-button? I want this. For reasons.
4
WISP: Battery-free computer that can be reprogrammed wirelessly fastcodesign.com
11 points by Jerry2  1 hour ago   3 comments top 3
1
baggachipz 28 minutes ago 0 replies      
> "Imagine if your wallpaper could run apps"

Uh. No thanks.

I think the technology has some great potential, but why does every tech article have to dumb everything down into some iPhone analogy?

2
detaro 27 minutes ago 0 replies      
"battery-free computer" = RFID tags with a programmable CPU
3
dang 30 minutes ago 0 replies      
We changed the linkbait title to representative language taken from the article. If anyone suggests a better title we can change it again.
5
How Big Data Creates False Confidence nautil.us
53 points by ezhil  5 hours ago   11 comments top 8
1
rm999 1 minute ago 0 replies      
I think "big" data is being confused with "ubiquitous" data in this article. Larger datasets will always lead to more statistical confidence in a conclusion you make from the data. The article does a great job of explaining the caveats to this (data skews, misuse of stats, external factors), but those issues exist with small data too. In other words, this isn't about the volume of data. I think there are two real and different issues at play here, and both come instead from the ubiquitousness of data nowadays:

1. It's become easier to do more experiments, so even experts are more likely to produce some bad conclusions.

2. Data has become much more accessible, so people without rigorous stats backgrounds have an easier time abusing the hell out of stats on datasets.

2
anotherhacker 1 hour ago 1 reply      
Mo' data, mo' problems

As your data set grows, unbounded variance grows nonlinearly compared to the valid data. As variance increases, deviations grow larger, and happen more frequently. This causes spurious relationships to grow much faster than authentic ones. The noise becomes the signal.

Related: Overfitting: https://en.wikipedia.org/wiki/Overfitting

Overfitting happens when you try to add too many variables to your training data. This happens because people think that by adding more data (variables), they can remove bias. What they end up doing is becoming better at describing the data they have, but not the overall phenomenon.

It's counterintuitive but mathematically true.
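
A toy sketch of that effect, for the curious (illustrative C only; the sample size, the number of random "predictors" and the fixed seed are arbitrary choices): correlate one random outcome against an ever-growing pile of equally random predictors and watch the best (entirely spurious) correlation climb.

    /* Spurious-relationship demo: every "predictor" is pure noise, yet the
       best correlation with the (also random) outcome keeps improving as we
       add more of them.  Compile with: cc spurious.c -lm */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N 50                      /* observations per variable */

    static double corr(const double *x, const double *y, int n)
    {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n, vy = syy - sy * sy / n;
        return cov / sqrt(vx * vy);   /* Pearson correlation coefficient */
    }

    int main(void)
    {
        double y[N], x[N], best = 0;
        srand(42);
        for (int i = 0; i < N; i++) y[i] = (double)rand() / RAND_MAX;

        for (int p = 1; p <= 10000; p++) {
            for (int i = 0; i < N; i++) x[i] = (double)rand() / RAND_MAX;
            double r = fabs(corr(x, y, N));
            if (r > best) best = r;
            if (p == 10 || p == 100 || p == 1000 || p == 10000)
                printf("best |r| after %5d random predictors: %.2f\n", p, best);
        }
        return 0;
    }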

3
DrNuke 21 minutes ago 0 replies      
It's a tool, not a magic wand, but it's difficult to keep your head cool when everybody and his dog competes for contracts, jobs and market attention. It's the 2010s gold rush.
4
jkraker 2 hours ago 1 reply      
These problems definitely are not specific to big data. They apply to statistics in general. The author is right, though--many people tend to be overly ready to draw overly confident conclusions from their analysis of big data just because...big data.
5
essayist 3 hours ago 0 replies      
This suggests that in many "Big Data" analyses, validation is an afterthought, because the interpretation of results is "obvious".
6
askyourmother 3 hours ago 0 replies      
Most of the recent "big data" projects we have been involved with will fail due to lack of basic direction from the client, from the beginning.

Trying to explain how they failed to find the twenty needles in the three pieces of straw, they now want to roll forward to a barn full of bales of hay to try and find fewer needles!

7
cataflam 3 hours ago 0 replies      
Concise article, good examples. It doesn't go into much depth after exposing the problematic examples, but it's a recommended read.
8
pessimizer 1 hour ago 0 replies      
The same way false precision creates false confidence. Ten vague, radically differing estimates off the top of ten people's heads? Rubbish. An average of those estimates taken out to 4 decimal places? Science.
6
Fast Finite State Machine for HTTP Parsing (2014) natsys-lab.blogspot.com
4 points by userbinator  36 minutes ago   1 comment top
1
userbinator 11 minutes ago 0 replies      
State machines are really the ideal application of goto, where the semantics map directly - "go to this state" - making the flow of the code very clear. The performance benefits are nice too, so in that sense I think it's beneficial for both the machine and the programmer.

It'd be funny to see the results of an implementation with "proper OOP design using the State design pattern"; I have a feeling that could be worse than the original switch/loop.
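
To make the goto version concrete, here is a tiny illustrative sketch in C (an invented toy, not the parser from the article): each state is a plain label, every transition is a literal goto, and the machine just scans for the end of an HTTP request line.

    #include <stdio.h>

    /* Returns the index just past the first CRLF in buf, or -1 if none. */
    static int scan_request_line(const char *buf, int len)
    {
        int i = 0;

    s_data:                                   /* state: ordinary bytes   */
        if (i >= len) return -1;
        if (buf[i++] == '\r') goto s_cr;
        goto s_data;

    s_cr:                                     /* state: just saw a CR    */
        if (i >= len) return -1;
        if (buf[i++] == '\n') return i;       /* CRLF found: accept      */
        if (buf[i - 1] == '\r') goto s_cr;    /* stay here on another CR */
        goto s_data;
    }

    int main(void)
    {
        const char req[] = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
        printf("request line ends at byte %d\n",
               scan_request_line(req, (int)sizeof req - 1));
        return 0;
    }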

7
Use OpenGL to get more RAM github.com
96 points by devbug  6 hours ago   31 comments top 14
1
kpgraham 2 hours ago 0 replies      
I worked with a programmer who cut his teeth on ancient iron in the 1950s. He had a computer that had 4K of RAM and two 512 byte I/O buffers. He would set the loader for the program into the I/O buffer and then use that to load and initialize the rest of the program. The I/O buffers would go away on the first read/write. Every byte counted.

I used the same techniques when I wrote MASM in the 1990s to create a 16K I/O buffer in the CGA graphics card. The graphics card memory was slower than regular memory, but the 16K made the program fly on reads and writes.

I remember a utility that used the Hercules video card memory for EMS (paged) memory. I think it came with the card.
2
bsorbo 2 hours ago 0 replies      
While the readme does state this could be a joke, this unfortunately isn't possible with PCI-e today:

Motherboards only support directly accessing an (up to) 256 MB segment of VRAM from the CPU. This is the BAR1 space/aperture space.

So attempts to create allocations larger than that which need to be resident in VRAM but also accessible from the CPU will instead land in system memory, in order to ensure they remain accessible from the CPU. Graphics drivers will either have the GPU read the data from system memory, or will do hidden copies to the GPU when they detect the resource is bound, etc.

The author sort of suspects this could be happening:

"There is no guarantee that the persistently mapped buffer technique actually references video memory. The worst case it's shadow memory and this actually wastes memory."

3
philsnow 20 minutes ago 0 replies      
Around 2002 I had bought a machine with a Duron CPU, which required a different kind of memory. I still had lots of memory for my old machine, something like 768MB cobbled together out of 64-256MB DIMMs.

So I created a ramdisk on the old machine, exposed it as an nbd (network block device), mounted it over the network (one hop over a 100 Mbit switch, didn't have Gbit kit) on the new machine and created a swapfile on it. I couldn't believe how well it worked. I never actually had any flakiness issues and I used it like that for probably a year.

5
tluyben2 5 hours ago 3 replies      
It used to be standard to use your VRAM as RAM where possible. If you were not coding a game but something like a business tool, you would just stuff parts of the application in VRAM so you could swap it out from there instead of from HD or, before that, floppy. This was without OS support mostly though. For work I haven't needed this technique since the end of 90s I think.
6
datenwolf 3 hours ago 1 reply      
Except that it doesn't give you any extra memory at all. In case you're using a dedicated GPU, the peripheral bus in between (PCIe) is a serious bottleneck. In addition to that, the OpenGL object model does not have the concept of automatic "object eviction"; on embedded architectures (Android) you may lose the OpenGL context and have to recreate it. But you'll never lose a single OpenGL object.

The bottom line is that the OpenGL driver will create a backing store in system memory for each and every buffer object. So if you allocate 4GiB of OpenGL buffer objects, it will allocate 4GiB of system memory; if it's a shared-memory GPU that's it, and if it's a dedicated GPU the GPU RAM is actually more of a cache for the backing store.

glMapBuffer will usually just give you access to this backing store so that writes can be coalesced into a single transfer when unmapping. Also, you don't want a full round-trip read-modify-write. In case of coherent mappings the coalesced transfer is triggered by a GPU-side data read operation on that buffer.

TL;DR: RAM on graphics cards is a cache on top of system memory (as far as OpenGL is concerned).
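
For reference, a minimal sketch of the persistently mapped buffer technique the repo and this comment are discussing, assuming an already-created OpenGL 4.4+ context (via GLFW/GLEW or similar) with error handling omitted; the function name is made up, and as noted above the pointer you get back may well be backed by ordinary system memory rather than VRAM:

    #include <GL/glew.h>

    /* Allocate an immutable GL buffer and keep it mapped so the CPU can use
       the returned pointer like ordinary memory.  Whether the data lives in
       VRAM or in a system-RAM backing store is entirely up to the driver. */
    void *alloc_gl_backed_memory(GLsizeiptr size, GLuint *buf_out)
    {
        GLbitfield flags = GL_MAP_READ_BIT | GL_MAP_WRITE_BIT |
                           GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

        glGenBuffers(1, buf_out);
        glBindBuffer(GL_ARRAY_BUFFER, *buf_out);
        glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);   /* GL 4.4+ */
        return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    }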

7
stevetrewick 5 hours ago 1 reply      
Finally an answer to the age old question "where can I download more RAM?"
8
snvzz 2 hours ago 0 replies      
When out of FAST RAM, use CHIP RAM :)
9
known 16 minutes ago 0 replies      
IMO Firefox needs more CPU than RAM
10
pjc50 5 hours ago 0 replies      
Once it's offloaded to the GPU, it could compress it on an LRU basis...
11
AshleysBrain 4 hours ago 1 reply      
Is there a reason operating systems don't/can't do this automatically? It sounds like a faster swap space than disk.
12
witty_username 5 hours ago 0 replies      
This seems useful for AMD APUs where system RAM is reserved by the iGPU.
13
kebolio 5 hours ago 1 reply      
Sounds to me like a perverse realisation of AMD's APU ideal, treating VRAM like system RAM, although in this case they are still disparate memory spaces.
14
Zekio 6 hours ago 0 replies      
Always need more ram!
8
Something Secret This Way Comes llogiq.github.io
134 points by llogiq  8 hours ago   29 comments top 5
1
danschuller 5 hours ago 5 replies      
This is a total tangent but I always thought it might be interesting if an IDE pulled data from source control (and presented it inline somehow).

How "hot" a function is - has it reiceved a lot of edits in the recent past? Or is cold and old (and therefore you'd imagine reasonably bug-free for most use cases).

I think the number of unique people editing a portion of code would also be interesting to know, as you might expect multiple authors to have different beliefs about the point of the code, and therefore the code is probably going to be muddier and less clear.

2
da4c30ff 5 hours ago 1 reply      
This is something that I'm actually writing for my own programming language, coincidentally also written in Rust.

My idea was to label each expression with these flags, i.e. is the expression constant?, tail recursive?, etc., and then make that information available for the text editor and other tooling, so the user can instantly see certain things about their program, and see the type of optimizations the compiler will do for them.

3
eigenrick 3 hours ago 0 replies      
Sounds like it could be a basis for a security oriented scanner. Rust has less to worry about regarding memory security, but performing taint analysis for logical security flaws could be mighty interesting.
4
cm3 7 hours ago 2 replies      
Reminds me of LLVM's EfficiencySanitizer that will probably land in 3.9: http://lists.llvm.org/pipermail/llvm-dev/2016-April/098355.h...
5
niccaluim 7 hours ago 3 replies      
Maybe I'm missing something but this sounds less like a linter and more like an optimization pass?
9
How London buses are numbered (2009) markhadfield.typepad.com
47 points by wlj  3 hours ago   4 comments top 3
1
bmsleight_ 2 minutes ago 0 replies      
I visited the London Bus Museum (not the Transport Museum in Covent Garden): http://www.londonbusmuseum.com/ There's a pre-First World War bus there with the 164 (my local route), still with the exact same stops.
2
hiharryhere 2 hours ago 1 reply      
The Brits really do write well. What a wonderfully worded response from TfL.
3
userbinator 57 minutes ago 0 replies      
Here's all of them, along with information on the mentioned numbering scheme:

https://en.wikipedia.org/wiki/List_of_bus_routes_in_London#C...

10
Solar Impulse lands in California after Pacific crossing bbc.com
68 points by frgewut  7 hours ago   21 comments top 4
1
barney54 4 hours ago 2 replies      
The Solar Impulse is really cool technology. It also shows why oil use has been so prolific. The energy-to-weight ratio of jet fuel is a marvel when you consider the size of today's jet aircraft and the distances they fly with hundreds of passengers.
2
cubano 2 hours ago 1 reply      
I am surprised that no one has been developing (or I haven't heard of) fully autonomous airplane pilot systems. Something like that tied to a fleet of solar planes like this for shipping stuff could be a real game changer for certain types of cargo.

Flying actually seems like a much "cleaner" environment for the AI to navigate than roads do, to be honest, and the dependence on instrumentation rather than visual cues is of course a natural fit.

3
ruffrey 1 hour ago 0 replies      
Impulse propulsion flown by Piccard? Checks out
4
harwoodleon 5 hours ago 2 replies      
"He predicted that, 50 years from now, electric aeroplanes would be "transporting up to 50 people"."

Shame the mega storms that are swirling the planet at that point will probably rip the planes out of the sky.

50 years too late if you ask me.

11
ChaosKey: a Hardware True Random Number Generator That Attaches via USB altusmetrum.org
6 points by pmoriarty  57 minutes ago   discuss
12
Apple's Organizational Crossroads stratechery.com
19 points by hullo  2 hours ago   discuss
13
Everybody Freeze: The extropians want your body thebaffler.com
9 points by pron  2 hours ago   discuss
14
Amateur ISIS Investigator Ends Up in Prison nytimes.com
59 points by mhb  2 hours ago   41 comments top 9
1
fyirt 1 hour ago 2 replies      
Guy was clearly unstable and in over his head. Not against the law mind you, but his mental health was deteriorating (by his own mother's admission) and he began threatening FBI agents.

The 14 months in prison without due process could have been expanded on more in the article, there was next to no info regarding that situation which leaves the reader with a fair few unanswered questions.

2
user24 1 hour ago 1 reply      
Holy Moly.

> Without access to his records, prison psychologists assumed his tales of talking to Islamic State members were fiction, symptoms of a mental illness that made him incompetent to stand trial. Prosecutors sought a hearing to decide whether he should be forcibly medicated.

3
DanBC 22 minutes ago 0 replies      
If you live in England and think this kind of psychiatric detention needs stronger controls you might be interested in looking at becoming a "Mental Health Act Manager", or "Hospital Manager". This is often a voluntary position that looks at some detentions under the Mental Health Act and sometimes orders the de-sectioning of people detained under section.

http://www.mentalhealthcare.org.uk/mental_health_act

https://www.rethink.org/resources/m/mental-health-act

https://www.rethink.org/living-with-mental-illness/mental-he...

In England it's rare to become a mental health inpatient. Only about 8% of the people getting care from mental health trusts ever go in-patient. Some of that is lack of beds (especially for children), but mostly it's because hospitals are sometimes harmful (This is true for physical health too) and people should get better care from "Crisis Resolution and Home Treatment Teams" (for short term emergency care) and community teams (for longer term recovery and rehabilitation).

But if you think it's something that might happen you should probably think about advanced directives, and sorting out who your "nearest relative is", and getting a crisis plan set up.

4
cubano 1 hour ago 2 replies      
I found the most surprising aspect of the article was the guy absolutely refusing to believe that his Skype contact was phony and trying to scam money from him, even when the very law enforcement he so trusted to help "save the hostages" halfway across the world told him so.

I wonder what sort of psychological issues must be present, and why, that allow someone to be sucked so deeply into believing the fantasy of the situation, even in the face of overwhelming evidence, and how that very thing exists in so many other decisions that confront people.

5
ghshephard 1 hour ago 2 replies      
This guy was really asking for trouble when he went ballistic on an FBI agent, from the article:

he sent an agent 80 increasingly overheated messages in 10 days. In one, he declared, "Just remember whatever ends up happening to you ... You deserved it," and added an expletive.

6
Hondor 23 minutes ago 0 replies      
People are saying he was "unhinged" or mentally ill, but it sounds like he was simply sucked into an online scam. That happens to normal people all the time (even professionals, through "whaling"). If you believe your scammer, and he tells you about freeing hostages, and those stories are corroborated by what you hear on the news, then what normal person wouldn't get agitated and emotional? That doesn't sound like mental illness, it sounds like a normal reaction to finding out that the one person who could save some hostages is refusing to cooperate. Don't forget his friend was actually killed in Afghanistan, so he's got some genuine emotional connection to these troubles already, making the other stories easier to believe perhaps.

What would you do if you believed you had a way to free a hostage and the FBI just told you to get some sleep? When the hostage was killed, what would you say to that FBI worker? Normal mentally healthy people would get angry and obsessive when they believe they're being impeded from saving lives. I've seen more extreme reactions for far lesser problems.

7
pinaceae 10 minutes ago 0 replies      
well, the contrast is all the other mentally unhinged, armed men in the US that do not get taken off the street and then go on a shooting rampage - recent example being ohio.

pick your poison.

seen this liberal approach in europe with a neighbors daughter, paranoid schizo. threatened her mother with a knife, admitted to hospital, meds, meds work, she clears up, gets asked if she wants to leave, yes, is home, stops taking meds, goes psycho again, knife comes out. rinse repeat, for years. because yes, let's give mentally unstable people the choice over life or death decisions.

8
blowski 1 hour ago 2 replies      
Off topic, but why does the NYT website have a 'show full article' button (on mobile at least)?
9
a3n 1 hour ago 1 reply      
Man who made a vague threat against FBI agent is disappeared for 14 months.
15
Schools are helping police spy on kids' social media activity washingtonpost.com
78 points by raddad  11 hours ago   45 comments top 15
1
imgabe 15 minutes ago 2 replies      
It seems every time there's a mass shooting or a tragic suicide, people find out there were a bunch of social media posts beforehand that clearly broadcast the perpetrator's intent. Every time we ask "Why didn't anyone see this coming?"

Well, this is us "looking" to see these things coming, but now analyzing publicly available information is a violation of privacy?

2
dsfyu404ed 3 hours ago 0 replies      
I'd be ok with this... if it only monitored posts made during school hours on days students were present. But it doesn't, so I'm not ok with this.

Looking on the bright side, a bunch of kids are gonna get a crash course in basic opsec, aka "not posting stuff you wouldn't want your boss/the cops/the entire world to see online". Unfortunately, this will probably screw up the lives of a lot of students who come into contact with law enforcement when they really just need help.

3
rubyfan 4 hours ago 2 replies      
So basically the school system doesn't want oversight?

FTA:

> Details of the 12 police investigations that stemmed from searches in the past year have not been divulged by the school system. The school system told the Orlando Sentinel that it doesn't want public details of the program to interfere with its effectiveness.

4
nv-vn 6 hours ago 7 replies      
At times I like to feel like 1984 didn't come true ever, and that our limited surveillance, in the grand scheme of things, is almost tolerable in comparison. But every time I see something like this I change my mind. Everything here is about infiltrating the privacy of children and not about their safety. As soon as we get parents to accept this surveillance for their children, we know we're not far off from not just mass surveillance of metadata, but large-scale surveillance of all Internet communication we conduct on a more-or-less personal level. If the current parents are okay with it, how much more will future parents allow now that the precedent is being set? How much intrusion will these children be willing to take once they're grown up?
5
Hondor 4 hours ago 2 replies      
The whole point of making public posts is that they want lots of people to see them. So who cares if the school sees them too? That's part of the public. It's not spying if the spyee is intentionally broadcasting the information and wants everyone to see it.

Imagine if a teacher walked past some kids bullying their classmate in the hall. She overhears the insults they're shouting and then calls the bullies in to tell them off. Isn't that what we want? Do we want school staff to turn a blind eye to bullying and stand by when they know who's doing it and what they're doing?

6
CPLX 3 hours ago 0 replies      
I wonder what would happen if schools spent their full energy and focus trying to, you know, like teach kids stuff?
7
sdoering 8 hours ago 0 replies      
I'm so glad I grew up some twenty to thirty years prior to this.

I was bullied and there was nothing any grown up could have done without me telling them about the bullying.

Nonetheless, as said above, I am glad not to have to grow up in this panopticon.

8
mirimir 9 hours ago 2 replies      
Well, it's apparently public posts that are being searched. So kids just need to learn some OPSEC, no?

But indeed, it must suck, growing up in the panopticon :(

9
cronjobber 7 hours ago 0 replies      
Providing the young with a deep distrust of all things Facebook? Sounds like educators are actually doing their jobs.
10
kinai 7 hours ago 1 reply      
2016 Police Report: 4 kids were successfully stopped from stealing candy. Parents suing us because kids suffered psychological damage and now have a criminal record, but we all know that this is just caused by bad parenting.
11
zyxley 8 hours ago 0 replies      
And people wonder why kids are using apps like Snapchat that are intentionally designed to be incomprehensible to outsiders...
12
lilcarlyung 6 hours ago 0 replies      
How do they identify which social media accounts belong to which students?
13
facepalm 2 hours ago 0 replies      
I don't see how anybody can complain if public posts are being analyzed.
14
lerie 7 hours ago 1 reply      
been happening for years...
15
turninggears 4 hours ago 1 reply      
I can't comment on this specific school district, but in my own school district, there have been a fair number of instances where it was discovered, after an incident, that students had been planning a fight or physical confrontation for days in advance on social media. If the district had programs like this in place, it could have actually improved student safety. I know many here are claiming that this is just a pretense for monitoring students, but that's not what it looks like from my perspective.
16
Creating Magnetic Disk Storage at IBM (2015) ethw.org
3 points by Oatseller  1 hour ago   discuss
17
Fast incremental sort larshagencpp.github.io
7 points by ingve  2 hours ago   discuss
18
Discovery of 4,500-year-old female mummy sheds light on ancient Peru theguardian.com
5 points by diodorus  1 hour ago   discuss
19
Seattle vigilante reuniting stolen bikes with their owners theguardian.com
4 points by bootload  1 hour ago   discuss
20
Designing Ryanair's Boarding Pass medium.com
6 points by plurby  2 hours ago   discuss
21
Megacities, not nations, are the world's dominant, enduring social structures qz.com
7 points by samsolomon  3 hours ago   discuss
22
Ask HN: How do you decide what to learn next?
86 points by vijayr  3 hours ago   57 comments top 33
1
tedmiston 1 hour ago 3 replies      
Something I've always wanted is "a Netflix queue for tech I want to learn"

Ideal features would include:

- a regular review of the things you've listed to see if they're still relevant and to help you prioritize

- a way to see what's trending amongst everything you've listed (ex. I have three front end web frameworks on my list but React is collectively popular, so perhaps I should start there)

- it could notify you if a new (good) book / blog post is published on a topic you're interested in

- you could compare with your friends to see if someone you know has learned it recently or to sit down and hack together

- it could share a common list of subtasks across users -- for example, starting with Django Rest Framework might consist of: (1) doing the python tutorial + (2) doing the Django tutorial + (3) doing the DRF tutorial

2
shekhargulati 1 hour ago 2 replies      
I am doing a series called 52-technologies-in-2016 https://github.com/shekhargulati/52-technologies-in-2016 wherein I learn a new technology, build a small app, and blog about it every week. I maintain an Evernote where I write down all the interesting topics or projects I find. I go through the list and randomly pick any topic that excites me that day and then work over the weekend to publish something. This helps me stay in a continuous learning loop.
3
DonPellegrino 25 minutes ago 0 replies      
I pick some technology that has the following characteristics:

- A complete paradigm shift. I want something that will force me to rethink how I approach problems. I want to force my brain to develop new pathways, so to speak.

- Something with at least a minimum of documentation and community online. I've had to abandon dreams of learning some really awesome language before (ATS) because there were simply no resources and that would have made my progress too slow.

- Something that could be fun to use in a side project. I need to be able to find occasions to use it. I learn best by doing rather than by only reading. If I can't think of an application, then I won't be able to become proficient, so I'd rather learn something else.

- And finally, it has to be something fun that feels like falling in love with programming all over again.

EDIT: I usually don't pick more than 1 technology to REALLY LEARN per year, so I don't make these decisions lightly.

4
voltagex_ 2 hours ago 1 reply      
Don't underestimate "this is interesting". Couple it with a goal (like building a personal site, doing some home automation, building a NAS) and make sure you take notes (if not a blog). You'll learn heaps in no time.
5
heartsucker 2 hours ago 0 replies      
I pick a project I want to complete or a cause I want to contribute to. Then I look at the smallest step I can take to work toward that (learn the basics of a new language, protocol). Then I iterate (learn a framework, tool). Then I try to close a bug or release the project to the wild. This usually leads to comments on a PR or someone opening bugs with the project, and then I have to learn something new to fix it.

I think it's much easier to learn things when you have a goal because you have to learn tiny nuggets of knowledge that are useful. I find learning something without the context of how to apply it to the real world is very difficult, so I generally don't just go out and learn things (tech-wise) without a legitimate usage in mind.

6
fbr 1 hour ago 0 replies      
I've recently started this method:

- create a list of all the tools/products/whatever you use at work

- rate them from 1 to 5 (5 you are an expert on this topic)

- then pick the most important one for your job and try to increase your grade

The grade is totally subjective, but still it helps.

For the "next big thing" I take a look regularly to the thoughtworks radar [1]. That's a nice overview.

[1] https://www.thoughtworks.com/radar

7
Walkman 3 minutes ago 0 replies      
I usually learn what I need for my job.
8
crispyambulance 21 minutes ago 0 replies      
I am surprised that no one has mentioned consultation with a MENTOR.

Really, if you "don't know what you don't know", you need some type of trusted guide who understands your aptitudes and motivations and can recommend a path of study or provide some clarity of thought.

There is absolutely nothing wrong with fumbling around in the dark and discovering stuff on your own, but if you're asking this question, it means you're resource-constrained and need some clear goals to work towards. This is where a mentor or at least a colleague can help out a lot. In addition to providing guidance, a mentor can critique and analyze the direction you're taking.

"Todo" lists are perfectly fine tools for mastering some narrow focused topic or for achieving completion of a small project, but they're not a strategic tool. Deciding what to learn next is very much a STRATEGIC question, and those kinds of decisions benefit greatly from dialog with an expert who cares about your success.

9
beilabs 2 hours ago 0 replies      
I'm someone who has decided not to learn every new language or framework that comes my way.

If there is something that can speed up my workflow, I learn it. If it improves my applications speed without much time to implement / learn then I work on that.

For new shiny Javascript libraries I really have held back to see what the winner will be; backbone used to be the go-to lib, ember (tried to learn but it changed a lot in the early days), angular (decided not to invest any time into it).

There are 24 hours in a day, don't try and learn everything, just try and be productive with the tools you have and the ones that will get the job done for you.

My time these days are spent learning Nepalese, React and trying to build a business in Nepal....keeps me busy.

10
mysticmode 2 hours ago 0 replies      
I learn things by setting a purpose. Eg: For a web project, I need to learn new programming technologies.

I have a very sensitive mind. I can't concentrate on multiple projects at the same time. For example: If I'm working in a day job, I can't work on a side project efficiently, I can't concentrate on both my office work and side projects. If I do, my employer could easily figure out that I'm churning out.

So, If I want to work on a project.. I'll make sufficient money then I quit my job and spend next coming months fully-fledged on my project.

11
brightball 2 hours ago 0 replies      
Usually if something is interesting it's because you're thinking about it in the context of a problem that you need to solve. Basically, it's interesting because you see the potential value.

That generally is what drives what I learn. I'm about to start getting deep into BPM2 and Activiti because it looks like it will solve an organizational problem that I'm currently observing, just as an example. Otherwise it's not really connected to anything I would be doing otherwise (although there are a few potential use cases if I understand the system the way I think I do).

12
jobigoud 41 minutes ago 0 replies      
In addition to what others have written, I give an increased priority to things that will help me learn other things in the future.

If I have two topics on the top of my list, and I expect one of these to provide new mental tools, or meta insights about learning or cognition, I'll pick that one.

I also give a higher priority in general to techs that will cascade into improving my speed on future projects in the most generic way. For example I consider that learning how to automate something is never a lost cause, even if I can do it manually at the same speed, because it increases my knowledge about automation, which will be useful down the line.

13
noir_lord 2 hours ago 1 reply      
Generally I look at the stuff I do day to day and then honestly critique myself for where I'm weakest and then learn from that.

Since I'm the only dev and I have to do back end and front end I realised that I was weakest on the front end (particularly JavaScript) so I made a concerted effort to learn JavaScript properly since apart from picking it up organically for years I'd never really studied it.

The funny part (to me at least) is that while I'm never going to like JavaScript I dislike it a lot less than I used to once I understood the underlying structure better.

14
narag 2 hours ago 0 replies      
Mostly what I anticipate I will need for the job. For my own pleasure I choose tools whose proponents talk with a reasonable voice. I dismiss any technology when I see people that:

* Says that the rest of the world is doing it wrong and they will fix the situation with this "change of paradigm".

* Presents their products as a "social movement" that's "challenging the industry as we know it".

* Promises 10x productivity.

* In general, bases their success on attacking others. Especially if they say things like "everybody knows exceptions are like cancer".

* Uses grandiloquent names for a two-thousand-line library.

* The resulting code looks like gibberish. The more likely a child can understand it, the better. If it's directed at elite programmers, bad.

* Doesn't put enough care on tooling.

Edit: OK, it's a very negative answer, but it's effective. It quickly discards 99% of shiny new things.

15
yankoff 24 minutes ago 0 replies      
I think the key here is to have long-term goals. I am trying to at least roughly understand where I wanna be in the future (1, 5, 10 years) in terms of my skillset, abilities and knowledge. Then the topics I learn should be aligned with those goals. There are definitely too many interesting things, but before I jump into something new I ask if it really helps me to get where I want to be in the future.

There are sometimes exceptions to this, when I just want to learn something for fun, do it as a recreational activity.

16
haffi112 2 hours ago 0 replies      
I keep a list of things that interest me with sublists about interesting observations I make about each item on the list (I use workflowy for this task - no affiliation). The observations can be anything from ideas, to blog posts, books, online courses or articles about the subject.

When I want to learn something new I pick an item from the list and work on it (usually some idea I came up with). My preliminary research efforts often help me realise an idea or it gives me a chance to compare two different learning sources. I also try to create something using what I learned. Through craft I feel like I draw more from the learning experience. The outcome can also be that I need to find better references or that I simply want to learn something else.

The most difficult thing is getting started. I find it useful to be systematic about it by explicitly devoting time to it. Once you have a system in place that you like, it eventually becomes a habit. Also note that it is helpful to break tasks into small subtasks. Having a feeling of accomplishment leads to a more positive experience of the learning process, which further leads to increased learning drive.

Note that my process is not much more sophisticated than "this is interesting". However, instead of acting on some hunch in the moment I act on observations which I gather over time.

17
tedmiston 1 hour ago 0 replies      
By whoever's giving away the latest free t-shirt

https://developer.amazon.com/public/solutions/alexa/alexa-sk...

18
codecurve 2 hours ago 1 reply      
Read. And I don't just mean books (although they are a great place to start). Read technical blogs, read documentation, read other people's code. Seek out challenging reads that seem overly ambitious and use them to find out what your unknown-unknowns are, then use that knowledge to steer your learning.
19
agentultra 2 hours ago 0 replies      
Usually I'm pursuing something. Along the way as I gain experience and encounter difficulties I put my head up and look for solutions. I don't always find what I'm looking for but it gives me hunches. When those hunches collide I get ideas and from there it becomes pretty clear what I know and what I need to learn in order to progress.

I'm presently learning predicate calculus and formal specifications of software systems. I came to it by hunches: software engineering should be more like engineering because companies like Yahoo! Japan are building earthquake notification systems on OSS infrastructure and the keynotes at Blackhat suggested it was a requisite for this industry to move forward. It turns out the math is beautiful and it helps me design better software and I'm only just getting started.

It has also added new things to my list of things to learn such as the refinement calculus as well as alternative modelling systems like Event-B.

20
stonemetal 2 hours ago 0 replies      
I use "This is interesting" to give it about three or four days. After the "this is interesting" stage, things that fall into the "this might be useful" bucket get a couple of weeks. Then it is either getting used, or it is getting put on the back burner indefinitely.
21
shrugger 1 hour ago 0 replies      
I try and learn things bottom-up. I don't know if that's a good way or not, but that's how I've always done it. It has seemed pretty natural to me to sort of explode things into pieces and pick it up small bits at a time, gradually composing all of the knowledge that I need to be able to complete the thing I'm working on/learning about.

Is there a better way?

22
pknerd 1 hour ago 0 replies      
> How do you decide what to learn?

Work on my own idea.

Another option; work on freelance projects. Earning could be a good motivation to learn new stuff. At least it is for me.

23
m0rganic 1 hour ago 0 replies      
You need to pick something you can sink your teeth into but unfortunately that doesn't satisfy your first requirement (limited amounts of time). Learning things of value normally takes time and lots of dedication.
24
tedmiston 1 hour ago 0 replies      
Sometimes I just browse StackShare for what's trending or if there's a more highly rated competing tool for something I use regularly.

http://stackshare.io

25
vinitagr 1 hour ago 0 replies      
I decide based on what I need to build next, and that came from what I want to build, at some point in the past.

Also I have "This is interesting" moments, from time to time.

26
asimuvPR 1 hour ago 0 replies      
I look forward in time and try to imagine myself knowing/doing something new. Whatever pops into my head is what I go for. Always live in the future and build towards it.
27
gd2 2 hours ago 0 replies      
I should do better, because I've been random in deciding what to learn. But some combination of: found good teaching materials, people I'm in contact with are learning it, and this could pay off big.
28
lazyant 1 hour ago 0 replies      
Intersection of what looks like fun, that I can be good at, and good career or money-wise.
29
Bootvis 2 hours ago 0 replies      
Other than that: 'This is useful' ;)

I believe something can be useful when I need to know it or when it is a good basis for other more applied topics.

30
DavidSJ 2 hours ago 3 replies      
When in doubt, learn more math.
31
sidcool 2 hours ago 0 replies      
Thanks for asking, I have been struggling with the same problem for some time now.
32
cubano 2 hours ago 0 replies      
I'm not really convinced it's even possible for me to learn something that I'm not interested in.

[edit] By learn, I don't mean simple regurgitation of the facts or some superficial thing, I'm talking about extended study and efforts.

33
mapcars 2 hours ago 0 replies      
I just feel it.
23
Solar Impulse 2 completes 62 hour gas-free Hawaii to SF flight cnn.com
6 points by ilyaeck  53 minutes ago   discuss
24
Verifying Bit-Manipulations of Floating-Point [pdf] stanford.edu
3 points by ingve  3 hours ago   discuss
25
it seems that Fenix finally reached Twitter's token limit twitter.com
7 points by karangoeluw  28 minutes ago   discuss
26
Ocean Mobile Linux Server getocean.io
63 points by ashitlerferad  5 hours ago   57 comments top 25
1
mciancia 3 hours ago 1 reply      
Not sure what the purpose of this is. No Ethernet, only 1GB of RAM and only a 2x 1GHz Allwinner CPU for $150... An RPi/ODROID has a bigger community and is more powerful. And if you need something in a nice-looking case, there are a lot of Chinese Atom-based mini PCs with built-in batteries for around the same price...
2
atmosx 4 hours ago 1 reply      
I manage various RPis and had more than a few mini-devices acting as Linux server (ebox 2250, bifferboard, etc.).

I would give this one a shot if it had an SSD, dedicated gigabit Ethernet and/or a GPU; in short, if it more closely resembled a real mini-scale Linux server somehow.

When that doesn't happen I'd go with a device that's more open, cheaper and has better support and a bigger community, such as the RPi.

3
tyingq 3 hours ago 1 reply      
I do think there's a market for an RPi-type board with a high-quality case and an integrated battery. This seems to fall short though, in a few places...

- Marketed as a portable headless server, and indeed, limited to something like that. No access to GPIO, no video. But, relatively low 1GB ram, and limited, non-upgradable storage.

- Wireless charging seems like a feature nobody is looking for in this type of device. Charge time with the wireless is 10 hours vs 4-5 hours via USB. I would guess it's also driving part of the price point.

To me, if you're going to market as a server, you base it on something like the CubieTruck, more RAM, 1G wired ethernet, more storage options, SATA, etc.

Or, go the other route and provide what's good about an Rpi device (GPIO, video, low cost) and add the quality case and battery.

4
reitanqild 4 hours ago 4 replies      
Honest although tangentially related question: why node.js?

I honestly really don't get why someone would want to use JS server-side, and I say that as someone who did create cool things with JS client-side and wants to know why I should spend time on this new cool stuff.

Edit: automatic upvotes for serious answers as long as I see them before going offline :-)

5
smoyer 4 hours ago 0 replies      
One of the questions in the FAQ (and repeated in this thread) is why not just buy a RasPi - In the '80s and '90s I was working for companies that manufactured electronics and, unless you're producing very large numbers of units, the case and the power system were always our biggest cost.
6
beagle3 5 hours ago 0 replies      
In their FAQ, comparing an Ocean to RasPi, they say the RasPi has no WiFi or Bluetooth - which was true before Pi3, but is not true any more.
7
iuguy 1 hour ago 0 replies      
Key question: Why would I choose this over setting up a chroot on an Android device?

My rationale is that Android devices are far cheaper, especially older phones second hand. Something like a Nexus 5 can run Kali Nethunter which gives me a full Kali environment if I want it, or I could run a Debian chroot using one of the many various options available.

8
NetStrikeForce 1 hour ago 0 replies      
This is one more ideal use case for an always-on VPN, so you can always reach the server privately, securely and potentially on the same IP address.

Are there any plans to integrate something like (Disclaimer: I made this) https://wormhole.network with it? Actually, you would just need to include SoftEther's VPN client (https://www.softether.org) by default :-)

9
lispm 5 hours ago 2 replies      
I got the beta version and it died already. Doesn't charge and doesn't boot.
10
bikamonki 2 hours ago 0 replies      
The only use case I get from reading the website is bringing a server to a remote location with no wall power/connectivity. I'd argue that is something that nowadays is solved with software, not hardware. We've done data-interactive, offline-first apps that sync when connectivity is available. This approach means that client and server run on the same $50 smartphone with the same CPU/RAM/HD as these servers but with many more features.
11
wiz21 3 hours ago 2 replies      
Given that the communication infrastructure is so widely deployed (Wi-Fi everywhere, networks everywhere), why would I want to actually move my server? I mean, the client moves and the network infrastructure makes sure the server is reachable anytime, anywhere... So why would I want to move it?
12
Raed667 4 hours ago 0 replies      
This looks like an overpriced Raspberry Pi with wireless charging.
13
martin_ 3 hours ago 0 replies      
I was involved in the development of this with two stellar engineers (David/Kousha) at iCracked. Surprised to see this here at this point as all available units sold several months ago. Happy to answer questions!
14
rcarmo 3 hours ago 0 replies      
Having built my own portable server a couple of times* using old Android phones, I genuinely like the idea, but find it quite surprising that someone actually went out and built an integrated device - must be a pretty small niche market.

*: https://taoofmac.com/space/blog/2013/04/28/2330

15
tkubacki 3 hours ago 0 replies      
There is still plenty of room for a quality Linux laptop for devs - that is a much bigger market than this. The question arises: is it really that hard to do?
16
Zekio 2 hours ago 0 replies      
I think a Pine A64 with a lithium battery is a better solution (you can use a battery bank while using a lithium battery, which also allows hot swapping of the battery bank).

Since you can get WiFi, Bluetooth and Gigabit Ethernet.

After which you just need a pretty case and you are pretty much set.

17
b0p1x 2 hours ago 0 replies      
I would love this if airlines allowed personal wifi in-flight so I could offload my development builds which I currently have to run on my laptop. As it stands, you can either use the airline's onboard wifi or just not use wifi. :(
18
chefkoch 4 hours ago 1 reply      
Why not use a pi and a powerbank?
19
elcct 2 hours ago 0 replies      
For $15 you can get Orange Pi PC + add $x for battery and WiFi and you are much better off.
20
erikb 2 hours ago 2 replies      
Ads are not desired on hacker news. At least put some content around your advertisment that enables people to learn something new.

And what's the difference between this and all the other small Linux machines that you can buy, like Raspberry Pis? From the landing page it looks like the developers don't even know that these kinds of computers have existed for more than 5 years now.

21
plaes 3 hours ago 0 replies      
Apparently this device seems to be based on Allwinner A20 SoC which nowadays has quite good mainline Linux kernel support.

So I wonder why they are still using vendor-provided 3.x kernel.

22
cmdrfred 4 hours ago 1 reply      
"Raspberry Pis do not have built-in Bluetooth or Wifi"

Time to update the page. http://makezine.com/2016/02/28/meet-the-new-raspberry-pi-3/

23
imaginenore 4 hours ago 0 replies      
"mobile" as in Wi-Fi, not 3G/4G.

Seems awfully expensive for its specs. You can get pcDuino3 for $66 with the same CPU/RAM.

EDIT:

Raspberry Pi 2 has Cortex 7 and 1GB of RAM too, and it's like $34.

Banana Pi also, and it's around $31.

24
tuananh 5 hours ago 1 reply      
seems too expensive for the spec!
25
chx 5 hours ago 0 replies      
The BattPi Kickstarter didn't succeed last year. This is better how? Also, the PINE64 has a built in battery charger. The ODROID-C0 has a battery option too.
27
For What It's Worth: A Review of Wu-Tang Clan's Once Upon a Time in Shaolin dancohen.org
169 points by tintinnabula  15 hours ago   65 comments top 13
1
roywiggins 13 hours ago 2 replies      
> This is like someone having the scepter of an Egyptian king

The point of a scepter is that you can wave it around in front of your subjects, not leave it in a vault somewhere all the time. It's more like the actual grave goods the Egyptian kings were buried with, maybe.

> Sol LeWitt is an unusual artist in that he rarely painted, drew, or sculpted the art you see by him. Instead, he wrote out instructions for artwork, and then left it to constructors, often art students, museum curators, or others, to do the actual work of fabrication. LeWitt liked to be a recipe writer, not a chef.

So. Algorithmic art, except executed by humans instead of the traditional computer. "calculate z_{n+1} = z_n+1 for each point repeatedly; color it black if it converges..."

2
blaze33 6 hours ago 1 reply      
In France we have a legal definition of what constitutes an original work of art. For instance, you can produce up to 8 original copies of a sculpture; that's art. Wanna sell 9 copies? You're no longer an artist but an artisan. That also applies to furniture. I couldn't track down what the actual law says; there are many exemptions and edge cases, but basically that's the idea.

Here, this album fits the criteria for being numbered 1/1.

3
keypusher 9 hours ago 2 replies      
Of all the people that could have bought this album, the fact that it was Martin Shkreli continues to amaze me.
4
im3w1l 10 hours ago 1 reply      
If I had it, I would use it to entice famous people to have tea with me.
5
beloch 6 hours ago 2 replies      
Perhaps this album just wasn't very good, and Wu-Tang (or their managers) realized they could make more money by boosting their fame with this stunt release than they would by actually releasing a crappy album.
6
keithpeter 5 hours ago 0 replies      
The OA has a partial quote from Ellsworth Kelly taken from the New York Times obituary [1]. Below is the full quote, which I found useful.

>> "I think what we all want from art is a sense of fixity, a sense of opposing the chaos of daily living," he said. "This is an illusion, of course. What I've tried to capture is the reality of flux, to keep art an open, incomplete situation, to get at the rapture of seeing." <<

[Perhaps the GI Bill at the end of the second world war provides us with an idea of what could happen if we had a basic income.]

[1] http://www.nytimes.com/2015/12/28/arts/ellsworth-kelly-artis...

7
BWStearns 11 hours ago 1 reply      
I just looked at HN right after pulling up Wu-Tang on Spotify. I realize that it doesn't fundamentally add much to note that; it was just a fun coincidence and I thought I'd share.

It is fun to think about the meta-art of manufactured scarcity. It's fun trying to articulate a rigorous reason for the value difference of Wu-Tang making an album that only one person will get to hear versus me (a decidedly untalented non-musician) making one, when they both sound exactly the same to all of us (unless that dick Shkreli is reading).

That said I would trade all the idle but-what-is-value-really-man musing for Shaolin monks or an unscrupulous Fed to exfiltrate and share the album soonish.

8
fluffysquirrel 41 minutes ago 0 replies      
Which apparently has nothing to do with Xiaolin Wu's algorithm.
9
Artoemius 12 hours ago 3 replies      
People are infinitely fascinated by scarcity.
10
bwilliams18 13 hours ago 0 replies      
I had the pleasure of spending a week at MassMoCA a few years ago; it's a unique institution and a treat to visit.
11
ComodoHacker 2 hours ago 0 replies      
The site is down.
12
recivorecivo 12 hours ago 4 replies      
If you had no ego, you would just say Mandelbrot set. If Mandelbrot had no ego, he would have just called it "calculate z_{n+1} = z_n^2 + c for each point repeatedly; color it black if it doesn't diverge...".

Ponder this. Without ego, there is no judgement. And no judgement of those who judge. The cycle breaks.

13
mirimir 10 hours ago 3 replies      
> Then, in one of 2015's greatest moments of schadenfreude, especially for those who care about the widespread availability of quality healthcare and hip hop, Shkreli was arrested by the FBI for fraud. Alas, the FBI left Once Upon a Time in Shaolin in Shkreli's New York apartment.

So why doesn't the FBI take the bloody thing, and auction it? They sold DPR's Bitcoin, no?

28
NASA to begin historic new era of X-Planes nasa.gov
134 points by astdb  15 hours ago   43 comments top 10
1
jgeada 5 hours ago 2 replies      
All really interesting & immediately commercially useful. So why aren't these research projects being funded by Boeing, Lockheed etc? Why are we using NASA as the R&D division of commercial companies?

Shouldn't NASA's role be more blue-sky research for things we don't know yet are feasible or possible?

2
rtpg 10 hours ago 2 replies      
This makes me think of that idlewords talk[0], where the intro talks about the failure of the Concorde. The fact that you could fly to NY in 3 hours instead of 7 didn't make much of a difference, because with the airport travel time included, you're still going to end up losing a day...

Though here they seem to be focusing on effectiveness rather than speed, so that's good. Just interesting to think about the fact that faster planes are only useful at this point if they're much, much faster.

[0]:http://idlewords.com/talks/web_design_first_100_years.htm

3
watersb 12 hours ago 5 replies      
NASA's previous X-Plane initiative seemed to end without making any substantial change to civil (non-military) aviation.

We could really use small jets out here in sparsely-populated Western USA. Eclipse Aviation got very close, then ran out of money. How much money would be required to start them up again?

We need new engines. My 1966 Cessna 172 required leaded AV gas, which is as rare -- and as damaging -- as the tears from a weeping unicorn.

Why invest in supersonic transport? We need low-end disruption, not high-end incremental improvements.

4
maaku 12 hours ago 0 replies      
I've been following this program since the beginning and I'm very excited. This could be the future of high-speed air travel -- because it could reverse the laws against overland supersonic flight.
5
razzaj 5 hours ago 0 replies      
As I look at the rendering of this plane, all I could think of was "gee, this eerily looks like the SX from Blake & Mortimer".

http://images.gibertjoseph.com/media/catalog/product/cache/1...

6
erikb 2 hours ago 1 reply      
What is an X-Plane? I assumed something like a Star Wars X-Wing, but it doesn't look like it at all.
7
trendnet 10 hours ago 1 reply      
I thought NASA bought X-Plane (the flight simulator from Laminar Research) to revitalize it, as Lockheed Martin did with Microsoft Flight Simulator. Oh well...
8
ndesaulniers 11 hours ago 1 reply      
Pure black screen on mobile
9
rdiddly 6 hours ago 0 replies      
Looks like they've got some x-planing to do.
10
KKKKkkkk1 1 hour ago 1 reply      
NASA's original mission was to put an American on the moon. Fifty years later, it's still going strong, churning out projects to justify its existence. Perhaps it's time for the US government to let the likes of Milner and Hawking take center stage.
29
The impact of Prince's death on Wikipedia wikimedia.org
244 points by The_ed17  21 hours ago   90 comments top 10
1
semi-extrinsic 20 hours ago 2 replies      
For others who were left scratching their heads at what exactly this pop-sci-explained PoolCounter mechanism actually is:

https://wikitech.wikimedia.org/wiki/PoolCounter

TL;DR:

It's a limiter on how many workers start rendering the new page version when the old page version in cache has been invalidated.
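For readers who want the shape of that mechanism, here is a rough Python sketch of the idea; it is not Wikimedia's actual PoolCounter code, and the semaphore limit, cache, and render function are placeholders:

    # Cap how many workers may re-render a just-invalidated page;
    # everyone else serves the stale copy instead of piling onto the parser.
    import threading

    MAX_RENDERERS = 2
    _render_slots = threading.BoundedSemaphore(MAX_RENDERERS)
    _stale_cache = {}   # page title -> last rendered HTML (possibly outdated)

    def get_page(title, render):
        if _render_slots.acquire(blocking=False):
            try:
                html = render(title)        # expensive re-parse of the article
                _stale_cache[title] = html
                return html
            finally:
                _render_slots.release()
        # Too many renders already in flight: fall back to the stale version.
        return _stale_cache.get(title, "please retry shortly")

With a cap like this, a Prince-sized surge triggers at most MAX_RENDERERS re-parses at a time while everyone else keeps getting the cached page.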

2
lordnacho 8 hours ago 2 replies      
How are Wikipedia articles kept consistent with each other? Say someone like Prince dies. His page will instantly change, seemingly while his portrait is still in the sky and the cannon fires.

But with certain people there's a variety of connected items that need referential integrity. For instance, I can imagine Prince being on one of those lists (e.g. highest-grossing) that use bold text for still-living artists. Office holders need to be moved from "incumbent" to a box with dates, and the new incumbent needs to be updated. And then there are text snippets in the present tense ("Prince and David Bowie are among the greatest living artists").

And then there's the corresponding pages in other languages.

How's it done?

3
Buge 20 hours ago 5 replies      
Interesting how in the graph it looks like some people found out about 25 minutes before the news broke more publicly.
4
JBReefer 20 hours ago 2 replies      
This is so impressive, to see behind the curtain of what has become the central repository of humanity's knowledge, during a moment of loss of one of humanity's greats.
5
chris_wot 20 hours ago 3 replies      
I don't think WMF staff are credited enough for the work they do in keeping Wikipedia running. They seriously know how to scale; I think the only ones better than them are honestly Facebook and Twitter!
6
cmdrfred 20 hours ago 2 replies      
What happened at 7:15?
7
yeukhon 19 hours ago 4 replies      
They mentioned 5M views within 24 hours of Michael Jackson's death. With over 3B Internet users out there, I am actually a little surprised how small the spike was. Did they only count English Wikipedia? Even so, I am quite surprised. I would expect 10-20M at least. Similarly, many young people like myself have never heard of Prince; I had to look him up to find out who he truly was.
8
EugeneOZ 19 hours ago 2 replies      
Even at peak it's just ~800 hits per second - it shows how irrelevant the C10k problem is (yes, I know it's not exactly about hits per second, but still).
9
xrstf 14 hours ago 0 replies      
Finally a replacement for "Site got slashdotted": "Site got Prince'd". I like it.
10
tgb 20 hours ago 4 replies      
"He was ... known for, among many other things, ... a performance at Super Bowl XLI in a raining downpour in front of over a hundred million people."

Typo and/or I call bullshit.

30
Bots won't replace apps, only better apps will replace apps dangrover.com
223 points by rmason  20 hours ago   50 comments top 19
1
redmaverick 1 hour ago 1 reply      
My ideal interaction:

1. Open Facebook Messenger

2. me: I want a veggie pizza with Jalapeno topping

3. @megabot --> @pizza_bot: "Ordering a Jalapeno pizza. Choose a brand from below: a) Papa Johns b) Dominos c) Pizzahut"

4. me: Papa Johns

5. @megabot --> @pizza_bot: Do you want anything else to go with that?

6. me: 1 garlic dip + Pepsi 500 ml

7. @megabot --> @pizza_bot: "Your order will be ready and will be delivered to the address. Please confirm"

8. me: confirm. end pizza_bot.

9. me: I want to go to downtown.

10. @megabot --> @taxi_bot: Shows map. Enter starting Location.

11. me: current location.

12. @megabot --> @taxi_bot: Your driver will be arriving in 4 mins. via a) Uber b) Lyft

13. me: Uber

14. me: confirm. end taxi_bot

15. me: I want to watch Spiderman 4 today evening.

16. @megabot --> @movie_bot: Spiderman 4 is playing in 4 theatres close to you. Please select from the following theatres. a) Sundance b) Rainbow c) AMC Also, the imdb rating is 9.4/10 and Rotten Tomato Rating is 7/10.

17. me: Rainbow

18. @megabot --> @movie_bot: Please select the show times: a)7:30 pm b) 10:30 pm

19. me: 7:30 pm. confirm. end movie_bot

20. me: I want groceries delivered via instacart

21. @megabot --> @instacart_bot: blah blah

22. me: I want my lawn mowed next Sunday via task rabbit.

23. @megabot --> @taskrabbit_bot: blah blah

Basically, one interface which provides a seamless experience. Without the bots, I have to either download a bunch of apps or, if they are already downloaded, context-switch between them. Likewise, I would have to search the web a lot and click my way through multiple pages to get what I want.
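A toy sketch of the dispatcher implied by this flow, in Python; the keyword routing and bot names are invented for illustration and are not any real Messenger API:

    # Toy @megabot dispatcher: route a message to a specialised bot by keyword.
    ROUTES = {
        "pizza": "pizza_bot",
        "downtown": "taxi_bot",
        "watch": "movie_bot",
        "groceries": "instacart_bot",
        "lawn": "taskrabbit_bot",
    }

    def megabot(message):
        for keyword, bot in ROUTES.items():
            if keyword in message.lower():
                return "@megabot --> @{}: handling {!r}".format(bot, message)
        return "@megabot: sorry, I don't have a bot for that yet"

    print(megabot("I want a veggie pizza with Jalapeno topping"))
    print(megabot("I want to go to downtown."))

A real dispatcher would of course need slot-filling and confirmation steps, but the routing layer is the part that lets one chat thread stand in for many apps.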

2
Spearchucker 6 hours ago 1 reply      
This guy described the parameters of the perfect phone. So much stuff really hit home - like the (incorrect) assumption that I'm always online. Even in London. Azure on the Underground, anyone? I also don't see myself using Cortana or similar unless it saves me taking gloves off in the cold. I won't install chat apps because they grab ALL my contacts rather than only those (and the metadata) I need at a point in time. Apps, especially chat apps, do not facilitate user control and consent.
3
rounce 3 hours ago 0 replies      
> Designing the UI for a given task around a purely conversational metaphor makes us surrender the full gamut of choices wed otherwise have in representing each facet of the task in the UI and how they are arranged spatially and temporaly (sic).

It was here I succumbed to hypoxia.

4
readams 14 hours ago 5 replies      
The correct answer here is ... web apps. They're just better than chat bots. The only real issue is an easy way to link identity and payment information into the interactions. These chat bot guys always claim their main advantage is not having to install apps. The web browser is already there and installed.
5
brianchu 16 hours ago 1 reply      
Great read. I found particularly insightful the idea that conversational UI is like skeuomorphism - trying to shoehorn analog forms (conversation) into a digital app, bringing along a bunch of details and actions that don't serve any purpose or are worse than the alternatives.
6
djfm 16 hours ago 4 replies      
Finally the non tech people are discovering the power of the command line. The next big thing is bash.
7
usethis 5 hours ago 1 reply      
I wholeheartedly agree. WeChat OA / apps were born out of a need for a simpler, faster and lighter mobile communication channel as opposed to bloated websites and apps. Message bots reduce the barriers to interaction and offer a more focused experience: they simplify payment, reduce data load, remove or simplify account setup, offer better discoverability, etc. They are also easier to add / remove and don't install anything on your OS.

Some of these issues can be solved by the OS: offering a unified notifications center with a better customizable UX, a per channel notification history, and more freedom to define follow-up actions. But unless the OS offers an alternative to apps, the barriers to install an app will remain bigger than installing a message bot, especially for one off communications.

I see an interesting future in a consolidated, block-chained B2C communications app, with a unified API for payments, notifications, and common UI elements. I can easily add contacts, and if I want marketing, I'll install the app or visit their website.

ps. I can recommend the recent discussion around bots on the a16z podcast.

8
dotch 14 hours ago 1 reply      
My takeaway from this article is that the notification center on a phone OS should really take its cues from messaging apps and experiment with providing a more meaningful, thread-based way of displaying notifications, with more information than red bubbles with numbers and a better way to act on them than just launching the associated app.
9
calgoo 16 hours ago 3 replies      
I would prefer a simple app interface with big buttons (on mobile) to do the exact thing I want, rather than having to use some chat menu system. So when I'm in an airport somewhere I can click while walking to order a taxi, check my email, or similar. Just exchange the AI for a simple column of changeable links / buttons to my most used functions, with support for sublists. Then allow me to use a built-in password safe to get the information to me in a simple modal or similar. I don't want 100 apps on my phone, one for each service, but I also don't want a closed interface to how I access the information. Web pages are OK, but most of the time I cannot modify the interface to remove items I don't care about. This leads to time-consuming tasks such as zooming in to be able to click on links, or using interfaces that can change to whatever the developer thinks is better for you.

Could we create standards for things such as bank APIs, travel APIs, store APIs, etc.? I don't like forcing things on people, but could we get some type of collaboration and create a UI where the user gets to create buttons / menu options to access those services directly? Let the user add only the needed options to a button on the home screen called "bank", for example.

I don't need OAuth or similar (I don't want that stuff close to my bank account), but some standard way to authenticate? Then just request my PIN whenever it needs to access a stored password. Or just request the password with the saved username when accessing the service for the first time and then save it in a session, etc.

I guess what I want for my phone is an easily programmable interface where I can set up whatever functions I need to access my information directly, without needing to open any other app. Apps are great, but they don't solve the issues that exist on desktops. How do we organize all these icons? Using menus, desktops with icons, or lists of some sort? Very few systems have come close to solving this issue, and most are probably more experimental than production. I believe the cloud should be APIs instead of HTML and apps. Mobile interfaces should just be scriptable interfaces with some pre-defined tasks and an easy way to download new tasks that others have made. Now, this does sound like apps, but it's more specific: it's more like downloading a script from your bank that accesses your account balance. Then another script for your card balance. Then another script for a transfer, etc. Then you just say where in the interface you want these buttons / links, and done.

Anyway, time to end my long rant about mobile interfaces.

10
dk8996 14 hours ago 1 reply      
This was such a nice read. The author points out a number of small problems that add up to something much larger. I can see how we need some version of something like the Internet for mobile devices -- not mobile web. I think the whole mobile app ecosystem has become a big hindrance for users.
11
JamilD 13 hours ago 0 replies      
The last part of this reminded me of my favorite part of BB10, when I briefly used it: BlackBerry Hub.

It was so freeing to have my texts, Facebook messages, emails, etc in one place. People noticed I responded to messages faster, and I was much more organized than when everything was fragmented into different apps. I just need something like this on the OS-level on iOS.

12
galfarragem 1 hour ago 0 replies      
Is Elixir/Erlang getting trendy because of the rise of bots and chat?
13
dkarapetyan 14 hours ago 0 replies      
Oh so this is why all of a sudden text UI is such a rage. Everyone is trying to copy WeChat's success.
14
thebaer 11 hours ago 0 replies      
Well technically, Android doesn't sort notifications strictly reverse-chronologically, but also within different priority levels that the apps themselves choose (e.g. phone calls and direct messages = always at the top, "Foo just liked Bar's video on FB" = hopefully at the bottom). Not sure if iOS does the same.
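A hypothetical sketch of the ordering described here, assuming notifications are bucketed by an app-chosen priority and sorted newest-first inside each bucket; this is illustrative Python, not actual Android source:

    # Higher-priority buckets first, newest first within each bucket.
    notifications = [
        {"text": "Missed call", "priority": 2, "ts": 100},
        {"text": "Foo just liked Bar's video on FB", "priority": 0, "ts": 180},
        {"text": "Direct message", "priority": 2, "ts": 150},
    ]
    for n in sorted(notifications, key=lambda n: (-n["priority"], -n["ts"])):
        print(n["text"])
    # -> Direct message, Missed call, Foo just liked Bar's video on FB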
15
elorant 5 hours ago 0 replies      
I wonder when the marketing guys will get their heads around this, and then we'll have jobs like bot conversion expert, bot marketing ninja, bot social expert, bot this and bot that.
16
Dwolb 15 hours ago 0 replies      
I think the bigger story with bots will be the aspect of a business that the user doesn't interact with, namely the backend.

As businesses seek to provide their services wherever users are (any platform that can take user input), we'll see businesses start to standardize interfaces between any bot/app/web app and their own fulfillment operations. Then businesses can take an incoming order from any interface and send their product or service out to a user.

It's funny that Amazon started doing this with their own services many years ago.

17
nathancahill 18 hours ago 1 reply      
The nostalgia hit hard with the phone SMS screens.
18
reitanqild 5 hours ago 0 replies      
Now this seems to be a UX designer I'd be happy to listen to.
19
miguelrochefort 10 hours ago 1 reply      
Wrong. A new language will replace apps.

Here's how the world works. We have:

- the actual world (one in total)

- the perceived world (one per agent)

- the ideal world (one per agent)

The goal of each agent is to get the actual world to match their ideal world. To do that, they need to:

1. make their perceived world match the actual world (learn)

2. identify issues with the actual/perceived world in order to define their ideal world (choose)

3. share their ideal world with others (ask)

4. find agents with complementing resources and desires (commit)

5. act on commitments taken to further their ideals (do)

What we need is a communication platform with the above processes baked in. Imagine a big semantic knowledge base holding the perceived world of all agents, as an approximation to the actual world (which exists outside of it). Think of it as an encyclopedia, a mirror of the world. All measurements of the actual world made by humans, machines, or bots will be pushed there. Whenever you browse this knowledge graph, you get to contribute by confirming or opposing facts. Eventually, you will notice a pattern and side with those who agree with you the most. At this point, you can reliably expect those agents' future perceptions to align with yours, and will notice the same for your ideals/wishes. This is when you don't only get to learn about the present/past, but get to decide about the future, shaping your ideals. This is done just like you would describe perceived events, except it takes place in the future. Wishes and predictions are effectively communicated by describing the future (the only difference being that you agree to lose money/resources/reputation/score in exchange for a wish happening, and you agree to lose money/resources/reputation/score if a prediction doesn't happen). Once the knowledge graph knows enough about you, it will be able to identify complementary ideals/reality-deltas with other agents (i.e., "I need a couch" and "I no longer need my couch"). This leads to a contract in which all members agree on a shared future reality. This contract/commitment becomes a prediction, and predictions that involve yourself often become todos/tasks for the user to get done (expressed as the state of reality expected to be met). Then, all people need to do is make the actual world match the predictions communicated by describing the future of the world.
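A toy sketch of that complementary-delta matching in Python; the wish format, verbs, and names here are made up purely for illustration of the idea:

    # Agents state desired facts about the future world; the graph pairs a
    # wish to have something with a wish to be rid of the same thing.
    wishes = [
        ("alice", "have", "couch"),
        ("bob", "shed", "couch"),
        ("carol", "have", "hole in the wall"),
    ]

    def match(wishes):
        wants = {item: agent for agent, verb, item in wishes if verb == "have"}
        offers = {item: agent for agent, verb, item in wishes if verb == "shed"}
        return [(offers[i], wants[i], i) for i in wants.keys() & offers.keys()]

    for giver, taker, item in match(wishes):
        print("{} can transfer {!r} to {}".format(giver, item, taker))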

Basically, all we need is a language and interface that lets people describe the world in static terms (i.e., "I was in New York yesterday", "I am in California today", "I will be in China tomorrow"). You don't want an Uber, you want to be home before supper. You don't want to buy a drill on Amazon, you want a hole in your wall. You don't want to unlock your door, you want to be inside the house. You don't want to share a tweet, you want its content to be seen.

By getting rid of all verbs except state verbs, we significantly simplify the language. You quickly realize that "read a book", "watch a movie", "learn a recipe", "listen to a song", "see a painting", "meet a person" can all be replaced by "know x". When you share/retweet/submit/send/bookmark x, what you actually want is for someone to "know x". The same thing applies when you "send money", "ship a package", "fly to Hawaii", "walk to the drugstore", "invite someone to an event", "order a pizza". They're all just ways to describe "x is at y".

Natural languages are extremely inefficient. We keep repeating the same things, over and over again. We say things that other people have already said before; we say things we have already said before. This is insane. It would be like everyone posting distinct comments on an article instead of upvoting a comment that reflects their thought. Actually, it should not be possible to communicate something that has already been communicated before. There should be no difference between the act of searching for an existing comment/question and coming up with a novel one (i.e., like on Stack Overflow). FAQs are extremely useful, and prevent people from trying to formulate and ask questions that have already been asked and answered. Why not apply that concept to communication as a whole? That's what the knowledge base I mentioned above is for. You no longer have to ask 99% of the questions you would ask otherwise, and you can communicate about yourself by simply agreeing or disagreeing with what has already been said by others.

I could go on, but I see this is going off track. Natural language is fundamentally limited and we must build a new communication platform (and language) that will elevate humanity. It seems impossible that nobody is talking about this.
