hacker news with inline top comments    .. more ..    27 Mar 2014 News
86 points by boolean  1 hour ago   25 comments top 12
1
ljd 36 minutes ago 0 replies
It's interesting to see a country dismantle internet access site by site. It challenges our notions of what we think will always be there.

If your site has a dependency on YouTube, your site isn't going to be functioning in Turkey.

It's like Netflix's Chaos Monkey [0], but in real life.

2
hocaoglv 53 minutes ago 3 replies
After Twitter, this was expected. Just a couple of hours ago, leaked audio recordings were uploaded to YouTube in which "Foreign Minister Ahmet Davutoğlu, National Intelligence Organization (MİT) Undersecretary Hakan Fidan, Foreign Ministry Undersecretary Feridun Sinirlioğlu and Deputy Chief of General Staff Gen. Yaşar Güler are heard discussing possible intervention into Syria and possible reactions from the world"

They were planning to stage artificial attacks from the Syrian border, put the blame on Syria, and strengthen Erdogan's position before the local elections.

3
erichurkman 35 minutes ago 0 replies
I hope enough copies of whatever damning evidence the Turkish government wants to suppress are being disseminated outside of Turkey as well as inside the country.

Long live Sneaker Net.

4
tzaman 6 minutes ago 0 replies
It's sad that one corrupt politician has this much power
5
corkill 38 minutes ago 1 reply
This will just push everyone sharing info to Facebook.

Next move bye bye Facebook...or internet.

Any way to build a mobile app that rotates servers enough to circumvent this kind of blocking?

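One hedged sketch of the rotating-servers idea from the question above: a client that walks a list of mirror endpoints until one answers. The hostnames here are hypothetical placeholders (a real censorship-circumvention client would need much more, e.g. proxies or domain fronting):

```python
# Sketch: try a list of mirror endpoints in turn until one responds.
# The hostnames are hypothetical placeholders, not real mirrors.
import urllib.request

MIRRORS = [
    "https://mirror-a.example.org",
    "https://mirror-b.example.org",
    "https://mirror-c.example.org",
]

def fetch_first_reachable(path, mirrors=MIRRORS, timeout=5):
    """Return the response body from the first reachable mirror, else None."""
    for base in mirrors:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            # Blocked, reset, or unreachable: rotate to the next mirror.
            continue
    return None
```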
6
DickingAround 29 minutes ago 0 replies
Blocking social media sites and expecting it to reduce tension is like turning off TV & radio broadcasting and expecting people to remain in their homes; it's a failure to understand modern systems of control. Orwell doesn't work at all. Huxley is your only hope Turkey :P
7
stephengillie 34 minutes ago 0 replies
At least we have a Redundant Array of Independent Social Networks to help us route around this intentional damage.
8
theverse 28 minutes ago 1 reply
9
BgSpnnrs 22 minutes ago 0 replies
"Earlier, what appeared to be a leaked audio recording of Turkish officials discussing Syria appeared on YouTube."

A bit light on the details there, auntie beeb. This is some pretty damning evidence.

10
MCarusi 23 minutes ago 0 replies
Since the Twitter block was working just so well.
11
PetoU 32 minutes ago 0 replies
and the next revolution comes to...
12
pearjuice 8 minutes ago 0 replies
All my support goes to the Turkish government for trying to disrupt the Western propaganda and the influence of American secret national agencies. It is no secret that the protests and revolutions in Syria, Egypt, Tunis and the like were caused by internal fabrications from American soil so they could use the uproar for settling marionettes and proper satellite states instead of having to deal with nations which are against America and Israel.

Hopefully the Turkish government is successful in its endeavors to protect its citizens from a faux-revolution fed by propaganda easily spread in this digital age (i.e. Youtube and Twitter, Facebook is of high concern too).

The only counter-measure is to shut. it. down.

2
The Mill: It just might work engbloms.se
109 points by dochtman  2 hours ago   21 comments top 10
1
Symmetry 1 hour ago 1 reply
It's really exciting, but here are a few worries I have about their ability to meet their performance claims:

1) I don't see that they'll be able to save much power on their cache hierarchy relative to conventional machines. Sure, backless memory will save them some traffic, but on the other hand they won't be able to take advantage of the sorts of immense optimization resources that Intel has, so that fraction of chip power to performance isn't going away.

2) The bypass network on a conventional out of order chip takes up an amount of power similar to the execution units, and I expect that the Mill's belt will be roughly equivalent.

3) I'm worried on the software front. The differences between how they and LLVM handle pointers are causing them trouble, and porting an OS to the Mill looks to be a pretty complicated business compared to most other architectures. It's certainly not impossible, but it's still a big problem if they're worried about adoption.

All of which is to say, I think the 10x they're talking about is unrealistic. The Mill is full of horribly clever ideas which I'm really excited about and I do think their approach seems workable and advantageous, but I'd expect 3x at most when they've had time to optimize. The structures in a modern CPU that provide out of order execution and the top-level TLB are big and power hungry, but they're not 90% of power use.

If they're going to hit it big they'll probably start out in high-end embedded. Anything where you have a RTOS running on a fast processor, and your set of software is small enough that porting it all isn't a big problem.

Also, the metadata isn't like a None in Python, it's like a Maybe in Haskell! You can string them together and only throw an exception on side effects in a way that makes speculation (and vector operation) in this machine much nicer.

EDIT: Whatever the result of the Mill itself, it contains a large number of astoundingly clever ideas some of which would be useful even without all the other ideas. Like you could drop in Mill load semantics to most in-order processors and you'd have to do something different with how they interact with function calls but it would still be pretty useful.

EDIT2: I may sound pessimistic above, but I would still totally give them money if I were a registered investor. The outside view just says that new processors that try to change lots of things at once fare pretty badly, even if, on the inside view, they have a good story for how they're going to overcome the challenges they'll face.

2
ChuckMcM 27 minutes ago 0 replies
I am really rooting for these folks. After going to a talk on it about this time last year and trying everything I could to pick it apart (they had good answers for all my questions), I felt confident that it's going to be a pretty awesome architecture. Assuming that reducing all of their great ideas to practice doesn't reveal some amazing 'gotcha', I'm hoping to get an eval system.

The things I'm most closely watching are the compiler work, since this was such a huge issue on Itanium (literally a 6x difference in code execution speed just from compiler changes), and putting some structured-data (pointer-chasing) applications through their paces, which is always a good way to flush out memory/CPU bottlenecks.

3
dkhenry 1 hour ago 1 reply
So where do I buy one and test it myself? I love the theory, and some of the claims are awesome, but I am reminded of the Cell-BE and the chatter around it at release time. It wasn't until we got the Cell into the hands of developers that we learned its real limitations. I want a Mill I can write programs for and run benchmarks against. My benchmarks on my bench.
4
dochtman 2 hours ago 1 reply
I wonder what the compilers would be like. If these guys contribute, say, an LLVM backend, that would make it so much easier to support.
5
Brashman 1 hour ago 0 replies
What differentiates the Mill from Itanium?

Also, what are the 2.3x power/performance improvements based on? Is there silicon for this?

6
rayiner 1 hour ago 0 replies
This is a detailed description of the architecture: http://millcomputing.com/topic/introduction-to-the-mill-cpu-....

It describes Mill's approach to specifying inter-instruction dependencies, grouping instructions, and handling variable-latency memory instructions.

7
Xdes 1 hour ago 0 replies
Can't wait to buy a Mill and mess around with it. Hopefully it isn't more expensive than current desktop or server processors.
8
comatose_kid 1 hour ago 0 replies
I worked on a VLIW processor long ago and it had a theoretical peak of 700 MIPS (iirc) back in 2000. It was a neat architecture but required fairly low level knowledge to get the most out of it.
9
DiabloD3 2 hours ago 0 replies
I want to replace every computer in the world with this.
10
WhitneyLand 24 minutes ago 0 replies
Who is this guy? Where can you teach post-doc computer science without ever having taken a course in CS, let alone a degree?

Obviously a degree is not a necessary condition for success and it's always bothered me that people like Michael Faraday had to battle academic and class prejudice before changing the world.

However I don't think it's unreasonable to see a bio of past projects/companies/research papers.

"Despite having taught Computer Science at the graduate and post-doctorate levels, he has no degrees and has never taken a course in Computer Science"

3
We Built a Hacker News for Biotech harlembiospace.com
51 points by owensmp  1 hour ago   17 comments top 11
1
JTon 58 minutes ago 1 reply
Kudos. Not in the biotech space myself, but I've bookmarked for a rainy day. I particularly enjoy the UI. Arrow keys work to navigate posts (comment section). Enter works to launch URL. Sexy.
2
danso 42 minutes ago 0 replies
It's funny...looking at the OP, I suddenly realize how much real estate is wasted on HN and its single column layout...if the OP reduced font-size and a little line-height, it could fit almost as many headlines above-the-fold as HN does, using much less horizontal space.

Maybe I'm old, too familiar with HN, and too resistant to change, but HN's fuck-it-just-fill-the-column-with-a-table layout is comfortable in ways that I've only noticed when doing direct comparisons to HN-like sites.

In regards to the OP, I think that what HN loses in information density, it gains in "zen"...On the front page, there's simply less "conflict"...my mind feels more relaxed at looking at a single flat list...and when I click through to the comments, seeing just a long (and admittedly, too wide) column of comments. With the OP, my mind has to divide itself between scanning the list on the left and whatever may be on the right...and even if the right side is mostly blank (as it is with empty threads), something in my subconscious thinks that something is wrong...it's enough cognitive burden to make the experience not as effortless as it is with HN.

But I may be stretching here...HN works instinctively because I read it enough for its quirks to be instinctive. But the bigger picture is that HN, day in and day out, provides good reading material and almost never disappoints in the comment threads...so any HN-clones, regardless of improved UI, still have that major hill to adoption to climb.

3
pseut 20 minutes ago 0 replies
Probably not something you should care about, but this site doesn't load at all in Konqueror.
4
frozenport 13 minutes ago 0 replies
Make the font size smaller and center the text. Most of the screen is occupied with irrelevant data.
5
bct 34 minutes ago 0 replies
zerop posted a comment with links to many different HN clones here, but it's marked dead. Maybe because it included too many links?
6
micro_cam 25 minutes ago 0 replies
Cool, I hope this takes off. I joined and posted a project a coworker and I are working on.
7
japhyr 37 minutes ago 1 reply
Shameless plug: I just built a Hacker News for Education, and I'd love to get some feedback/ interested users:

http://educatornews.net

PS Also very happy to check out OP's take on building an HN "clone". I am a math and science teacher, and keeping up with current science always makes my classes more interesting and relevant.

8
dakrisht 43 minutes ago 0 replies
Nice work, look forward to adding to the new community.
9
dmix 53 minutes ago 1 reply
Ah, the weekly "hacker news for X".
10
zxexz 48 minutes ago 0 replies
This is amazing, thank you. I've been waiting for something like this for a while now. Hacker News gets some biotech stuff, but not very much.
11
anarchoni 53 minutes ago 2 replies
Any more science-related news aggregators like this?

F*ck it - list all the great aggregators like Hacker News.

4
How to Make GIMP Work More Like Photoshop lifehacker.com
26 points by paulloz  55 minutes ago   8 comments top 6
1
greggman 23 minutes ago 1 reply
Maybe someday the gIMP will even come close to Photoshop. Heck, maybe it is close enough for some people's needs. But I'm not even an artist, and the gIMP doesn't come close to meeting my needs.

"No decent OpenType typography, no layer styles, no smart objects, no dice." (from the comments on the OP)

If all I need to do is crop or scale an image then sure, I might get by with gIMP, though in that case I'd arguably get by better with something simpler than gIMP.

But, I actually use vector layers with layer styles ALL THE TIME. I actually use text layers with layer styles ALL THE TIME. I actually use non destructive adjustment layers ALL THE TIME.

Photoshop layer styles are like CSS. You can declare your styles and then edit vectors or text and the styles apply dynamically.

AFAIK gIMP has no equivalents. Those are not minor features. They're what set Photoshop apart.

2
jordigh 24 minutes ago 0 replies
This reminds me of the endless bug reports we're getting right now about the Octave GUI not being indistinguishable from Matlab. Makes me wish people weren't so inflexible about the tiniest UI differences. It's difficult to please the converts.
3
fidotron 5 minutes ago 0 replies
Krita may have a different aim on paper, but that just looks like a way to avoid conflict with the GIMP devs until it's clear to absolutely everyone that they've replaced it.

One of the real shockers with the GIMP is how badly it plays with Ubuntu's Unity, where the menubar will get emptied whenever you change something in a non-image editor window.

4
nomadcoop 11 minutes ago 0 replies
I haven't tried it myself so can't vouch for it but GimpShop (http://www.gimpshop.com/) fills a similar niche.
5
skrowl 30 minutes ago 1 reply
If you use Windows, consider Paint.NET (http://www.getpaint.net) instead of GIMP. It's much quicker (particularly the 4.0 beta builds) and has a much more Photoshop-like UI out of the box.
6
Dale1 13 minutes ago 0 replies
The only way to get something that works like Photoshop.....

I bet you can guess the answer!

It's a bit like all those people who try to get a PC to run OSX. Just buy a damn Mac for goodness sake!

5
The problem with self-published books startupreader.net
39 points by mijustin  1 hour ago   27 comments top 19
1
spindritf 40 minutes ago 1 reply
First, I have a collection of unread books sitting in a folder called eBooks on my computer.

Second, its hard to know which books are good.

I have had those exact problems since long before ebooks. I still have a pile of unread legacy books. Some were bought a decade ago, others borrowed and never returned (my local library closed down), many inherited, older than me.

Not to mention, new books are being written all the time while older titles remain available. You can read a book a week and you'll still be behind... forever.

This is a direct result of the effort required to really read a book, tens or hundreds of hours. It's much easier to buy than to consume.

Professional reviewers and a recognizable author's name are the traditional solutions to those problems. Buying a book from someone whose blog you like (essentially sampling) is IMHO superior to both. Maybe because I don't share reviewers' tastes.

A better solution is of course always welcome but the challenge isn't new.

2
visakanv 57 minutes ago 1 reply
"But instead of reading books regularly, when I sit down with my tablet I end up reading Zite, Flipboard, and Instapaper."

This isn't a problem with self-published books. I don't have any ebooks at all. I have two full IKEA bookshelves loaded with actual physical books I want to read. But I don't read them as regularly as I'd like to. Why? It's because I don't make time for them, plain and simple. Same reason I don't exercise as much as I'd like to, or go on dates with my wife as much as I'd like to.

You have to set aside time to do reading, or any important task where the payoff isn't immediate (in seconds, like 2048), because otherwise it won't happen because there are so many other things competing for your attention.

3
egypturnash 3 minutes ago 0 replies
I self-pub comics. Weird sci-fi comics about robot ladies with reality problems to be precise.

A major part of my business strategy is going to comic book conventions and sitting behind a table, talking to people who stop there, telling them about my book and possibly exchanging one for some money. Sometimes I'll have someone buy my comic and come back the next day to rave about it. It may be relentlessly physical and retro, but it's working. I feature the URL of my site in my physical books, and let people comment on new pages as I draw them; there's a modest amount of back and forth between them now and then.

I dunno what parts of that can be transferred to tech books, I don't think there's a vibrant network of tech book fairs out there! A book club sounds like a pretty good idea, really.

(If you're gonna be at ECCC in Seattle this weekend, stop by table CC-09 and say hi.)

4
ivan_ah 13 minutes ago 0 replies
I think the idea of a "book discussion page" where readers and the author can interact is very promising.

In the past I've used an etherpad to "live chat" and collaborate on problems with students (e.g. https://piratenpad.de/p/linearalgebra ), but I think a forum dedicated to the book would be more interesting.

It's a win-win situation: the author benefits from receiving feedback from readers, and readers can connect with like-minded people. Excuse me while I go learn how to install discourse with MathJax support ;)

> use drip emails to track your progress through a book,

Yes. This would be awesome as it might push you to read the book, but it would have to be done intelligently to work. Just a nagging reminder prodding you to read won't do it. Maybe receiving a weekly batch of exercises or review questions?

5
mikeash 1 hour ago 2 replies
None of this has anything to do with self-publishing itself.

You have trouble reading ebooks and, I presume, prefer paper? Self-publishing is entirely compatible with offering a paper version of the book.

Books that aren't listed on places like Amazon are troublesome? Good thing it's really easy to get a self-published book on Amazon.

Self-published doesn't have to mean self-everything.

6
j2kun 3 minutes ago 0 replies
A company like Leanpub is in a perfect position to solve the interaction problem: have a dedicated forum for each chapter of each book it publishes.
7
krmmalik 1 hour ago 0 replies
I've been having a similar problem myself. I just don't read as many books as I used to. Part of it is because I just don't want to spend so much time in front of a screen (I use an iPad to read), and the other is because I just can't seem to relax enough to get into reading mode, so reading only happens every other weekend when I can get away from thinking about my work from the day.

I guess the screen problem can be solved by getting a Kindle with e-ink, but getting the right headspace is harder.

I like the idea of a book-club, but the problem is I have a list of books i want to get through personally and so my list may not be the same as the book club. I wouldn't read a book just because it was on this month's reading list in the book club.

The book has to be of interest to me for me to participate, and there's no knowing if interests would align well or not.

8
cschmidt 35 minutes ago 0 replies
I don't know if any of the rest of you remember it, but my favorite book club ever was the Global Business Network (GBN)[1] book club, edited by Stewart Brand [2].

The GBN was a network of "interesting" people, and they would put forward books. Stewart Brand would take these suggestions, add his own, and write very insightful reviews. You would really hear about new, interesting stuff there first. It was a book club in a broadcast sense, where they suggest things to read, rather than a discussion forum.

Sadly, GBN was acquired and people moved on. For a long time, the list of books and reviews was still online, but now all of gbn.com seems to be down.

Maybe that could inspire more of a HN book club, which wouldn't just be "startup books", but interesting stuff in the HN vein.

Edit: [3] is a list of the books for 1988 to 2006, from archive.org

9
Dotnaught 28 minutes ago 0 replies
This is not a new problem. I self-published a sci-fi novel in 2001: http://www.amazon.com/Reflecting-Fires-Thomas-Claburn/dp/073...

The book sold a few hundred copies, thanks to a few reviews on websites and a post on Slashdot. The challenges then are the same today: self-publishers tend to be poor marketers or to not have the time/resources to market effectively; lack of quality reviews and an abundance of pay-to-play review sites that will take your money and do very little for sales; self-published titles are (often justifiably) seen as less worthy than texts backed by an established publisher; and there's an overall shortage of available attention, thanks to the abundance of media options today.

>Tools like Draft, Scrivener, and Penflip have improved the writing, editing and collaboration process.

Technology may make editing easier but it does not improve it. A decent writer won't need a grammar checking algorithm.

>Publishing software like iBooks Author, Leanpub, Softcover, Pressbooks, and Liberio allow authors to easily design, format, and publish their books themselves.

These tools are helpful but there's a reason professional designers exist.

>And e-commerce platforms like Gumroad, Memberful, and Digital Goods Store have solved the payment and distribution problem.

But these problems pale in comparison to getting marketing and attention.

What's more, these issues are the same for other self-published media: apps, music, and videos.

I'd love to have my book or my app (http://blocfall.com/) discussed by a reviews group, but a review in the New York Times or front-page placement in the iTunes App Store would be a lot more helpful.

10
Rafert 1 hour ago 0 replies
Just because a book isn't sold on Amazon it doesn't mean you can't find reviews somewhere else (e.g. http://www.goodreads.com/ ).
11
willaa 9 minutes ago 0 replies
No time for reading is the biggest issue, which is also an excuse for most of us. I like the idea of having a book club with interesting people, but don't you think there are already some meetups out there for that? I joined a "one week a book" meetup myself but never committed.
12
drdeadringer 1 hour ago 0 replies

This was true for me... 10 years ago. Now I have a kindle and all these ebooks zipped onto a device more suited for reading ebooks. And they're getting read.

> It's hard to tell which books are good.

I find this to be true for All The Books; even if I do find/read favorable reviews, I can still mislike a book even if it's not independently published.

13
davidw 1 hour ago 0 replies
This isn't so much the problem with self-published books, as with books that are self-published and more or less "self-sold". I wrote a bit about Authority here: http://blog.liberwriter.com/2013/11/21/nathan-barrys-authori... and think this is a real problem. I don't like not having reviews. There are some books that look interesting to me, like this one: http://www.rachelandrew.co.uk/books/the-profitable-side-proj... where I'm a bit reticent because of the lack of reviews and Amazon integration, combined with a higher price than I'm used to.

I don't think that everyone selling their own books on their own sites is a stable equilibrium.

14
DanielBMarkham 1 hour ago 0 replies
I've been a self-publisher for a couple of years now. (My current e-book is about backlogs, or to-do lists. Shameless plug: http://tiny-giant-books.com/backlogs.htm )

I've also been a freelance writer since my teens, having been published in books, magazines, newspapers, and weeklies. And I read like heck. So I know this arena.

There are a few more problems the author does not mention. The #1 problem on the internet is that everybody wants to be an author, but nobody wants to be an editor. You can click a button and poof! You're published. Back when you had to send it to somebody, and sometimes got biting criticism, you tended to think more carefully about what you wrote. As a self-publisher, you have to be extremely paranoid about quality. And even then, what you don't see, you don't see. It's hard/impossible to replace a good professional editorial staff.

The second problem is that the physicality of books is different from e-books. Don't get me wrong: I love e-books. But for certain things, like learning a new complex skill, I want to have physical books scattered around the office opened to certain passages with other passages dog-eared or bookmarked. E-books just ain't the same.

I also wonder if we're not selling a shitload of e-books that nobody is ever getting around to reading. A lot of people buy books (or e-books) for their imagined experience -- not for the real one. When you have physical books, you can see when your stack is growing large. With e-books, it's very easy to over-consume.

Not sure a book club would help with that, but it would solve the problems the writer mentions. Perhaps some other features could be added to the group?

(I also need to mention that years ago I started a website for startups/hackers to recommend and share books. The idea was something like a social network, but instead of posting or sharing links or status updates, you posted new book titles and shared them. http://hn-books.com)

15
coreymaass 1 hour ago 1 reply
I could see authors creating a forum for their own book, and then leading the discussion. I'd love the chance to ask questions to an author while reading their book!
16
einhverfr 1 hour ago 0 replies
This is specifically a problem with self-published e-books with no print copy, right?

There's no reason you can't publish a hard copy and sell it on Amazon (I did that, on a topic relatively off-topic for HN).

17
nephics 1 hour ago 1 reply
A book club for startups. It could be interesting, but how would it solve the problem of prioritising what to read, if everybody is expected to read the same?
18
sivaku98 1 hour ago 0 replies
Nice one
19
sivaku98 1 hour ago 0 replies
23 points by mikeevans  50 minutes ago   3 comments top 2
1
ddorian43 5 minutes ago 1 reply
Isn't the MySQL test suite no longer open source/available? I thought Oracle decided to hide it?
2
42 points by liuliu  2 hours ago   6 comments top 5
1
yuvipanda 1 hour ago 0 replies
Just the samples were moved to CC, which makes more sense. Code still is 3 clause BSD, and always has been.
2
stefantalpalaru 1 hour ago 1 reply
From http://wiki.creativecommons.org/FAQ#Can_I_apply_a_Creative_C... :

> We recommend against using Creative Commons licenses for software.

3
caio1982 1 hour ago 0 replies
This older post explains better why we should care about it (or not): http://libccv.org/post/an-elephant-in-the-room/
4
jkrippy 26 minutes ago 0 replies
Here's a link to their documentation for the algorithm: http://libccv.org/doc/doc-convnet/

Had to dig for a few minutes and wanted to help others find it.

5
bsaul 12 minutes ago 0 replies
Is there any image recognition software that works with textured 3D models as input for training data?
42 points by feelthepain  2 hours ago   39 comments top 14
1
chimeracoder 21 minutes ago 0 replies
This doesn't surprise me at all.

Even if the effect described in the article is not conscious ("let me pay this person less, since he's a foreigner[0]"), I have no trouble seeing how it could happen (unconsciously), based on my own personal experiences.

My first name is very difficult for Americans to pronounce because (A) it is not phonetic; and (B) even a phonetic spelling would include sounds not common in English.

My last name is actually an Anglicisation of the original family name, so it's entirely phonetic. Even still people have trouble saying it. That one confuses me to this day.

The impact on my life of having a tough-to-pronounce name is usually subtle, but it's noticeable in minor ways. I can imagine that these would add up in the long run (perhaps not for every individual, but in the aggregate).

[0] That said, let's not discount the impact of being able to identify a person's race by his/her name (the original Freakonomics book goes into this a lot). I have a friend who has a very obviously Chinese name, and he started putting "US Citizen" at the top of his resume, because so many people thought that he needed visa sponsorship, even though he was born in the US.

Another friend of mine is white, but his surname also happens to be very common for Koreans. He's had a few funny interactions where he walked into an interview and the person was momentarily surprised to see a blonde, white man instead of an Asian interviewee.

2
jmnicolas 0 minutes ago 0 replies
Or they earned more because changing their name might be a sign of strong motivation to fit-in and succeed.
3
allochthon 4 minutes ago 0 replies
The article deals with immigrants to the US trying to fit in during the 1930s, a time when great pressure was brought to bear on people to conform to a narrow, archetypal notion of Americanness, and appreciation of diversity was far from people's minds. I'm hoping that in 2014 incremental progress is being made in a direction in which there will be less and less pressure to change one's name in order to fit in and advance economically.
4
jimbobimbo 10 minutes ago 0 replies
Whenever people ask me how I pronounce my last name, I answer "I don't".

Even my compatriots have problems spelling it properly; in the US I had my credit card re-issued three times due to misspellings, and Amex doesn't allow as many characters on their card, so I mix my Americanized first name with my original last name. The only benefit of keeping it as-is in the US is when someone tries to call you in a crowd - "Mr. erhm... ugh... hmmm..." - is usually me. :)

5
raverbashing 1 hour ago 4 replies
And sometimes "Americanized" means "the guy at the immigration counter didn't know how to spell their name so he put the closest thing there"

If you knew how to spell Matthäus, good, otherwise it was Matthew

"Over half of Russian migrants Americanised their names; only 4% of Irish migrants did so."

Not surprising, and I would bet the 4% consisted of Siobhans, Padraigs, etc

6
JamilD 1 hour ago 6 replies
Back in the 90s, my uncle couldn't find a job as a software engineer in Texas - he wouldn't even get an interview, despite having graduated from a top-tier American university.

After changing his first name on his resume from a very Muslim-sounding name to an "Americanized" one, that changed completely.

Things are changing now, but I wouldn't be surprised if this bias is, at least to some extent, still evident.

7
xplorer 58 minutes ago 0 replies
That's actually what happens in Canada. In Quebec even if you're a native French speaker ( the official language in Quebec ) but you happen to have a Muslim name or a Chinese name you will suffer from discrimination... And this is a fact... it's sometimes hidden behind an excuse that the person didn't meet the criteria or didn't have enough experience.

In Quebec if you don't have an American name or a French name they might not read your resume.

8
tomrod 1 hour ago 1 reply
Small issue with the study writeup: the emigration pool from each country was not homogeneous. Unless the authors addressed that within the study, I'm hesitant to believe the outcomes. The outcome is wages from occupation, and occupational skills may themselves have been the impetus for moving from a home country.
9
drdeadringer 57 minutes ago 0 replies
Asimov's short "Spell my name with an 'S'" addresses this. The title says it all.

More recently, I've read one or two articles regarding engineers having to add "Mister" to their resumes, with dishearteningly profitable results.

10
disputin 36 minutes ago 0 replies
Eastern Europeans in London do this. Andrzej becomes Andrew, Wojciech becomes Voytek, names beginning with Sz drop the z. People working in coffee shops often change their name after working there a while.
11
harmonicon 36 minutes ago 0 replies
So does this mean I should change my first name? My first name is one syllable and it does not involve anything like tongue clicking or rolling. But many people still find it hard to say. The name starts with a J and ends with an N, and I think people sabotage themselves by always trying to fit a "John" in there.
12
kievins 44 minutes ago 0 replies
This reminded me of Commodore founder Jack Tramiel who originally was called Jacek Trzmiel. http://en.wikipedia.org/wiki/Jack_Tramiel
13
bluedino 41 minutes ago 0 replies
This happens today in the US, especially with African-Americans with 'unique' names.

'Charles Taylor' is going to get more callbacks than his sister 'LaQuashandreka Taylor'.

14
kingmanaz 1 hour ago 0 replies
When in Rome do as the Romans do.
9
Why doesn't GCC optimize a*a*a*a*a*a to (a*a*a)*(a*a*a)? stackoverflow.com
133 points by Doublon  5 hours ago   52 comments top 10
1
tikhonj 4 hours ago 5 replies
Because, of course, floating point addition and multiplication are not associative. This turns out to be surprisingly easy to demonstrate:

    0.1 + (0.2 + 0.3) = 0.6
    0.1 + 0.2 + 0.3   = 0.6000000000000001
and the same for multiplication:

    0.1 * 0.2 * 0.3   = 6.000000000000001e-3
    0.1 * (0.2 * 0.3) = 6.0e-3
It actually isn't "surprising" if you understand how the format works. It essentially uses scientific notation but in binary, with a set number of bits for both the mantissa and the exponent as well as a few changes thrown in for better behavior at its limits (like denormalization). This means that it can't directly express numbers which are very easy to write in decimal form, like 0.1, just like we can't express 1/3 as a finite decimal. It's designed to manage this as well as possible with the small number of bits at its disposal, but we still inevitably run into these issues.

Of course, most programmers only have a vague idea of how floating point numbers work. (I'm certainly among them!) It's very easy to run into problems. And even with a better understanding of the format, it's still very difficult to predict exactly what will happen in more complex expressions.

A really cool aside is that there are some relatively new toys we can use to model floating point numbers in interesting ways. In particular, several SMT solvers including Z3[1] now support a "theory of floating point" which lets us exhaustively verify and analyze programs that use floating point numbers. I haven't seen any applications taking advantage of this directly, but I personally find it very exciting and will probably try using it for debugging the next time I have to work with numeric code.

A little while back, there was an article about how you can test floating point functions by enumerating every single 32-bit float. This is a great way of thinking! However, people were right to point out that this does not really scale when you have more than one float input or if you want to talk about doubles. This is why SMT-solver support for floating point numbers is so exciting: it makes this sort of approach practical even for programs that use lots of doubles. So you can test every single double, or every single pair of doubles, or more, just by being clever about how you do it.

I haven't tried using the floating point theories, so I have no idea how they scale. However, I suspect they are not significantly worse than normal bitvectors (ie signed/unsigned integers). And those scale really well to larger sizes or multiple variables. Assuming the FP support scales even a fraction as well, this should be enough to practically verify pretty non-trivial functions!
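The non-associativity above is easy to reproduce in a few lines of Python; any language using IEEE-754 doubles (including C on the platforms GCC targets) behaves identically:

```python
# IEEE-754 doubles are rounded after every single operation, so the
# grouping of operands changes the result.
add_lr = (0.1 + 0.2) + 0.3   # how C evaluates 0.1 + 0.2 + 0.3
add_rl = 0.1 + (0.2 + 0.3)
print(add_lr)   # 0.6000000000000001
print(add_rl)   # 0.6

mul_lr = (0.1 * 0.2) * 0.3
mul_rl = 0.1 * (0.2 * 0.3)
print(mul_lr == mul_rl)   # False: the two groupings round differently

# This is exactly why GCC won't rewrite a*a*a*a*a*a as (a*a*a)*(a*a*a)
# unless you opt in with -funsafe-math-optimizations (implied by
# -ffast-math): the rewrite can change the rounded result.
```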

2
octo_t 4 hours ago 3 replies
Basically:

floating point is hard. Programmers get it wrong; compilers get it right.

(normally).

3
sheetjs 29 minutes ago 0 replies
Always a great read: "What Every Computer Scientist Should Know About Floating-Point Arithmetic"

4
Khaine 4 hours ago 1 reply
5
d23 1 hour ago 0 replies
Random sidenote, but I've never seen an SO answer be upvoted in realtime until now. Didn't even know they programmed such functionality.
6
willvarfar 5 hours ago 2 replies
7
TeMPOraL 2 hours ago 2 replies
One thing I always wondered about is why are we using floating point arithmetic at all, instead of fixed point math with explicitly specified ranges (say, "here I need 20 bits for the integer part and 44 for the fractional part")? What is the practical value of having a floating point that would justify dealing with all that complexity and conceptual problems they introduce?
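The trade-off is easy to see in a toy implementation. Below is a minimal sketch of the Q20.44 format proposed in the comment (the 20/44 split is the commenter's example, not a standard): fixed point gives uniform absolute precision inside one hard window, while floating point gives roughly constant relative precision across hundreds of orders of magnitude because every value carries its own exponent.

```python
# Minimal Q20.44 fixed-point sketch: values are plain integers scaled
# by 2**44. (Q20.44 is the split proposed in the comment above.)
FRAC_BITS = 44
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    return n / SCALE

def fixed_mul(a: int, b: int) -> int:
    # The raw product carries 88 fractional bits; shift back down to 44,
    # discarding the low bits.
    return (a * b) >> FRAC_BITS

# Fine for mid-range values:
print(from_fixed(fixed_mul(to_fixed(3.25), to_fixed(2.0))))  # 6.5

# But the window is hard: anything smaller than 2**-44 underflows to
# zero, and in a real 64-bit register the integer part overflows past
# 20 bits. A float would have represented both gracefully.
print(from_fixed(to_fixed(1e-15)))  # 0.0 -- the value is gone
```

In effect, the float exponent field automates exactly this bookkeeping: instead of the programmer picking one scale for the whole program, every value carries its own.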
8
username42 3 hours ago 2 replies
The non-associativity of addition is obvious, but for multiplication, I understand why it does not always give the same answer, but I do not see why a change of order could change the accuracy.
9
pygy_ 4 hours ago 2 replies
I already knew about the lack of associativity of IEEE floats, but TIL that pow(x,n) is more precise than chained multiplications.

Good to know.
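The usual explanation: evaluating x*x*x*x*x*x left to right rounds five times, while a good libm pow introduces roughly one rounding. Quality varies between libm implementations, so this is a tendency rather than a guarantee; you can measure it for yourself by comparing both against exact rational arithmetic. A hedged Python sketch:

```python
from fractions import Fraction

def abs_err(approx: float, exact: Fraction) -> Fraction:
    # Fraction(float) converts exactly, so this measures the true error.
    return abs(Fraction(approx) - exact)

x = 0.7
exact = Fraction(x) ** 6         # exact rational value of x**6

chained = x * x * x * x * x * x  # up to five roundings accumulate
powed = x ** 6                   # delegates to the platform's pow()

print(abs_err(chained, exact))
print(abs_err(powed, exact))
```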

10
ww2 2 hours ago 2 replies
How would a functional language compiler deal with this case? like GHC?
18 points by jambo  1 hour ago   1 comment top
1
jambo 53 minutes ago 0 replies
Posting this here because I thought the many people who posted here, on github, and on twitter might want to know that there is a scholarship fund being started in Jim's name to help pay for CS education. The fund is being administered by the Cincinnati Scholarship Foundation. If you're a ruby developer, you know who Jim is, and you use his work whenever you type rake or use Flexmock.

11
Spatial Data Structures for Better Map Interactions seatgeek.com
14 points by efkv  49 minutes ago   discuss
12
You will become who you choose to be around kevinchau.me
11 points by kevinchau  39 minutes ago   9 comments top 6
1
steven777400 18 minutes ago 1 reply
Although I have to accept that the premise is generally true, it is also a little concerning. Friends, close friends, seem like they're "supposed" to be a bond like family: someone you're always there for in good times and bad.

Instead, philosophies like the one espoused in this article transform that deep friendship into a friendship of "utility". It kind of reminds me of the political marriages sometimes considered typical of the upper classes in various historical societies: when "love" may not even be a factor in a marriage.

I have a friend that has said similar things to this article. It makes me wonder: are we friends because we've been through a lot and share that bond, or is he my friend because I present a certain "utility" to his success? If the latter, does that in turn mean our friendship is discarded as soon as I fail to present that utility?

2
candybar 6 minutes ago 0 replies
I think the causation is at least partially the other way around - people you choose to spend time with reflect who you are.

When you try to game the correlation by forming friendships aspirationally, this largely breaks down. It's much more important to come to terms with who you are than to try to become more like someone else. When you're trying to be X, you're not X, you're just someone who's trying to be X.

3
kevando 3 minutes ago 1 reply
This is so true and also helps explain how echo chambers develop. I always try to change my perspective on something big at least once a month. Last month I disabled flash and all my Chrome extensions. Holy cow, the default internet is weird!

I build web sites, so this helped me see the way certain people (my dad) experience the internet.

4
aaronchall 4 minutes ago 0 replies
I like the entrepreneurship story, as well as the overall message.
5
robotic 17 minutes ago 0 replies
Surround yourself with world-class Olympians - become a world-class Olympian.
6
stronglikedan 7 minutes ago 0 replies
This is too generalized. There are born leaders and there are born followers. Followers are influenced by those around them, while leaders forge their own path no matter what. However, a well rounded person knows when to adapt. They lead when necessary, and take instructions when required. There are some people who will never become like those around them, but will instead shape those around them to their own image.
15 points by scottoreilly  1 hour ago   4 comments top
1
frade33 46 minutes ago 2 replies
They say I am a naysayer, so pardon me, but may I know who your 'paid' audience is? And have you ever wondered why people don't like Yahoo! widgets? Yet you have built an entire startup around them. Remember iGoogle? It didn't really work well either.

Apart from this, I am loving the website (UI/UX), and it's the only reason I am sticking around, and perhaps get addicted to it. :)

11 points by matevzmihalic  37 minutes ago   3 comments top 2
1
napoleoncomplex 3 minutes ago 0 replies
Brilliant! So simple.

I'm still quietly rooting for an E-Ink resurgence. Platforms like this and the E-Ink phones announced recently are great to see, something about E-Ink's simplicity is just so damn charming.

2
nodata 17 minutes ago 1 reply
Sometimes the simplest things are the best. Good job!
15
18 points by davexunit  1 hour ago   1 comment top
1
cottonseed 10 minutes ago 0 replies
I'm very excited for bunnie's open laptop.

http://www.bunniestudios.com/blog/?p=3597

16
Sparkfun offering discounts on Arduino day sparkfun.com
41 points by erre  3 hours ago   30 comments top 7
1
Eduardo3rd 2 hours ago 2 replies
$3 for a microcontroller as powerful as the Pro is absolutely insane. I remember paying 10x that price for some components for an early-generation Arduino back in college 5 years ago. Eventually I'm sure that $3 will be the normal retail price for these boards, at which point I think we will start to see some amazingly disruptive disposable electronics emerge.

Isn't it great to live in the future?

2
danielweber 1 hour ago 6 replies
As a n00b, what's the minimum I need to have to play with an Arduino? Will generally any Linux / Windows / Mac computer be able to run the software and connect to the hardware I need to program it?
3
austinz 58 minutes ago 4 replies
Is there a toolchain I can use to compile and load plain C onto Arduino boards? I'm not too familiar with the development environment, although I'd like to try some stuff out with the hardware I have.
4
lsiebert 7 minutes ago 0 replies
You can get an Uno clone from China for $9-10 US plus free shipping.
5
tomswartz07 1 hour ago 1 reply
I have an old R1 Uno, and I've always been interested in the smaller form factor devices.

How exactly are devices like the Pro and Mini programmed? I've never been able to get a straight answer.

6
Yhippa 2 hours ago 2 replies
Any recommendations for what to get if you're looking to get into developing on the Arduino?
7
antonio0 1 hour ago 1 reply
US only?
17
Btapp.js: BitTorrent in the browser btappjs.com
62 points by AndyBaker  4 hours ago   17 comments top 9
1
STRML 2 hours ago 0 replies
Bizarrely, their first example app - http://www.paddleover.com - gives us this message:

    We currently only support OS X Lion.
    Enter your email below and we'll let you know when
    support is available.
Could there be a more user-hostile user-agent detection script?

In any case, it's not a JS app; it's a plugin.

2
captainmuon 1 hour ago 0 replies
Kinda unfortunate that browsers don't allow general socket access.

The browser could ask for permission (and possibly disable access to cookies or saved credentials for this tab), and it would be pretty safe. You can do that with a signed Java applet, and I believe it was possible with Flash and a couple other techniques, but they have all been killed or are on the way out.

Imagine what you could do... BitTorrent clients, something like Popcorn Time, anonymous P2P, anonymous instant messaging, Tor in the browser... or, less nefariously, mail clients, mashups... all in simple HTML files, hosted anonymously on a free webhoster. You can build awesome stuff, but can't be held accountable via your domain name, etc.

puts tinfoil hat on The paranoid part of me thinks the restriction is on purpose, to prevent this kind of app in the browser. Apple, Google & co. control the walled gardens of their App stores. They don't control the "open" web, but they have subtly pushed it in a direction where it is very powerful, but has very specific weaknesses.

3
ddalex 2 hours ago 2 replies
So you need to install the application to make it work. Presumably, they install an HTTP API-driven binary on the system, and then they connect to it from your browser.

Not impressed.

4
RRRA 1 hour ago 0 replies
A pluginless distributed P2P over WebRTC would be interesting!

The whole trust could be built client side and the servers used only for handshaking clients. We could then imagine clients that are aware of multiple servers and distribute both the client webApp & server on different URLs and the whole network could somehow connect in a yet to be defined network structure for efficient traversal.

The one part missing is a W3C standard allowing web-of-trust signing of the packaged webApp, Debian-keyring style, to be able to host it anywhere and still trust some set of developers.

5
KaiserPro 1 hour ago 0 replies
It's not a torrent client in the browser, it's an overly complex interface API.

Why you wouldn't just use the transmission API is beyond me.

6
DieBuche 3 hours ago 1 reply
This requires the installation of some plugin to actually work. They made it sound like it was JS only.
7
malokai 39 minutes ago 0 replies
Here's another app for torrents: http://jstorrent.com. It is a torrent client itself, no outside client needed, though proprietary.
8
nashashmi 2 hours ago 0 replies
This came up a few months ago on HN under a different name. BTapps is a js torrent library that you can use to integrate into your apps or build something on top of it. But so far I have not been able to figure out how to use it.
9
oskarhane 3 hours ago 2 replies
They are terrible at explaining what it is.

Can I download torrents directly in the browser? Is it a UI to some underlying service? Can I seed with pure JS?

18
Depackaging the Nintendo 3DS CPU gaasedelen.blogspot.co.uk
76 points by robin_reala  5 hours ago   25 comments top 8
1
captainmuon 1 hour ago 0 replies
This is incredibly awesome work, and I envy students who can actually take a course in this stuff!

However, it makes me realize in what kind of weirdly antagonistic society we live. In an ideal universe, you could just ask the Nintendo engineers for the chip layouts, or for the boot ROM. It reminds me of a documentary I saw the other day where they tested the chicken content of chicken salad. Again, in an ideal universe, you could just ask the manufacturer. Of course that would be naive in the real world, since manufacturers and consumers have different interests. Most of the time we don't notice anything weird about this. A bit of competition is usually considered a good thing. But every now and then I have a WTF moment... Why are we working against each other? How much productivity do we lose through duplication of effort? Are the losses from this smaller or bigger than the gains from healthy competition? I have no great answer for that...

Oh and sorry for hijacking the comment area for a rant about the world and so on ;-).

2
lawl 1 hour ago 0 replies
If you're interested in that kind of stuff I highly recommend watching Karsten Nohl's "Reviving smart card analysis" talk from Chaos Communication Camp 2011 [0].

Basically he takes pictures of the circuits on the smart card and then reverses the logic from that. There's even software to assist with that [1].

As a pure software guy I was pretty baffled when I saw this the first time.

3
Two9A 10 minutes ago 0 replies
I seem to recall that imaging the bootloader ROM straight off the silicon was how the original Gameboy's bootloader was finally pulled out.

I just forget who did it, which is unfortunate.

4
nfoz 2 hours ago 3 replies
That page has web buttons that overlap my scrollbar... How is that even a thing? Please make it stop.
5
maaaats 2 hours ago 1 reply
Wow, this is a world I've never seen before.
6
guiomie 4 hours ago 2 replies
That is one classy lab.
7
mattp123 2 hours ago 1 reply
Wait, so are they actually trying to take a picture of the boot ROM used by the 3DS?
8
the_mitsuhiko 3 hours ago 5 replies
It's impressive how Nintendo can continue selling heavily underpowered processors to gamers and nobody complains.
19
RIP Captain Jerry Roberts Bletchley Park Codebreaker unbound.co.uk
40 points by ColinWright  3 hours ago   9 comments top 2
1
minimax 2 hours ago 3 replies
Note for visitors to London from abroad: It's super easy to get to Bletchley Park from London. I think it took me something like 90 minutes total to get from my hotel in central London to Bletchley Park on the train. The staff is well informed and very helpful. They have working replicas of the bombe and Colossus machines. It's all very cool. Highly recommended if you have half a day to kill in London.
2
cpcallen 2 hours ago 0 replies
By chance I happened to meet Captain Jerry Roberts while looking for a room some years ago: their garden (basement) flat in Pimlico had a spare room in what had originally been a cellar under the pavement (sidewalk) in front of the house, which they rented out.

I decided the nearly windowless room was a little dark and dank for my liking, but both he and his wife were very pleasant and it was certainly the most interesting of the many visits to different properties I made at the time.

20
Exercising for Healthier Eyes well.blogs.nytimes.com
7 points by boh  36 minutes ago   3 comments top
1
127 11 minutes ago 2 replies
Requires registration to read the article. No thanks.
21
Famo.us Demos famo.us
65 points by epaga  5 hours ago   46 comments top 23
1
superqd 1 minute ago 0 replies
Given the hype (from them), I was very underwhelmed with the demos. I then remembered that this stuff is supposed to run well on phones (not the WebGL stuff, obviously, at least for iPhones). So I ran it on my phone and several of the demos stuttered / jerked a lot if you tried to use it while loading. Though once they fully loaded, they ran really well (smoothly).
2
untog 4 hours ago 1 reply
Shrug. Yeah, some of this stuff looks great, but it's not doing anything that wasn't possible before. The really important part is how it's structured, what the code looks like, how you go about actually implementing any of these features.

And famo.us appears to be in some kind of silly closed beta, so I'm not sure what I can even evaluate here.

3
kmfrk 2 hours ago 3 replies
How viable are "clever" ccTLDs outside .com and .org? I've always wondered how usable those domains are to people outside our small tech bubble.

Especially back when the .ly domains were all the rage, but also now that .io have the same SEO as .com domains.

4
camus2 4 hours ago 1 reply
This whole thing has been handled quite strangely. Let's compare with angularjs or threejs.

These frameworks have been developed openly from the very beginning and grew based on users' needs.

Here we have a project that has received $2 million in funding, which is great, yet it's completely opaque, with a closed beta (I signed up for it, never got anything but junk mail, no code to test whatsoever).

IMHO it qualifies as vaporware: a tech demo that looks great but with little practical use.

5
ldn_tech_exec1 3 hours ago 2 replies
I was a huge fan of famo.us until just now. Testing on an iPhone 4S, most of these apps are super jittery (never a continuous 60fps) and unresponsive to touch in the browser, especially Yahoo Weather, which looks to be the only one complex enough to resemble a full app.

At best, this is a very impressive "mobile web app" framework, NOT a native replacement. I would never trade obj-c for Cordova/JS if this is the kind of inconsistent performance users will have to deal with... and btw I love JavaScript.

I think Steve is a phenomenal entrepreneur but may have built this launch up way too much. Without their native wrapper and MVC this feels like it's going to be a few years before it can rival native development.

6
Geee 1 hour ago 0 replies
Yup, famo.us is like a 'rendering engine' for the web, not a traditional JS app framework. I expect to see an architecture similar to game engines such as Unity.
7
puppetmaster3 2 hours ago 2 replies
2 years ago, it was cool.

Today #GSAP is the state of the art. If you want futuristic UX, spend a few minutes exploring this site's UX: http://intothearctic.gp (get past 'start exploring').

Today it is a solution looking for a problem, with no market.

8
general_failure 1 hour ago 0 replies
Famo.us is a sad company that doesn't know how to die. All their sessions and talk are just hype. I am guessing they built some fancy DSL at the end of the day.

It is simply not possible to revolutionize HTML without changing/updating the runtime. Sadly for us, the runtime WebView shipped as part of Android is super underperformant. The one shipped with iOS is stripped of features and JIT.

9
onion2k 5 hours ago 1 reply
I really want to like Famo.us. There are some really nice effects being demonstrated, the fallbacks to older browser technologies are useful, and it seems to make it really easy to develop new things. But... it's not really anything that a combination of d3.js, three.js, raphael.js, webaudiox, dancer.js, etc. can't do for free and without all the "secrecy" nonsense.
10
crucialfelix 2 hours ago 0 replies
Finally, something that makes the fan come on in my MacBook (2GHz i7, Intel Iris Pro 1024 MB). I was starting to think this thing didn't even have a fan.
11
NicoJuicy 3 hours ago 0 replies
To be honest, their demos seem confusing and don't feel "right" in a UX way.
12
nailer 1 hour ago 0 replies
2 seconds per frame on iPad.
13
Kiro 3 hours ago 1 reply
I thought famo.us was about using the DOM and CSS matrix3d but these examples are just using a canvas element like you would expect. What is famo.us exactly?
14
jannes 3 hours ago 0 replies
Lightbox looks pretty great, but the non-native scrolling is a no-go. Especially on mobile devices you expect the native inertia formula to be used and not some contrived approximation.
15
joshdance 1 hour ago 1 reply
People keep mentioning funding etc. Anyone have a link to a quick backstory for these guys?
16
Touche 3 hours ago 1 reply
http://famo.us has a nice YouTube clip of someone's personal site that looks quite impressive... why is that not one of the demos? Is the video a fake?
17
brianchu 3 hours ago 0 replies
I'd like to point out that famo.us recently announced a public beta release date of April 9th, with respect to the "secrecy" criticism.
18
xixixao 1 hour ago 0 replies
Buttons are nicked from Windows Phone yet this is totally broken in WP IE10.
19
zxexz 1 hour ago 0 replies
This is pretty, and a nice framework - but it needs a LOT of optimization before it could be considered anywhere near usable.
20
pgsandstrom 3 hours ago 0 replies
Is it just me, or is mouse wheel the ONLY way to scroll in those lists? Not even "page down" works.
21
izolate 4 hours ago 2 replies
Rise is the most beautiful thing I've seen.
22
adrnsly 3 hours ago 0 replies
23
LukaszB 3 hours ago 0 replies
"CodePen Evaluation License" in each app.js ?
1
jofer 2 hours ago 4 replies
What's interesting is that everyone's very aware of the seismic hazard along the San Andreas, but many people aren't aware of the much larger (but much less frequent) hazard in the Seattle/Portland area.

The San Andreas is a strike-slip system. It's not capable of generating very large (magnitude 8 or greater) earthquakes.

The Cascadia subduction zone has had magnitude 9 earthquakes in the past. The last one was on Jan. 26th, 1700. (Thank Japan for having excellent historical records of earthquakes and tsunamis. We know a large earthquake occurred around 300 years ago in the area, and thanks to Japan's record of the tsunami it caused, we can make a solid link to the exact date.)

We don't know the statistical hazard as well due to the small sample size, but when the next large Cascadia earthquake occurs the damage will be absolutely catastrophic.

It's not just the direct earthquake damage, but also the tsunami hazard. You need vertical offset to cause a tsunami. A strike-slip system like the San Andreas is very unlikely to cause a tsunami. The permanent offset is dominantly horizontal, so the only way to generate a tsunami is through secondary effects such as landslides. (Also, most of the length of the San Andreas is onshore.)

Subduction zones are thrust systems. One plate moves up, and the other moves down. Earthquakes there are likely to produce permanent vertical deformation at the surface.

Furthermore, certain types of subduction zones are more prone to generating large tsunamis. A deep earthquake is unlikely to cause much deformation at the seafloor, and therefore doesn't generate as large of a tsunami. However, the shallower the rupture penetrates, the larger the deformation at the seafloor is, and therefore the larger the tsunami is. Certain types of subduction zones are more prone to having large earthquakes that rupture all the way up to the seafloor. (The amount of sediment on top of the incoming oceanic plate is thought to play a large role in this, among other things. The recent Tōhoku earthquake in northern Japan turned a lot of what we thought we knew about this on its head, though.)

The Cascadia subduction zone is one of the end-member types that's likely to have both large earthquakes and large tsunamis. We know it has in the past, and it's likely to in the future. It's unusual in that most of the deformation along the fault occurs through periodic creep ("slow-slip events") that doesn't cause an earthquake. (Actually, as we're finding out, it's not that unusual around the world, but it was first observed and is best documented in Cascadia.) However, while this creep does relieve a significant portion of the accumulated elastic strain, it doesn't relieve all of it. The plate boundary fault is still accumulating elastic strain that will eventually be released in a large earthquake.

At any rate, just something to think about. The seismic hazard in the Bay Area can be reduced through proper engineering solutions. (Though SOMA is going to be in very rough shape for the reasons this article mentions. Lesson for next time: Don't bulldoze all the rubble into a pile and then build on top of it!)

For Cascadia, though, you can't engineer your way around a magnitude 9 earthquake and tsunami. You do the best you can, and try to avoid putting critical infrastructure near the coast.

2
steven2012 11 minutes ago 0 replies
Actual former SF resident here. I'll tell you exactly why I would live in SF, it's because it's one of the most beautiful cities in the world, and I also choose not to live my life in fear.

Yes, there's a threat of earthquakes. Yes, I'm sure there is going to be an earthquake at some point, possibly even large. I experienced a tiny earthquake in SF a few years ago and it scared the shit out of me, because my entire apartment was shaking.

But there are threats everywhere. Would I buy a house (if I can afford one, that is) in a liquefaction area? No. But there are plenty of areas of SF that don't have imminent liquefaction risk. And the chance of actually dying is quite low. There are plenty of things to worry about, but earthquake would be on the bottom quartile for me. And compared to the benefits of living in SF, it isn't even a big concern.

3
raldi 1 hour ago 0 replies
The problem is that earthquake insurance is so expensive and so high-deductible that a risk analysis only concludes it's worth getting if you live in an especially risky neighborhood, or have an especially at-risk house.

So only the riskiest buildings get it.

So the average customer of the insurer skews ever closer towards the extremely at-risk end of the spectrum.

So the insurance companies have to raise the rates and deductibles.

And then you go back to step one, and the vicious cycle gets even more extreme.

4
chrisfarms 3 hours ago 2 replies

    "Between now and 2038, theres a 99.7% chance of a    6.7-or-larger earthquake striking somewhere in     California"
I can't see that date without thinking a 32bit timestamp prevented them from calculating over a larger range. :)

5
_stephan 4 hours ago 3 replies
If you don't buy earthquake insurance, can you count on state or federal disaster relief programs to pay for reconstruction?

Or to rephrase the question: do disaster relief programs create a moral hazard that should be countered by making earthquake insurance in high-risk areas mandatory (with possible subsidies by the state)?

6
Don't look at it as an "earthquake", look at it as a buying opportunity! ;-D
7
jfb 3 hours ago 0 replies
"Whistling past the graveyard". There's good stuff in John McPhee's Assembling California [1] about the '89 Loma Prieta earthquake, and about earthquakes in general.
8
facepalm 4 hours ago 1 reply
I wonder, could it make sense to build houses in risky areas more like ships?
9
danieltillett 4 hours ago 1 reply
I always thought it was the classic human response to disaster planning - a large river in North Africa.
10
cordite 2 hours ago 1 reply
The same story applies to Utah Valley, we are 60 years overdue
23
Mt. Gox CEO Karpeles refuses travel to U.S. for questioning cryptocoinsnews.com
34 points by Tenoke  2 hours ago   28 comments top 10
1
jxf 2 hours ago 1 reply
As a US citizen, if I lived outside the US and was summoned back for "questioning" by adverse parties, there's almost no reason I can imagine going. If that's truly all they want, then there's no reason it needs to be done in person. Depositions can be conducted remotely in many (though not all) jurisdictions.
2
xenophonf 1 hour ago 0 replies
Seeing how the U.S. arrested foreign gambling website CEOs transiting through U.S. airports - CEOs and businesses that didn't violate U.S. law, in my opinion - I can't blame Karpeles.
3
badman_ting 2 hours ago 2 replies
I wouldn't either. Law enforcement can do what they want here.
4
ChuckMcM 21 minutes ago 0 replies
Not surprised that he doesn't want to step into the jurisdiction of federal marshals. And the excuse that it would be an 'unacceptable misuse of resources' from the opposing counsel is pretty thin. People teleconference all the time, and there are plenty of certified court reporters in either Japan or Taiwan who could facilitate a deposition.
5
CharlesMerriam2 1 hour ago 2 replies
Remember when the U.S. was the "shining beacon"? When someone coming in for questioning couldn't be held indefinitely as a "material witness"? Or wouldn't first have all assets seized and then be assigned a terrible attorney because of insolvency? When it would not be a stupid move to submit to questioning on U.S. soil?

Yeah, that's what I thought.

7
jrockway 2 hours ago 2 replies
So a bunch of Americans are suing a Japanese company in US courts? Let me know how that works out.
8
logfromblammo 36 minutes ago 0 replies

Given the U.S. propensity for arresting foreign nationals for allegedly breaking U.S. laws while completely outside of U.S. territory just as soon as they finally set foot on it, I would politely decline this request as well. The offer to pay 100% of all travel expenses up front is particularly suspicious.

Given what has already occurred, I'd probably also wear nitrile gloves and a dust mask when handling the "first class plane tickets" sent by my former customers.

9
shellox1 1 hour ago 0 replies
As a European I wouldn't go to this messed-up country either, especially since he isn't a citizen there.
10
notastartup 57 minutes ago 0 replies
Peculiar that he would choose Taiwan of all places, where there is no extradition treaty with the US.
24
Is Google overreaching by forcing me to use TLS? stackexchange.com
16 points by AndyBaker  2 hours ago   15 comments top 6
1
ds9 13 minutes ago 0 replies
Can anyone explain this line from commenter Darren Cook: "Once this enforcement is in place, browsers will simply refuse to connect to Google over an insecure or compromised connection. By shipping this setting in the browser itself, circumvention will become effectively impossible."

Some browsers are open source, and it seems to me that you can never definitively rely on their behavior. Surely the enforcement ultimately depends not on the browsers but on the server refusing non-TLS connection attempts?
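For what it's worth, the mechanism Darren Cook seems to be describing is HSTS (RFC 6797) plus the browser preload list: Chrome and Firefox ship with Google's domains baked in as HTTPS-only, so the upgrade to TLS happens client-side before any request is sent. Server-side refusal alone can't achieve this, because a man-in-the-middle can terminate the plaintext connection itself and never let it reach the server. A rough sketch of what a client does with the header (function and field names here are mine, not any browser's):

```python
# Minimal sketch of parsing a Strict-Transport-Security header, the
# mechanism behind "the browser itself refuses insecure connections".
# Names are illustrative, not taken from any real browser codebase.

def parse_hsts(header: str) -> dict:
    """Parse an HSTS header value into its directives."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

# Example header, as served by sites that opt into the preload list:
policy = parse_hsts("max-age=31536000; includeSubDomains; preload")
print(policy)  # {'max_age': 31536000, 'include_subdomains': True, 'preload': True}
```

The `preload` directive is the key piece: it signals consent to be hard-coded into browser releases, at which point no first visit over plain HTTP ever happens.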

2
finch_ 1 hour ago 1 reply
I agree with the overall point of the responses that Google isn't in fact evil to be doing this, but I want to disagree somewhat with one point - the idea that Google doesn't have any obligation to respect users' wishes just because it's a free service that no one is forcing you to use.

The problem with this is that Google's very existence makes it harder for similar services to exist. There are a few reasons for this, including:

1. Google benefits from economies of scale

2. Google benefits from having massive amounts of data to crunch through (for example, it's hard to build a spam filter as good as Gmail's without a training dataset as big as Gmail's)

It's kind of like the argument for the minimum wage - conservatives would say it's not needed because you can just choose not to work for a company that isn't offering enough money, but sometimes you don't really have an alternative.

3
sarreph 1 hour ago 2 replies
A comment on the answer perfectly encapsulates this post:

"Did you just troll security.SE and then reasonably answer your own question?" (Stephen Touset)

4
skywhopper 1 hour ago 1 reply
I suppose it's nice to have the rationale written out somewhere, but does anyone anywhere actually balk at being required to use HTTPS?
5
malux85 50 minutes ago 1 reply
Uh, did this guy answer his own post?
6
afhsfsfdsss88 1 hour ago 2 replies
Google and others[Telecoms] are in positions to collect rents on your PI from third parties and G.O.'s. When they[Google] recently learned that the NSA had tapped their unencrypted fiber lines between data centers, they were pissed.

Not because they give a fraction of a shit about you, but because the NSA was stealing their product.

Now they encrypt everything with [very strong] SSL to force everyone to ask/pay for their info.

16 points by McUsr  2 hours ago   5 comments top 4
1
gregpilling 1 hour ago 1 reply
I am surprised how often W. Edward Deming is left out of articles like this http://en.wikipedia.org/wiki/W._Edwards_Deming

"In Japan, from 1950 onward, he taught top business managers how to improve design (and thus service), product quality, testing, and sales (the last through global markets)[1] by various means, including the application of statistical methods. Deming made a significant contribution to Japan's later reputation for innovative, high-quality products, and for its economic power. He is regarded as having had more impact upon Japanese manufacturing and business than any other individual not of Japanese heritage."

2
polskibus 11 minutes ago 0 replies
Kaizen is a small part of the Lean philosophy that originated at Toyota. I don't understand why the article focuses on the kaizen technique only.
3
e12e 28 minutes ago 0 replies
Reminds me of the story where Toyota decides to help charities by donating help with management/process:

https://news.ycombinator.com/item?id=6139960

4
yakshemash 36 minutes ago 0 replies
It's not very often we get stories about Ethiopia on the front page of Hacker News...
1
tgb 2 hours ago 3 replies
Math doesn't check out for me: a plane flies at ~700 mph, or 0.2 miles per second. Satellite speed is on the order of 5 miles per second. Sensor separation is on the order of a few inches. It would take about 15 microseconds between the first and last sensor passing over the same spot. The plane can only move around 5 millimeters in that time, yet it looks like it's moved several hundred feet.

What am I missing?
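Not an answer, but a quick sanity check of the parent's numbers (all figures approximate, taken from the comment above):

```python
# Rough arithmetic check of the parent comment's estimate.
plane_speed = 700 / 3600        # ~700 mph in miles per second (~0.194 mi/s)
satellite_speed = 5.0           # satellite ground speed, miles per second
sensor_separation = 3 / 63360   # ~3 inches in miles (63,360 inches per mile)

# Time for the satellite to carry the last sensor over the spot the
# first sensor just imaged, and how far the plane moves in that time.
dt = sensor_separation / satellite_speed
plane_motion_mm = plane_speed * dt * 1.609e6   # miles -> millimetres

print(f"{dt * 1e6:.1f} microseconds; plane moves {plane_motion_mm:.1f} mm")
```

This comes out to roughly 9 microseconds and 3 mm - the same order of magnitude as the parent's ~15 µs and ~5 mm, so the puzzle stands.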

2
yaur 54 minutes ago 1 reply
Since there shouldn't be too much moving around the open ocean besides planes, couldn't you use this artifact to build a semi-decent search algorithm rather than going through all the data by hand?
27
Gunshot victims to be suspended between life and death newscientist.com
298 points by Torgo  15 hours ago   110 comments top 16
1
jbattle 12 hours ago 6 replies
> At this point they will have no blood in their body, no breathing, and no brain activity. They will be clinically dead.

Is this right? That seems to imply that brain activity can be restarted from a cold, "electrically" inactive mass of grey matter. I thought brain dead was dead and there was no coming back from that.

Is the article accurate? If so, does the ability to restart the mind from an inert brain tell us something important about how thought and consciousness works?

2
ekianjo 14 hours ago 6 replies
There are other ways to do suspension as well. Certain gases are known to have the very same effect (induce clinical death, slow down your body metabolism) and the body can be restarted when oxygen is pumped in forcefully again.

Alas, with the extremely slow pace of regulatory change, many people who could already be saved using these techniques are simply ending up dead.

3
hawkharris 12 hours ago 2 replies
Why are the researchers focusing only on knife-wound and gunshot victims? I understand that those injuries are particularly sudden and severe, but so are many of the injuries associated with automobile accidents, which occur more frequently.

Of course, they need to introduce this technology in a small, focused way, but it would seem more logical to use a patient's physical condition as the deciding factor rather than his or her exposure to two specific crimes.

4
logfromblammo 5 minutes ago 0 replies
Science fiction is becoming medical practice. In Lois Bujold's Miles Vorkosigan books, the main character is killed by a grenade to the chest. The emergency medical procedure was to dump the lower-ranking dead body already in the portable body freezer, exsanguinate the corpse by opening the carotid arteries, and pump the circulatory system full of "cryoprotectant fluid". The body is then frozen. Replacement parts are grown from the corpse's own tissues, which are surgically implanted when the body is thawed in a fully equipped, state-of-the-art medical facility.

In the context of the fiction, the procedure was imperfect, and is not without side effects. The frozen dead people often fail to revive. The main character, for instance, was left with a debilitating seizure disorder for the remainder of his life, something that was eventually treated by a neurological pacemaker implant.

Based on existing studies and technology, the fiction is a very plausible future technology. Between stem cells, volume printers, and extracellular matrix, autologous donor organ replacement seems possible. Hibernating amphibian studies tend to indicate that a blood replacement containing glycerine, perfluorodecalin, raffinose, glycogen, and drugs would help minimize human tissue damage from the freezing and thawing process. It would be an emulsion, and would probably superficially resemble the android blood from the Alien movies.

The only question, really, is whether the person that wakes up after surgery is the same person that "died" beforehand. Is it really saving someone's life, or is it just replacing them with a simulacrum that has their memories?

5
olalonde 13 hours ago 3 replies
If this works, could it give some credibility to cryonics?
6
barlescabbage 11 hours ago 0 replies
I emailed my best friend's dad, a retired ER doc and Harvard grad, and this was his response...

"We already do this with CPR survivors. It is not clear that it is helpful. It is logical, but as you know logic is the great deceiver."

7
mdonahoe 13 hours ago 0 replies

http://www.ccm.pitt.edu/research/projects/epr-cat-emergency-...

"[T]est the feasibility of rapidly inducing profound hypothermia ... with an aortic flush in trauma victims"

8
tdaltonc 14 hours ago 4 replies
If the goal is just to get the patient cold, why not use ice-cold blood? Why use saline?

You could even use a cardiopulmonary bypass to rapidly cool a patient's own blood.

9
DanielBMarkham 5 hours ago 0 replies
I remember reading about this research in pigs many years ago and over the years I kept wondering "what's going on with this?"

If they can make this work even in a statistical sense, reviving some people who would otherwise have died, it'll lead to even more research. My firm belief is that this is one of those things where the more we do, the more we'll be able to do. It wouldn't surprise me to see people being "dead" for 4-16 hours and then brought back to life, assuming a decade or two of research.

At that point, all kinds of weird things become possible, like head transplants, or people who have lost their body from the navel down being saved.

Very cool stuff.

10
BehindScenes 13 hours ago 0 replies
There we go; soon we'll see zombies like in The Walking Dead if something goes wrong.
11
bicknergseng 14 hours ago 2 replies
I haven't seen that many pop up ads since 2003.
12
b6fan 12 hours ago 0 replies
Does this mean people who live in Antarctica could live longer but think slower?
13
Unai 12 hours ago 0 replies
> "We are suspending life, but we don't like to call it suspended animation because it sounds like science fiction," says Samuel Tisherman, a surgeon at the hospital, who is leading the trial. "So we call it emergency preservation and resuscitation."

Because that doesn't sound like science fiction at all...

14
whitehat2k9 11 hours ago 0 replies
If the human body is anything like the first generation of ACPI this is not going to end well for the patients :P
15
j2kun 13 hours ago 1 reply
Replacing all of someone's blood with anything is extremely scary-sounding.

Also, now I can't help but imagine replacing all of someone's blood with things like jello and cream cheese.

16
downer76 14 hours ago 0 replies
even a tl;dr is long, but worth reading:

  The technique involves replacing all of a patient's blood with a cold
  saline solution.

  The technique was first demonstrated in pigs in 2002 by Hasan Alam at
  the University of Michigan Hospital in Ann Arbor, and his colleagues.
  Their blood was drained and replaced by either a cold potassium or
  saline solution, rapidly cooling the body to around 10 C. After the
  injuries were treated, the animals were gradually warmed up as the
  solution was replaced with blood.

  Surgeons are now on call at the UPMC Presbyterian Hospital in
  Pittsburgh, Pennsylvania, to perform the operation. Because the trial
  will happen during a medical emergency, neither the patient nor their
  family can give consent. A final meeting this week will ensure that a
  team of doctors is fully prepared to try it. Then all they have to do
  is wait for the right patient to arrive. When this happens, every
  member of Tisherman's team will be paged.

  The technique will be tested on 10 people, and the outcome compared
  with another 10 who met the criteria but who weren't treated this way
  because the team wasn't on hand. The technique will be refined then
  tested on another 10, says Tisherman, until there are enough results
  to analyse.

  "...we don't like to call it suspended animation because it sounds
  like science fiction..." says Samuel Tisherman, a surgeon at the
  hospital, who is leading the trial.

  "After we did those experiments, the definition of 'dead' changed.
  Every day at work I declare people dead. They have no signs of life,
  no heartbeat, no brain activity. I sign a piece of paper knowing in
  my heart that they are not actually dead. I could, right then and
  there, suspend them. But I have to put them in a body bag. It's
  frustrating to know there's a solution." says surgeon Peter Rhee at
  the University of Arizona in Tucson, who helped develop the technique.
The suspense is KILLING me!</pun>

28
Python Maze Generator janthor.com
41 points by TonyNib  5 hours ago   9 comments top 7
1
koblas 2 hours ago 1 reply
One of the things I continue to notice is that machine-generated mazes always have lots of dead-end paths off the mainline path, while when a human makes a maze by hand there tend to be fewer quick dead ends, and a branch will usually take you 4+ squares before you realize it's a dead end.

I would think there is an algorithm or metric that could score mazes to make them more "realistic". Any references?
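One metric that comes up in maze-generation write-ups is the dead-end count: different algorithms produce markedly different fractions of degree-1 cells, so scoring on that (plus average branch length) gets at the "realism" the parent describes. A small sketch, not from the linked article - the maze is a plain dict mapping each cell to the set of neighbours it connects to:

```python
import random

def generate_maze(w, h, seed=0):
    """Recursive-backtracker maze: a random spanning tree over a w x h grid."""
    random.seed(seed)
    links = {(x, y): set() for x in range(w) for y in range(h)}
    stack, visited = [(0, 0)], {(0, 0)}
    while stack:
        x, y = stack[-1]
        # Unvisited grid neighbours of the current cell.
        options = [(x + dx, y + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (x + dx, y + dy) in links and (x + dx, y + dy) not in visited]
        if not options:
            stack.pop()          # dead end during carving: backtrack
            continue
        nxt = random.choice(options)
        links[(x, y)].add(nxt)   # knock down the wall in both directions
        links[nxt].add((x, y))
        visited.add(nxt)
        stack.append(nxt)
    return links

def dead_end_ratio(links):
    """Fraction of cells with exactly one connection (degree-1 nodes)."""
    dead_ends = sum(1 for cell in links if len(links[cell]) == 1)
    return dead_ends / len(links)

maze = generate_maze(16, 16)
print(f"dead-end ratio: {dead_end_ratio(maze):.2f}")
```

The recursive backtracker used here typically scores around 10% dead ends, while simpler generators (e.g. binary tree) score much higher on the same grid - which matches the intuition that some algorithms feel more "hand-made" than others.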

2
dudus 2 hours ago 0 replies
This one has a nice 3d effect to my eyes, where the dark blue on the left appears to be farther than the rest of the maze.

Can someone explain?

http://www.janthor.com/maze/rainbowpath.32.32.1128433606.84....

Zoom in for a better effect

3
dec0dedab0de 1 hour ago 0 replies
4
TonyNib 5 hours ago 1 reply
There's probably better ones available now on GitHub, but this one caught my eye.

I was wondering, are there any examples of mazes (real or programs) where the structure, rules or walls change based on a certain algorithmic pattern?

So that you've not only got to find the right way out, but you must crack the code before a way out is even possible.

Kind of like the movie Cube: http://www.imdb.com/title/tt0123755/

5
yalue 3 hours ago 0 replies
I was expecting a simple explanation of some maze generation algorithms, but was very pleasantly surprised with the path-length based visualization stuff. What I mean is that there is more to this page than the title suggests!
6
acomjean 2 hours ago 0 replies
sometimes I think there is a very fine line between math visualizations and art.

I keep thinking it would be interesting to take an image (like a face) and let it show through the maze instead of colorizing it.

7
keithxm23 54 minutes ago 0 replies
There should be an epilepsy warning! @_@
29
ASCII Delimited Text Not CSV or TAB delimited text ronaldduncan.wordpress.com
629 points by fishy929  23 hours ago   263 comments top 51
1
Pxtl 22 hours ago 6 replies
I've done this.

Everybody hated it. Most text editors don't display anything useful for these characters (either hiding them altogether or showing a useless "unknown" placeholder), and spreadsheet tools don't support the record separator (although they all let you provide a custom field separator, so the "unit" separator can work). Besides, there's the obvious problem that there's no easy way to type the darned things when somebody hand-edits the file.

2
dxbydt 21 hours ago 8 replies
Don't do this. Tsv has won this race, closely followed by Csv. Anything else will cause untold grief for you and fellow data scientists and programmers. I say this as someone who routinely parses 20gb text files, mostly Tsv's and occasionally Csv's for a living. The solution you are proposing is definitely superior but isn't going to get adopted soon.
3
mikestew 23 hours ago 6 replies
Anyone that's ever had to parse arbitrary data knows of the approximately 14 jiggityzillion corner cases involved when sucking in or outputting CSV/TAB delimited formats. Yet much like virtual memory and virtual machines, we find that a solution has existed since the 60s. For those wondering about the history and use of all those strange characters in your ASCII table: http://www.lammertbies.nl/comm/info/ascii-characters.html
4
mjn 22 hours ago 2 replies
Alas, I don't think this works with the standard Unix tools, which is the main way I process tab-delimited text. Changing the field delimiter to whatever you want is fine, since nearly everything takes that as a parameter. But newline as record separator is assumed by nearly everything (both in the standard set of tools, and in the very useful Google additions found in http://code.google.com/p/crush-tools/). Google's defaults are ASCII (or UTF-8) 0xfe for the field separator, and '\n' for the record separator. I guess that's a bit safer than tabs, but the kind of data I put in TSV really shouldn't have embedded tabs in a field... and I check to make sure it doesn't, because they're likely to cause unexpected problems down the line. Generally I want all my fields to be either numeric data, or UTF-8 strings without formatting characters.

Not to mention that one of the advantages of using a text record format at all is that you can view it using standard text viewers.

5
sigil 21 hours ago 1 reply
Meh. What if some data has ASCII 28-31 in it? If you're not using a "real" escaping mechanism, and instead relying on the assumption that certain characters don't appear in your data, then I don't see anything wrong with using \t and \n (ie TSV). Either way, you know your data, and you're using whatever fits it best.

If you need something that's never, ever going to break for lack of escaping, might I suggest doing percent-encoding (aka url encoding) on tabs ("%09"), newlines ("%0a") and percent characters ("%25")? Percent encoding and decoding can be made very fast, is recognizable to most developers, and can be used to escape and unescape anything, including unicode characters. Unlike C-escaping, which doesn't generalize and accommodate these things nearly so well.
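A sketch of that scheme, with hypothetical helper names - only tab, newline, and percent itself are escaped, so a field can never contain a raw delimiter. The order of the replacements matters: '%' must be encoded first and decoded last, or the escapes become ambiguous.

```python
def encode_field(s: str) -> str:
    # Escape '%' first so the escape sequences themselves stay unambiguous.
    return s.replace("%", "%25").replace("\t", "%09").replace("\n", "%0a")

def decode_field(s: str) -> str:
    # Reverse order: '%25' is decoded last.
    return s.replace("%09", "\t").replace("%0a", "\n").replace("%25", "%")

row = ["plain", "has\ttab", "has\nnewline", "100%"]
line = "\t".join(encode_field(f) for f in row)   # one safe single-line TSV record
assert [decode_field(f) for f in line.split("\t")] == row
```

Unlike quoting-based CSV escaping, this keeps the reader trivial: split on the delimiter first, then decode each field independently.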

6
revelation 6 hours ago 2 replies
It is now 2014. The world doesn't use ASCII, you will still need escaping for binary or misformatted data, and overall the idea of mapping control characters and text into one space is dead and dusted. Don't do it, don't let other people do it, use a reasonable library that handles the bazillion edge cases safely if you need to parse or write CSV and its ilk.
7
DEinspanjer 57 minutes ago 0 replies
I never knew about these which is just a bit shaming considering how long I've been in the data munging field. :)

I agree with several other comments that the biggest issue is not being able to represent them in an editor. If you use some form of whitespace, then it is likely to lead to confusion with the whitespace characters you are borrowing (i.e. tab and line feed). If you use special glyphs, then you have to agree on which ones to use, and it still doesn't solve the problem of readability. Without whitespace such as tab and line feed, all the data would be a big unreadable (to humans) blob, and with whitespace, it would lend confusion about what the separator actually is. Someone might insert a tab or a linefeed, intending to make a new field or record, and it wouldn't work. If the editor automatically accepted a tab or linefeed and translated it to US and RS, then there would have to be an additional control to allow the user to actually insert the whitespace characters that this is supposed to enable. :/

8
JoshTriplett 22 hours ago 2 replies
Leaving aside the pain of displaying and typing such characters...

> Then you have a text file format that is trivial to write out and read in, with no restrictions on the text in fields or the need to try and escape characters.

Phrases like that lead to lovely security bugs.

9
My guess is that TSV/CSV won out simply because anyone can easily type those characters from any standard keyboard on any platform.
10
rgarcia 17 hours ago 2 replies
How about everyone just started following the CSV spec? https://tools.ietf.org/html/rfc4180

Doesn't allow for tab-delimited or any-character-delimited text and handles "Quotes, Commas, and Tab" characters in fields.

11
rwmj 22 hours ago 3 replies
This is factually wrong about CSV, which can store any character including commas and even \0 (zero byte), provided it's implemented correctly (a rather large proviso admittedly, but you should never try to parse CSV yourself). Here is a CSV parser which does get all the corner cases right:

https://forge.ocamlcore.org/scm/browser.php?group_id=113

12
peterwwillis 1 hour ago 0 replies
You know where this is useful? Databases.

No, please, put the gun down... let me explain. Sometimes you have a database that's so complex and HUGE that changing tables would be a nightmare, or you just don't have the time. You have a field that you want to shove some serialized data into in a compact way, without having to think about formatting. You could use JSON, you could use tabs or CSV, but all of those require a parser.

With these ascii delimiters you can serialize a set of records quickly and shove them into a string, and later extract them and parse them with virtually no logic other than looking for a single character. And because it's a control character, you can strip it out before you input the data, or replace control characters with \x{NNN} or similar, which is still less complex than tab/csv/json parsing.

Granted, the utility of this is extremely limited, probably mainly for embedded environments where you can't add libraries. But if you just need to serialize records with the simplest parsing imaginable, this seems like an adequate solution.
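A sketch of that approach (helper names are mine): records joined with RS (0x1e), fields with US (0x1f), and control characters stripped from the input first so they can never collide with the delimiters.

```python
# Pack/unpack a list of records into one flat string, suitable for
# stuffing into a single text column. Illustrative names, not a library.

RS, US = "\x1e", "\x1f"   # ASCII record separator, unit separator

def strip_controls(s: str) -> str:
    """Drop ASCII control characters (keeping tab/newline) so field data
    can never contain the delimiters."""
    return "".join(ch for ch in s if ch >= " " or ch in "\t\n")

def pack(records):
    return RS.join(US.join(strip_controls(f) for f in rec) for rec in records)

def unpack(blob):
    return [rec.split(US) for rec in blob.split(RS)] if blob else []

rows = [["alice", "admin"], ["bob", "user, with a comma"]]
blob = pack(rows)            # one flat string, commas and all
assert unpack(blob) == rows  # parsing is just two split() calls
```

The whole "parser" is two `split` calls, which is the appeal the parent describes; the trade-off is that stripping is lossy, so this only suits data where control characters genuinely never matter.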

13
mikeash 23 hours ago 2 replies
It doesn't solve the problem, although it does make it far less likely to run into it.

For a trivial example, try building an ASCII table using this format, with columns for numeric code, description, and actual character. You'll once again run into the whole escaping problem when you try to write out the row for character 31.

14
htp 22 hours ago 1 reply
Took some time to figure out how to type these on a Mac:

1. Go to System Preferences => Keyboard => Input Sources

2. Add Unicode Hex Input as an input source

3. Switch to Unicode Hex Input (assuming you still have the default keyboard shortcuts set up, press Command+Shift+Space)

4. Hold Option and type 001f to get the unit separator

5. Hold Option and type 001e to get the record separator

6. (Hold Option and type a character's code as a 4-digit hex number to get that character)

Sadly, this doesn't seem to work everywhere throughout the OS- I can get control characters to show up in TextMate, but not in Terminal.

15
tokenrove 22 hours ago 0 replies
I think people are missing the fact that you have a "control" key on your keyboard in order to type control characters. (Of course, control is now heavily overloaded with other uses.)
16
trebor 23 hours ago 1 reply
Now I feel silly for having glossed over the control characters since I was a kid. Those characters are decidedly useful on a machine level, though the benefit of CSV/TSV is that it's human friendly.
17
eli 22 hours ago 0 replies
Makes sense, but it's practically a (very simple) binary storage format at that point. You can't count on being able to edit a document with control characters in a text editor. And I wouldn't trust popular spreadsheet software with it either.
18
csixty4 21 hours ago 1 reply
Pick databases have used record marks, attribute marks, value marks, sub-value marks, and sometimes sub-sub-value marks in ASCII 251-255 since the late 1960s. Like the control characters this blog post recommends, the biggest obstacle for Pick developers working on modern terminals is how on Earth to enter or display these characters. There's also the question of how to work with them in environments that strip out non-printable characters.

This isn't some clever new discovery. It's begging us to repeat the same mistakes that led to the world adopting printable ASCII delimiters in the first place.

19
susi22 20 hours ago 1 reply
Related: This tool:

https://github.com/dbro/csvquote

will convert all the record/field separators (such as tabs/newlines for TSV) into non-printing characters and then in the end reverse it. Example:

    csvquote foobar.csv | cut -d ',' -f 5 | sort | uniq -c | csvquote -u
It's underrated IMO.

20
Roboprog 19 hours ago 1 reply
CSV is a solved problem - RFC 4180: http://tools.ietf.org/html/rfc4180#section-2

As used by Lotus 1-2-3 and undoubtedly others before there was an Excel.

Example record:

    42,"Hello, world","""Quotes,"" he said.","new
    line",x
Now go write a little state machine to parse it... (hint: track odd/even quotes, for starters)
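Taking the bait - here is a minimal state machine for a single-line record (no embedded newlines, which the full RFC 4180 grammar also allows; in real code you'd reach for an existing CSV library rather than this sketch):

```python
def parse_csv_record(line: str):
    """Parse one RFC 4180-style record: track whether we're inside quotes;
    a doubled quote inside a quoted field is a literal quote."""
    fields, buf, in_quotes, i = [], [], False, 0
    while i < len(line):
        ch = line[i]
        if in_quotes:
            if ch == '"':
                if i + 1 < len(line) and line[i + 1] == '"':
                    buf.append('"')   # "" inside quotes -> literal quote
                    i += 1
                else:
                    in_quotes = False # closing quote
            else:
                buf.append(ch)
        elif ch == '"':
            in_quotes = True          # opening quote
        elif ch == ',':
            fields.append("".join(buf))
            buf = []
        else:
            buf.append(ch)
        i += 1
    fields.append("".join(buf))
    return fields

rec = '42,"Hello, world","""Quotes,"" he said.",x'
print(parse_csv_record(rec))  # ['42', 'Hello, world', '"Quotes," he said.', 'x']
```

Even this stripped-down version needs lookahead for the doubled-quote case, which is exactly the "track odd/even quotes" hint above.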

21
Falling3 22 hours ago 1 reply
This reminds me of a depressing bug I run into frequently. I do a bit of work integrating with an inventory management program. Its main method of importing/exporting information is via CSV. The API also imports and exports via CSV, except whoever wrote the code that handles the imports decided not to use any sort of sensible library. Instead they use a built-in function that splits the string on commas with absolutely no way of escaping, so there is no way to include a comma in a field.

It's led to many a headache.

22
hrjet 8 hours ago 0 replies
I have created a (work-in-progress) Vim plugin [1], that uses Vim's conceal feature to visually map the relevant ASCII characters to printable characters.

It sort of works, but there are known issues which I have listed in the README.

23
slavik81 21 hours ago 0 replies
This seems a little better than it is. Those control characters are appealing because they're rarely used. Making them important by using them in a common data exchange format will dramatically increase the rate at which you find them in the data you're trying to store.

Ultimately, this is a language problem. If we invent new meta-language to describe data, we're going to use it when creating content. That means the meta-language will be used in regular language. Which means you're going to have to transform it when moving it into or out of that delimited file.

There is no fixed-length encoding you can use to handle meta-information without imposing restrictions on the content. You're always going to end up with escape sequences.

24
ChuckMcM 21 hours ago 1 reply
I've used these in ASCII files and they are quite useful. But as most folks point out, actually using control characters for "control" conflicts with a lot of legacy usage of "some other control." Which is kind of too bad. Maybe when the world adopts Unicode we'll solve this, oh wait...
25
Eleutheria 21 hours ago 2 replies
EDI is actually a wonderful and simple ASCII format for complex documents in use for over 30 years.

The underlying mapping formats for specific industries are a pain to parse, but everything is easily formatted using stars or pipes as field separators:

    ST|101
    NAM|john|doe
    ADR|123 sunset blv|sunrise city|CA
    DAT|20140326|birthday
Ah, the joy of simplicity.

26
christiangenco 21 hours ago 0 replies
Here's an implementation of ASCII Delimited Text in Ruby using the standard csv library: https://gist.github.com/christiangenco/73a7cfdb03e381bff2e9

The only trouble I ran into was that the library doesn't like getting rid of your quote character[1], and I don't see an easy way around it[2].

That said, I really don't like this format. The entire point of CSV is that you have a serialization of an object list that can be edited by hand. Sure using weird ASCII characters compresses it a bit because you're not putting quotes around everything, but if you're worried about compression you should be using another form of serialization - perhaps just gzip your csv or json.

In Ruby in particular, we have this wonderful module called Marshal[3] that serializes objects to and from bytes with the super handy:

    serialized = Marshal.dump(data)
    deserialized = Marshal.load(serialized)
    deserialized == data # returns true
I cannot think of a single reason to use ASCII Delimited Text over Marshal serialization or CSV.

1. ruby/1.9.1/csv.rb:2028:in `init_separators': :quote_char has to be a single character String (ArgumentError)

27
dsjoerg 21 hours ago 0 replies
This is a good illustration of how the hard part isn't "solving the problem" -- it's getting everyone to adopt and actually _use_ the standard.

Reminding everyone that an unused, unloved standard exists is just reminding everyone that the hard part went undone.

28
binarymax 22 hours ago 1 reply
Protip - if it doesn't appear on keyboards, you can use ALT+DDD (DDD being 000 to 255) to enter a control character. For those on Windows, drop into a command prompt and hold ALT while pressing 031 on the numpad. You will see it produce a ^_ character.
29
chinpokomon 11 hours ago 0 replies
Having read through all the comments, I think the only real benefit to using the control characters is in their original intent: a flat file that represents a file system, with file separators (FS), group separators (GS) acting like tables, record separators (RS), and unit separators (US) to identify the fields in a record, storing only printable ASCII values.

This isn't intended to be a data exchange format; it is a serial data storage format. In this way there may be some valid usages, but modern file systems do not need this sort of representation, and it has no real benefit over *SV formats for most use cases. I suppose it could still be used for limited exchange, but since it can't be used for storing binary, much less Unicode (except perhaps UTF-8), other formats are less ambiguous and more capable.

30
yitchelle 22 hours ago 1 reply
The big problem with the two markers mentioned in the post is that they are not part of the visible character set. Using a comma delimiter is good because it is visible; you can use a basic text viewer to see it.

A tab delimiter is not preferable as it is not visible, and can be problematic to parse via command line tools (ie what do I set as the delimiter character?).

I think that is the whole point of delimited text files: to have human-readable data in them.

31
gwu78 13 hours ago 1 reply
Why not use the non-printing char as the comma instead of the record separator?

1. Replace all the commas in the text with the unique non-printing char before converting to CSV.

2. Convert this char back to a comma when processing the CSV for output to be read by humans.

Because commas in text are usually followed by a space, the CSV may still even be readable when using the non-printing char.

I must admit I've never understood why others view CSV as so troublesome vis-a-vis other popular formats.

    in:  sed 's/,/%2c/g'
    out: sed 's/%2c/,/g'

I guess I need someone to give me a really hairy dataset for me to understand the depth of the problem with CSV.

Meanwhile, I love CSV for its simplicity.

32
yardshop 11 hours ago 0 replies
There actually are glyphs assigned to these characters, at least in the original IBM PC ASCII character set:

Ascii table for IBM PC charset (CP437) - Ascii-Codes

http://www.ascii-codes.com/

They correspond to these Unicode characters

    28  FS    221f  right angle
    29  GS    2194  left right arrow
    30  RS    25b2  black up pointing triangle
    31  US    25bc  black down pointing triangle
They may not be particularly intuitive symbols for this purpose though.

see also: IBM Globalization - Graphic character identifiers: http://www-01.ibm.com/software/globalization/gcgid/gcgid.htm... (then search for a code point, e.g. U00025bc)

Unicode code converter [ishida >> utilities]: http://rishida.net/tools/conversion/

http://en.wikipedia.org/wiki/Code_page_437

33
rcfox 15 hours ago 0 replies
I've recently had to deal with exclamation mark separated values. It sure is exciting, especially when there are empty fields:

    foo!bar!baz!
    a!b!c!
    d!!!
    e!f!!

34
Pitarou 14 hours ago 1 reply
While we're on the subject, we should probably be using control code 16 (Data Link Escape) instead of the backslash character to escape strings.

The problem is, of course, that we can't see it (no glyph) and we can't "touch" it (no key for it) so people won't use it. Ultimately, we're all still stick-wielding apes.

35
mmasashi 22 hours ago 1 reply
It does not solve the problem. Here are the points as I see them.

1. Control characters are not supported by most text editors.
2. Control characters are not human friendly.
3. The text may contain control characters in the field value.

Whatever the format, we cannot avoid escape characters, so I think the CSV/TSV format is still reasonable.
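Point 3 is easy to demonstrate: if a field happens to contain the unit separator, a naive split silently corrupts the record (a minimal sketch):

```python
US = "\x1f"  # unit separator
fields = ["ok", "bad\x1ffield"]  # second field contains a raw US byte
record = US.join(fields)
# The round trip fails without an escaping convention, which is
# exactly the same problem CSV has with embedded commas.
assert record.split(US) != fields
assert record.split(US) == ["ok", "bad", "field"]
```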

36
omarforgotpwd 22 hours ago 0 replies
"Alright let me get you some quick test data. Just need to find the 0x1E key on my keyboard... or 0x1F? Wait is this a new row or a new column? What was the vim plugin for this?"

And then someone wrote an open source CSV parsing library that handles edge cases well and everyone forgot these characters existed.

37
rcthompson 21 hours ago 1 reply
If you use unprintable characters in your file, it's no longer human-editable as text. It may as well be XML (i.e. technically text-based but not practically human-readable).
38
snorkel 21 hours ago 0 replies
Hah! Wonderful example of a forgotten feature.

It's not often that the tab-delimited format is problematic, and certainly nothing that a simple string-replace operation can't solve, so it's not worth trying to convince every existing text reader and text processor to recognize these long-forgotten record separators instead.

39
Balgair 22 hours ago 1 reply
Haha! Oh wow, I just finished a project dealing with this exactly. The obvious problem is that most editors make dealing with the non-standard keyboard keys very difficult. As a consequence, most programs (Python, MATLAB, etc.) really don't like anything below 0x20. I was reading in binary through a serial port, and then storing data for records and processing. Any special character got obliterated in the transfer to MATLAB, Python, etc. I ended up storing it as a very long hex string and then parsing that sucker. I'd have loved to use special characters to have it auto sort into rows and columns, but that meant having it also escape things and wreck programs. C'est la vie.
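That hex-string workaround can be sketched in a few lines; the names and the RS/US framing here are illustrative, not the commenter's actual code:

```python
RS, US = b"\x1e", b"\x1f"  # record and unit separators
payload = b"ch1" + US + b"42" + RS + b"ch2" + US + b"17"

# Ship the bytes as a plain hex string so nothing in the pipeline
# can mangle the control codes, then parse on the other side.
hex_string = payload.hex()
restored = bytes.fromhex(hex_string)
rows = [rec.split(US) for rec in restored.split(RS)]
assert rows == [[b"ch1", b"42"], [b"ch2", b"17"]]
```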
40
tracker1 16 hours ago 0 replies
I actually really appreciate this article, though I've known about it for decades now. In fact, I used to return javascript results in a post target frame back in the mid-late 90's and would return them in said delimited format... field/record/file separated, so that I could return a bunch of data. Worked pretty well with the ADO Recordset GetString method.

Of course, I was one of those odd ducks doing a lot of Classic ASP work with JScript at the time.

41
htns 19 hours ago 0 replies
CSV, if you do it as in RFC 4180 [1], already has everything the link describes, plus pretty good interoperability with most things out there. If you abused CSV you could even store binary data, while ASCII has no standard way to escape the delimiter characters.
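Python's csv module implements essentially the RFC 4180 rules, quoting fields that contain the delimiter, quotes, or newlines and doubling embedded quotes, so the awkward cases round-trip (a minimal sketch):

```python
import csv
import io

rows = [["a,b", 'say "hi"', "line1\nline2"]]
buf = io.StringIO()
csv.writer(buf).writerows(rows)  # quotes fields, doubles inner quotes
buf.seek(0)
assert list(csv.reader(buf)) == rows
```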
42
mncolinlee 20 hours ago 0 replies
I thought I was one of the few using pipe-delimited format in my tools. You can handle many of the incidental problems by using both pipes and quotes, like CSV.
43
zenbowman 21 hours ago 0 replies
Interestingly, Apache Hive uses control characters for column and collection delimiters by default. I commend them for that decision.
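Hive's documented defaults are ^A (0x01) between columns, ^B (0x02) between collection items, and ^C (0x03) between map keys and values, so a default-format row can be pulled apart with nothing but splits (a sketch with made-up data):

```python
# One row in Hive's default text format: columns separated by \x01,
# array elements by \x02, map key/value pairs by \x03.
line = "alice\x01dev\x02ops\x01city\x03NYC"
cols = line.split("\x01")
name = cols[0]
skills = cols[1].split("\x02")
attrs = dict([cols[2].split("\x03")])
assert name == "alice"
assert skills == ["dev", "ops"]
assert attrs == {"city": "NYC"}
```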
44
notimetorelax 20 hours ago 2 replies
Would it work if the file was encoded with UTF8 or UTF16?
45
sitkack 22 hours ago 0 replies
We need a font to display the hidden characters and a keyboard with 4 more keys. Problem solved.
46
Splendor 21 hours ago 0 replies
I tried to use it but one of the tools I rely on (BeyondCompare) resorts to hex comparisons when it detects these characters. In contrast, it treats CSV files better than anything, letting you declare individual fields as key values, etc.
47
neves 22 hours ago 0 replies
Wow. I can't count the number of times I've seen a bug due to a newline, a comma, or quotation marks inside a field.
48
co_dh 22 hours ago 5 replies
How do you enter them? In a console? In an editor? Since they are invisible, how do you tell if you have entered a wrong character?
49
hamburglar 20 hours ago 0 replies
I will be sure to use this if I ever encounter data that is guaranteed to be pure ASCII again.
50
kyllo 20 hours ago 0 replies
This is awesome, but sadly no one is going to use it until Microsoft Excel allows you to export spreadsheets in ASCII delimited format.
51
efalcao 20 hours ago 0 replies
mind. blown.
30
The Patterns Behind the Most Shared NYTimes Articles percolate.com
4 points by jasonshen  41 minutes ago   1 comment top
1
protonfish 9 minutes ago 0 replies
I see something totally different. The email sharers are older and conservative, the Twitter users are younger and liberal. Older people are finally comfortable with email but not new forms of information sharing. Also, conservative groups prefer to use email to spread hate propaganda because it is less public and hard to trace the source.
cached 27 March 2014 16:02:01 GMT