hacker news with inline top comments (5 Dec 2014)
BPG Image format
249 points by mjs  8 hours ago   144 comments top 24
pslam 3 minutes ago 0 replies      
The big story here is this introduces a new image file format without requiring you to upgrade a browser or download a plugin. Those aren't PNGs of representative images you're looking at - that's BPG decoding happening in your browser.

So we don't like HEVC due to patent worries? Fine, we can swap in another format and use the same technique.

We don't have an I-frame format for Daala that's finalized yet? Fine, we can use work-in-progress. The format does not need to be finalized. If you update it, you rebuild your images and bundle a new JavaScript decoder.

The ability to ship media along with a sandboxed, runnable decoder is awesome, and I'm surprised it hasn't caught on until now. I remember Google a while back implemented their own "video" codec in JavaScript using JPEGs and primitive inter-frame processing, exactly because there wasn't a universal video codec format they could use.

mark-r 2 hours ago 3 replies      
I was ready to pass by this post with a yawn until I saw where it was coming from: Fabrice Bellard. He's no doubt an absolute freakin' genius. And if anybody knows about image conversion, it's him.

Even the things he does just for fun are impressive. Have you ever booted up Linux inside your browser? http://bellard.org/jslinux/

anigbrowl 2 hours ago 7 replies      
EDIT: I didn't expect this comment to be so popular and feel like I've hijacked the thread a little - sorry. Feel free to continue at https://news.ycombinator.com/item?id=8706850

I would much rather someone revived the Fractal Image format, which is now out of patent. It's very expensive to encode, but that's nowhere near as big a problem as it used to be. It's very fast to decode, very small, and the files are resolution independent: http://en.wikipedia.org/wiki/Fractal_compression

I was blown away when I encountered it at PC Magazine in the 90s and it seems like it would be very responsive to the needs of today's web.

ChrisGranger 2 hours ago 0 replies      
Looking at the Lena pictures demo, the extremely low file size comparison at the top shows just how good .bpg is in that use case. That could make for some much lighter websites when used for less important items like image thumbnails on shopping sites, for example.

When the file size gets larger at the end, it looks like there might be a little loss of detail. Ideally I'd like to compare them by switching back and forth using the original image and the .bpg as Photoshop layers...

jason_slack 2 hours ago 1 reply      
There was an interesting article about Fabrice: http://blog.smartbear.com/careers/fabrice-bellard-portrait-o...

Does anyone know if he has a day job or does he just lock himself away and work on these interesting projects?

pdknsk 2 hours ago 1 reply      
I notice the container has no ICC profile support. Trivial to add as an extension tag, but it should definitely be in the first spec IMO. And if I read this correctly, extension tags are hardcoded as numbers, rather than using a tag name. I don't think that's a good idea.
userbinator 3 hours ago 1 reply      
"Its purpose is to replace the JPEG image format when quality or file size is an issue."

"Some of the HEVC algorithms may be protected by patents in some countries"

There have been some patent disputes over JPEG, but I don't think replacing it with another possibly patented format is a good idea, even if it's technically superior in some ways.

jason_slack 2 hours ago 0 replies      
No doubt Fabrice is very smart. I read about his 4G base station the other day. I'd love to be a fly on the wall while he codes and thinks out these projects.

His accomplishments are impressive: QEMU, FFMPEG, TCC, JSLinux, the list goes on

ChuckMcM 1 hour ago 0 replies      
Is there anything this guy can't do? Seriously.

I have been wishing there was a JPEG equivalent with an alpha channel for, like, forever. An alpha channel allows better compositing onto arbitrary background images or patterns. Now the question is how long before browsers might support it natively.

1ris 2 hours ago 1 reply      
Why not use Daala as a starting point? The overlapping transform probably helps, especially for still images, and the patent situation is probably at least better.
pronoiac 53 minutes ago 1 reply      
I think we crashed the site. Coral Cache has the front page, at least: http://bellard.org.nyud.net/bpg/
hyp0 52 minutes ago 0 replies      
I guess the JS is just for proof of concept until support is shipped with browsers etc, but it isn't rendering properly in the stock Android browser (4.2.2).

It gets to Lena's head in the first image, then becomes brightly multi-coloured, though it looks like it gets the differences between colours right... as if maybe there's an int overflow in the browser's JS implementation?

msoad 1 hour ago 1 reply      
How does it compare to WebP? Also, how do you measure lossy image quality? What are the metrics? I hope it's not just by looking at the result and judging.
vmarsy 1 hour ago 1 reply      
Impressive! Right now it requires a .js decoder of 75kb.

Assuming a fast C++ decoder instead (possibly GPU accelerated, if the decoding algorithm is well suited for it), what would the rendering times be?

PNGs are 4x bigger in his experiments, but interlaced PNG is more pleasant for users since it can be rendered progressively. Can BPG benefit from such a thing?

edit: Interlacing is also used in JPEG, isn't it?

It's a tradeoff: as a user, it's obviously a win when we have low bandwidth, and as a server it's obviously a win.

spb 4 hours ago 3 replies      
Don't we already have WebP?
_nickwhite 3 hours ago 1 reply      
I really dig it, but I'm not yet familiar with BPG. It seems that decoding it would take more processing power, and potentially be slower than JPG. Is this the case? Under the "performance" section, decoding speed was not mentioned.
aidenn0 2 hours ago 3 replies      
How many "better JPEGs" have been created now, without significantly displacing JPEG's market share?
sp332 3 hours ago 3 replies      
Wow, I didn't even know there were open-source HEVC encoders. I thought there were patent issues. Now I'm off to re-encode my bluray collection!
than 3 hours ago 1 reply      
Pronounced 'bee-peg'?
loudmax 2 hours ago 10 replies      
I appreciate the historical tradition of using the photo of beautiful young Lena Söderberg as a test image, but it's time to move on. It's fun for us hetero males, but like it or not, this sends a message to young women that they aren't welcome in this field. I wish Fabrice Bellard had left it out of the demo set.

Having said that, all those demo photos do look good. I was wondering how we were going to see a demo in the browser without built-in support, but leave it to the man who put Linux in the browser to write a decoder in javascript. This is an encouraging project.

joelthelion 2 hours ago 0 replies      
Fabrice Bellard strikes again!
faragon 1 hour ago 0 replies      
Bellard is a genius.
frontsideair 1 hour ago 3 replies      
Very off topic, but is lena.jpg still acceptable? I mean, the tech industry is getting better at inclusion, but come on, are we still using that crop from Playboy?
Animats 2 hours ago 2 replies      
The results only show marginal improvement.

What's needed is a compression method that doesn't introduce artifacts on hard edges, as JPEG does, but is otherwise no worse at compression than JPEG. Then we wouldn't need to do some things in JPEG and others in PNG, and we'd be spared the pain of JPEG screenshots. Much better results on the Tecnick image set (which is mostly hard edges) would indicate one had been found. The results only indicate modest improvement in that area.

NASA's Orion Spacecraft Splashes Down in Pacific After Test Flight
140 points by dnetesn  4 hours ago   85 comments top 6
lisper 3 hours ago 4 replies      
The [non-]money quote:

"The next test flight is not expected until 2018 because of limited NASA budgets, and Orion will not carry astronauts until 2021 at the earliest."

So while I don't want to downplay the technical achievement, it is politics, not technology, that remains the limiting factor at NASA. :-(

JeffL 3 hours ago 5 replies      
How does this compare to what SpaceX is doing? Does it make sense for NASA to be doing this in parallel with Dragon?
chriskanan 58 minutes ago 0 replies      
A related article makes a point about the very low morale among astronauts for going to an asteroid using Orion and SLS, which is the major first use for Orion: http://gizmodo.com/orions-a-triumph-lets-not-waste-it-166656...

I suspect public opinion about such a quest is just as low for this milestone.

Shivetya 4 hours ago 6 replies      
I am both amazed and disappointed at the cost to do one launch of this machine. How much of that cost is going to occur each launch?
wglb 4 hours ago 1 reply      
Was very cool to watch the descent and splashdown.
stumpf 3 hours ago 0 replies      
11 parachutes split into 3 separate stages, very cool to watch descend.
Run to Stay Young
34 points by papad  7 hours ago   11 comments top 2
hf 34 minutes ago 1 reply      
As the article referenced here is rather light, here is the actual paper of the mentioned study (open access): http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna..., and a slightly more informative report by the University of Colorado Boulder: http://www.colorado.edu/news/releases/2014/11/20/running-rea....

Conclusion from the paper:

  Running mitigates the age-related deterioration of walking economy, whereas walking for exercise appears to have minimal effect on the age-related deterioration in walking economy.

increment_i 13 minutes ago 1 reply      
After 30 years of running, I can attest to the irreversible damage to knees, shins, and other joints and ligaments caused by running on paved and other man-made surfaces. A good alternative as you get up in years is swimming and/or cycling or recumbent biking.
Show HN: Collapse HN Comments
62 points by Igglyboo  3 hours ago   27 comments top 19
542458 2 hours ago 0 replies      
Cool! For non-Chrome people, I've been using this bookmarklet since forever


sillysaurus3 2 hours ago 2 replies      
I've been using this: http://hckrnews.com/about.html#extensions

In addition to collapsing threads, it also tastefully identifies new comments: http://i.imgur.com/qs6FDRU.png

It doesn't mess with the layout beyond that, which I like.

By the way, http://hckrnews.com/ itself is wonderful. It's HN, but in chronological order. That seems like a bad thing, but in fact there are only a few dozen submissions per day, and it's easy to pick out the interesting ones.

throwaway_yy2Di 2 hours ago 1 reply      
I love how every comment here links to a different implementation.

I also made one (for Firefox/Greasemonkey),


Fuzzwah 2 hours ago 1 reply      
I've been using the Hacker News Enhancement Suite extension for Chrome for so long I couldn't imagine using the site without it.


edit: I'll also mention that I use http://hckrnews.com/ as my frontpage for HN.

reledi 19 minutes ago 0 replies      
I've been using "Reddit-Style Comments for Hacker News" [1] for years, works great.

1: https://github.com/andrewheins/HN-Comment-Hider

sanqui 19 minutes ago 0 replies      
All I'd like is to be able to click on the left margin of a post to skip under it. Sometimes it's just that the first comment has too many replies and I'd like to jump past it easily.
evo_9 1 hour ago 1 reply      
I still don't get why HN is against adding features like this, or opening links in new windows/tabs, AS AN OPTION.

I get that some people don't want anything to change on HN - no problem - but what is the harm in adding features that we individually can turn on that would not affect anyone else? I.e., add the features but have them off by default.

kissickas 2 hours ago 0 replies      
On Firefox I've been using HN Utility Suite.


thebrokencube 56 minutes ago 0 replies      
Despite there being a whole bunch of implementations out there, I had also written one a while ago: https://github.com/thebrokencube/yahnes . It does need some updating, but seems to work well enough for my use (plus, I just wanted to scratch my own itch).

Probably best to sort of consolidate some of these into a few really good ones, but we'll see.

hardwaresofton 54 minutes ago 0 replies      
Quick gif with LICEcap (http://www.cockos.com/licecap/) would do wonders to show how awesome this is...

would be a nice-to-have on the README

Igglyboo 3 hours ago 1 reply      
Little chrome extension I made for personal use, feel free to use it if you want.

It will allow you to collapse comment trees on HN.

crescentfresh 1 hour ago 0 replies      
Dropped the .crx onto the Extensions page.


Thanks Chrome!

fivedogit 1 hour ago 0 replies      
http://hackbook.club, a Chrome extension.

Follow other HN users, notifications when replied to, notifications when your karma changes, notifications when followed, comment viewing while on article/video/page, and chat.

SuperKlaus 58 minutes ago 0 replies      
nodata 1 hour ago 1 reply      
I use Hacker News Threadify: https://github.com/scouttyg/hacker-news-threadify available as a bookmarklet or Greasemonkey script.
Ethan_Mick 2 hours ago 1 reply      
I've been using Hacker News Collapsible Comments [0], since I enjoy the default look of HN, but wanted to collapse the comments.

[0] https://chrome.google.com/webstore/detail/hacker-news-collap...

joshstrange 1 hour ago 0 replies      
I use, and have been very happy with, the Chrome extension HackerNew https://chrome.google.com/webstore/detail/hackernew/lgoghlnd...
thesilverbanger 2 hours ago 0 replies      
Can anyone recommend an extension for firefox?
Why I like XSLT
44 points by padraic7a  6 hours ago   20 comments top 11
elmarschraml 32 minutes ago 1 reply      
XSLT is one of those cases where the idea and the intention is great, but the actual implementation is horrible.

I would love to see someone propose a modern alternative to XSLT, that works for the same use cases, but avoids XSLT's mistakes.

bri3d 47 minutes ago 1 reply      
I interned at a company with an XML+XSLT based MVC framework once. I actually thought it worked really well and generally agree with the article. A lot of more "modern" template and transform solutions are aiming for the same goals: sandboxed, side-effect free, limited logic, and so on.

I especially like the power of combining XSLT with Batik and FOP. You can take backend data and present it as an interactive table, a visualization, or a PDF report really easily.

I will say that the syntax is awfully obtuse, though. I find that support for various powerful-but-obtuse 1990s/2000s solution stacks splits along an IDE-user line: those who use IDEs are generally comfortable working with XML and XSLT in Java while text-editor devotees tend to hate that class of solution.

baldeagle 37 minutes ago 0 replies      
I bought into XSLT 2... which I think was only ever implemented in the Saxon parser, written by the guy who wrote large chunks of the spec. My chain was simple: MS Access DB -> XML (via hand-written VBA) -> parser -> 4 documents with information that was consistent. I then handed it over to a programmer to replace the Access DB / XML / parser with the server versions, and he never did get it quite right. For my next project I chose JSON, on the idea that a beautiful solution doesn't mean much if it can't be productionized. Still sad about XSLT though; lots of potential for making good-looking PDFs and webpages that were consistent.
serve_yay 1 hour ago 1 reply      
Wow, I never thought I'd see someone say that. I found it to be an endless nightmare when I had to use it. (We thought it would be really cool to store links and other stuff in XML and then transform that into our ASP.NET pages, so we could change the XML file without changing the website. It got complicated over time.)

The point about the syntax is a good one, but I don't like the idea that people should just equivalently like any syntax that expresses the same basic ideas.

I can't say why exactly but something just feels fundamentally wrong with writing code in XML. I felt the same way when I had to write MSBuild scripts, which are XML. I guess it could be trying to do imperative things in a declarative language, but that doesn't seem to be a problem for Lispers.

monksy 47 minutes ago 1 reply      
I hate to promote my own site (http://theexceptioncatcher.com) but I found that it made it a lot easier to separate the design of the site from the content. Most of my websites rarely ever change, so having a dynamic site is overkill. XSLT/XML helps to keep it as a static site, and I have the opportunity to change/fix the design at will.

Also, I've managed to setup a system to generate my resume via an XML/XSLT transform. I haven't published how I do that (or the public version [which is out of date]).

It's a tool and it can be very useful if you understand what you're doing. Additionally, there is Unix support for XSLT processing.

catshirt 1 hour ago 0 replies      
used a ton of xslt. it's great for what it's good for (transforming xml). of course, if i had to do it all over again i wouldn't be using xml in the first place.
Animats 41 minutes ago 1 reply      
I've used XSLT for APIs that return XML. This allows looking at the output of the API in a human-readable format. This is convenient as a debug tool, and anything that returns an XML page should have a link to the XSLT for it.
bokchoi 53 minutes ago 0 replies      
The improved syntax is interesting. Are there any better front-ends to XSLT? A quick google turned up SLAX, but I wonder if there are others.


joseacta 1 hour ago 1 reply      
What I like most about them is the separation. You're actually manipulating the data (XML) and displaying it to the end user in HTML without going through the formalities of coding.

Been using them since 1999 too. Still use them today, although more work is being done with JSON/JS.

malandrew 6 hours ago 2 replies      
repost: https://news.ycombinator.com/item?id=8695638 but there was no prior discussion there.
jlebrech 54 minutes ago 0 replies      
Components could be achieved using XSLT: you have a high-level app spec, and you transform all the components.
British Court Says Government's Electronic Surveillance Is Legal
53 points by dnetesn  4 hours ago   6 comments top 5
KaiserPro 1 hour ago 0 replies      
Just to clear things up, it's not technically a court, it's a quasi-judicial body that is in charge of "policing" RIPA (kinda like the Patriot Act).

They have limited scope, and act in a similar way to FISA.

rosser 5 minutes ago 0 replies      
There are some very interesting explorations to be had in the places where the sets "things that are legal" and "things that are moral" don't intersect.
pvnick 38 minutes ago 0 replies      
The thing about pervasive government surveillance programs is that we just can't trust that the judges who decide their legality haven't been compromised. That's what's so devastating about these programs - there is a distinct possibility that their existence self-perpetuates by targeting those who would keep them in check. The same should be considered if the US Supreme Court decides their constitutionality [1].

[1] http://www.huffingtonpost.com/2014/04/17/supreme-court-nsa_n...

mingmecca 1 hour ago 1 reply      
Throughout our history many dubious actions have been "legal" but not necessarily right or just.

Stateside, hopefully the Supreme Court will side with privacy rights once those cases wind their way through the courts.

higherpurpose 1 hour ago 0 replies      
Is this the case that happened in closed court? Either way, it seems Amnesty wants to take it to the ECHR, and if the case is accepted, I doubt it will stick - not that the UK seems to care much about ECJ/ECHR rulings. The ECJ has also recently declared data retention by carriers illegal, yet the UK keeps doing it, while actually trying to make those laws stronger.
Fractal compression for generating resolution-independent images (1992)
30 points by kibwen  2 hours ago   8 comments top 5
anigbrowl 56 minutes ago 1 reply      
One of Barnsley's notes with math, a concise algorithm, references, and a rather interesting set of figures: http://www.ams.org/notices/199606/barnsley.pdf via a comment on http://pointersgonewild.com/2014/01/02/tile-based-image-comp... from HN user Tachyonbeam who has experimented with the technology.

Remember that photo-enhancing scene in Blade Runner? Are you feeling motivated yet? For me the decompression speed and small file size were nice things when I first encountered the format back in ~1992, but JPEGs were already good in that area and it was obvious that the price of storage space and bandwidth would fall (though I continue to be surprised by the progress of technology). So to critics who say it's not competitive on size or speed - quite right, no argument from me.

However, nothing I have seen in the intervening 22 years comes close to FI compression for image enhancement. It's hard to find examples online but trust me, it was this good on everything we threw at it.

Also from the original thread: http://www.verrando.com/pulcini/gp-ifs1.html < windows en/decoder (IFS-BMP) and some other resources, straight outta 1997.

kibwen 2 hours ago 0 replies      
Saw this mentioned in the BPG thread (https://news.ycombinator.com/item?id=8704629) and it looked too cool not to investigate. I'd love to see some images of the type that you'd see on modern websites (say, Medium's giant page-spanning banners) compressed with this algorithm and scaled up by steps to see what artifacts result as the resolution increases.
gretful 31 minutes ago 0 replies      
I saw this back in '90, showing an aquarium, zooming into a goldfish, zooming into its eyeball, and then zooming into the Earth. I always wondered what happened to the tech. The company was IFS, IIRC.
dekhn 40 minutes ago 2 replies      
Wasn't this the work that was patented, prevented everybody from using it, and never went anywhere?
steven777400 1 hour ago 0 replies      
I remember dreaming up this idea years ago (but significantly after 1992), probably around 2000 sometime. We never tried to implement it, but we tossed around the idea of encoding images as chunks of fractal portions. There were some pretty serious deficiencies in our idea which these authors must have addressed. Neat to see that it can actually work.
A Formula for the Number of Days in Each Month
34 points by tolmasky  5 hours ago   17 comments top 10
danbruc 0 minutes ago 0 replies      
29 + 2 * ABS(SIGN(month - 2)) - FLOOR(ABS(month - 7.5)) % 2
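A quick sanity check of danbruc's expression, translated into Python (`abs`, a hand-rolled sign, and `math.floor` standing in for ABS/SIGN/FLOOR; non-leap February assumed):

```python
import calendar
import math

def days_in_month(month):
    # danbruc's closed form: 29 + 2*ABS(SIGN(month-2)) - FLOOR(ABS(month-7.5)) % 2
    sign = (month > 2) - (month < 2)
    return 29 + 2 * abs(sign) - math.floor(abs(month - 7.5)) % 2

# Compare against the real calendar for a non-leap year
expected = [calendar.monthrange(2015, m)[1] for m in range(1, 13)]
computed = [days_in_month(m) for m in range(1, 13)]
print(computed)  # [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
assert computed == expected
```

The `% 2` binds to the FLOOR term only, which is what makes the 31/30 alternation flip at July/August.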
pzxc 17 minutes ago 0 replies      
Thirty days hath September

April, June, and November

All the rest have 31

Except for the very special one.


The knuckles method, for those who don't know: make a fist. Start with January on your first knuckle. Count knuckles AND the gaps between knuckles; each knuckle is 31 days, each gap between is 30 days (or 28/29 for February). Repeat the last knuckle before reversing.




JAN_feb_MAR_apr_MAY_jun_JUL (repeat last knuckle and reverse)

AUG (knuckle repeat)_sep_OCT_nov_DEC (ending on your middle knuckle)


svisser 5 minutes ago 0 replies      
The easiest function would be a mapping from 12 input values to 12 output values.

For example, see the Collatz conjecture which only defines two cases (but one could have 12): http://en.wikipedia.org/wiki/Collatz_conjecture

USNetizen 38 minutes ago 0 replies      
I believe Zeller's Congruence had already solved this, no?

Zeller's is based upon calculating the day of the week, but it also calculates days in the month and accounts for leap years. Plus, it's been around since the 19th century and is used in many major programming languages to calculate dates (its implementation is sometimes a first-year Comp Sci course project for introductory programming): http://en.wikipedia.org/wiki/Zeller%27s_congruence

For a programming example that does this same thing: https://github.com/mwhawkins/JavaZellers

How did this make the front page of HN being a problem which has already been solved and well documented for almost 200 years? Not to knock the author, as they put in some time apparently, but was just curious.
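For the curious, a minimal sketch of the Gregorian form of Zeller's congruence in Python. Note that, strictly, it yields the day of the week (0 = Saturday) rather than the month length directly:

```python
def zeller(year, month, day):
    # Zeller's congruence, Gregorian calendar.
    # Returns 0=Saturday, 1=Sunday, ..., 6=Friday.
    if month < 3:        # January and February are treated as
        month += 12      # months 13 and 14 of the previous year
        year -= 1
    K = year % 100       # year within the century
    J = year // 100      # zero-based century
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

names = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
# 5 Dec 2014 (the date of this page) was a Friday
print(names[zeller(2014, 12, 5)])  # Friday
```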

ChuckMcM 23 minutes ago 1 reply      
Great analysis. The next step would be to see if the code to implement the function can be shorter than 12 bytes (an array of bytes holding the number of days in each month, such that f(x) = days_of_month[x], is longer than the computation). Of course, from a pure cycle-counting standpoint, doing it as a calculation will always be faster, as memory is slow to access.
kazagistar 7 minutes ago 0 replies      
Compiles down to something like a hundred bytes of code. What a savings!
doragcoder 35 minutes ago 0 replies      
There was a Hacker News article about why August has 31 days, breaking the 30/31 alternating days. I can't find it but here's an article about it.


It just makes it a more interesting formula because of that!

tlarkworthy 14 minutes ago 0 replies      
my digital attempt:-

function days(m) { return 30 + ((m + (m > 7)) % 2) - 2 * (m == 2) }

recursive 19 minutes ago 2 replies      
Here's mine. No floating point logic required (assumes C-style integer division).

let days = (i ^ (i >> 3) | 30) - 1 / (i * 9 % 17) * 2

joezydeco 56 minutes ago 1 reply      
Great start. Now let's add leap years. =)
A quick tutorial on implementing C memory management functions
93 points by redraga  5 hours ago   10 comments top 4
nkurz 3 hours ago 1 reply      
Great tutorial, Dan! Since it sounds like you are planning to continue the series, I have a few thoughts on potential directions to go with it.

First, a link to Doug Lea's classic malloc page might be a good addition to the resources section. His dlmalloc() is the basis for glibc's current ptmalloc. His code is wonderfully clear and commented, both for the implementation and the rationale behind it: http://g.oswego.edu/dl/html/malloc.html

Second, I wonder if it would make sense to jump straight to using mmap() instead of the classic brk()/sbrk(). I think it's no more complicated, has more uses elsewhere, is conceptually more portable, and allows multiple arenas to be added in a straightforward way. Are there advantages I'm not seeing to sticking to the ancient ways?

Last, on the debugging side, I think you might want to start with an introduction to Valgrind rather than gdb. It's a much easier learning curve, and even for an expert it's often the better tool for the memory allocation type bugs that are going to be most common here. Alternatively (or additionally) some examples of the more modern Address Sanitizer that's now in GCC and CLang would be slick: https://code.google.com/p/address-sanitizer/wiki/AddressSani...
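The mmap() suggestion is easy to sketch. Here is a toy bump allocator over an anonymous mapping, using Python's mmap module for brevity (a C version would call mmap with MAP_PRIVATE|MAP_ANONYMOUS the same way); the arena size and alignment are arbitrary illustration choices, and there is deliberately no free list:

```python
import mmap

class BumpArena:
    """Toy bump allocator over an anonymous mmap'd region.
    Illustrates getting memory from the kernel without sbrk()."""
    def __init__(self, size=1 << 20):
        self.mem = mmap.mmap(-1, size)   # anonymous, read/write mapping
        self.size = size
        self.top = 0                     # next free offset

    def alloc(self, n, align=16):
        start = (self.top + align - 1) & ~(align - 1)   # round up to alignment
        if start + n > self.size:
            raise MemoryError("arena exhausted")
        self.top = start + n
        return memoryview(self.mem)[start:start + n]

arena = BumpArena()
buf = arena.alloc(11)
buf[:] = b"hello mmap!"
print(bytes(buf))  # b'hello mmap!'
```

Multiple arenas then fall out naturally: each BumpArena is its own independent mapping, which is awkward to replicate with a single brk() heap.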

emcrazyone 1 hour ago 0 replies      
The HN title doesn't match the article title "A Quick Tutorial on Implementing and Debugging Malloc, Free, Calloc, and Realloc" but from the article it is obviously C.

The article mentions sbrk only in passing. Nothing describes what it is. I would spend time on it.

I would also address virtual memory and at least let the reader know that the underlying operating system is ultimately responsible for allocating memory. When a modern OS uses virtual memory, you get a virtual memory pointer which would be different than the physical address.

Linux, in particular, uses a process referred to as Optimistic Memory Allocation to honor malloc requests.

I mention all this because I think anyone interested in these lower level details would also be interested in how the OS and hardware are involved.

Animats 2 hours ago 0 replies      
There is a much better discussion of this in vol. I of Knuth.

(Amusingly, in the original edition Knuth uses a simple algorithm (linking the allocated spaces) in the text and leaves the better algorithm (linking the free spaces) to an exercise. In Unix V6 (yes, this really dates me), the "malloc" in the C library used the simple algorithm from Knuth, variable names and all, causing O(N^2) performance problems.)

yason 2 hours ago 3 replies      
I have "a thing" for basic system services like malloc, linker, etc.

Writing a memory allocator is a nice exercise but it's also fundamental research: by toying around, because of sheer interest, with what everyone else takes for granted you might come up with something that changes things for all, big time.

For example, memory allocators were considered a "well enough solved" problem for a long time until suddenly we had a rush of new, experimental, and/or optimized allocators like jemalloc, tcmalloc etc. While they might not be revolutionary they still blast the old 80's/90's implementations like no tomorrow.

Introducing Varnish Massive Storage Engine
55 points by Jgrubb  7 hours ago   8 comments top 6
damagednoob 1 hour ago 1 reply      
"Varnish allocate[s] some virtual memory, it tells the operating system to back this memory with space from a disk file. When it needs to send the object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If/when the kernel decides it needs to use RAM for something else, the page will get written to the backing file and the RAM page reused elsewhere.

When Varnish next time refers to the virtual memory, the operating system will find a RAM page, possibly freeing one, and read the contents in from the backing file.

And that's it. "


I'll try and hold back the snark but I find it interesting that after attacking '1975 programming' and Squid's deficiencies, here we are 8 years later and maybe the kernel doesn't know best.
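The quoted technique (map a backing file and let the kernel handle paging) can be sketched in a few lines of Python; the temporary file and sizes here are arbitrary illustration choices:

```python
import mmap
import os
import tempfile

# Create a small backing file; its name is irrelevant to the technique.
tmp = tempfile.NamedTemporaryFile(suffix=".store", delete=False)
tmp.write(b"\0" * mmap.PAGESIZE)
tmp.close()

# Map the file into memory. The kernel now owns paging decisions:
# dirty pages are written back under memory pressure and faulted
# back in on the next access.
with open(tmp.name, "r+b") as f:
    mem = mmap.mmap(f.fileno(), mmap.PAGESIZE)
    mem[:6] = b"cached"   # a plain memory write; no explicit write() call
    mem.flush()           # msync(); otherwise the kernel flushes lazily
    mem.close()

with open(tmp.name, "rb") as f:
    data = f.read(6)
os.unlink(tmp.name)
print(data)  # b'cached'
```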

fiatmoney 1 hour ago 0 replies      
"Assumption 1. Using write() instead of implicitly writing to a memory map would lead to better performance."

I've seen this mentioned in the context of RocksDB; but contradicted by e.g. SQLite. The case for mmap has always been that one avoids the overhead of a system call & some double-copying, and in either case it just dirties the page cache and is only "really" written in periodic flushes (assuming it's not writing via direct IO). Can someone explain what the bottleneck is on the mmap side and why write() might be faster?

dantiberian 7 minutes ago 0 replies      
I'd be curious to know why they didn't just use one of the existing cache algorithms from the literature. They talk about it being close to them, but not why they chose to go their own way. I suspect it was because their algorithm had better mechanical sympathy. There's a number of good choices at http://en.wikipedia.org/wiki/Cache_algorithms#Examples, theirs sounds closest to ARC.
skrause 37 minutes ago 1 reply      
Why does the site of a web site acceleration product take almost 10 seconds to load?
jallmann 20 minutes ago 0 replies      
Regarding three tier caching between RAM/HDD/SDD: isn't this exactly what ZFS L2ARC is supposed to do? Relying on that seems closer to the Varnish philosophy of leaving as much to the underlying system as possible. Or would that encounter the same bottlenecks they are trying to solve right now? And if so -- how?
lifeisstillgood 28 minutes ago 0 replies      
I am reminded of the slow programming article a while back. How rare is it for companies to say "take a year, see if it works"?

Really good performance needs doing things differently, not the same thing faster. Yet most organisations don't want to try something different or give the space to try it.

Quantum Attack on Public-Key Algorithm
58 points by jonbaer  8 hours ago   13 comments top 3
tptacek 17 minutes ago 0 replies      
The subtext of Schneier's post is that a lattice encryption scheme was found vulnerable to a QC algorithm, which is meaningful because lattice encryption schemes are seen as a promising post-quantum crypto scheme --- that is, a scheme that would, unlike RSA (IFP), DH (DLP) and ECC (ECDLP), remain secure even if it becomes feasible to deploy large-scale quantum computers.

Two things worth knowing:

* There are multiple hard lattice problems in number theory (the paper refers to one of them in the conclusion). And other lattice schemes have been found vulnerable to non-quantum attacks.

* There are multiple hard problems not involving integer lattices that are believed to be hard for quantum computers. McEliece, for instance, is another well-known candidate for post-quantum public key crypto, and it's based on linear codes, not lattices. Lamport signatures[1] realize public key crypto purely from hash functions (which aren't hugely weakened by QC.) A good intro to these issues:


Nick Weaver's "what if all trapdoors are vulnerable to QC" comment seems a little premature.

[1]: http://en.wikipedia.org/wiki/Lamport_signature
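For the curious, the structure of a Lamport signature is simple enough to sketch. The toy below is an illustration only: it uses std::hash as a stand-in for a real cryptographic hash like SHA-256 (std::hash offers no security whatsoever), and it signs a 16-bit message rather than a 256-bit digest.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <random>
#include <vector>

// Toy 16-bit Lamport scheme. std::hash is NOT cryptographic; it stands in
// for SHA-256 purely to show the structure of the scheme.
constexpr int BITS = 16;

using Secret = std::array<std::array<uint64_t, 2>, BITS>; // one random pair per bit
using Public = std::array<std::array<std::size_t, 2>, BITS>; // hash of each secret value

inline std::size_t h(uint64_t v) { return std::hash<uint64_t>{}(v); }

Secret keygen_secret(uint64_t seed) {
    std::mt19937_64 rng(seed);
    Secret sk;
    for (auto& pair : sk) { pair[0] = rng(); pair[1] = rng(); }
    return sk;
}

Public derive_public(const Secret& sk) {
    Public pk;
    for (int i = 0; i < BITS; ++i) { pk[i][0] = h(sk[i][0]); pk[i][1] = h(sk[i][1]); }
    return pk;
}

// Signing reveals one of the two secret values per message bit.
std::vector<uint64_t> sign(const Secret& sk, uint16_t msg) {
    std::vector<uint64_t> sig;
    for (int i = 0; i < BITS; ++i) sig.push_back(sk[i][(msg >> i) & 1]);
    return sig;
}

// Verification only needs hashing, never the secret key.
bool verify(const Public& pk, uint16_t msg, const std::vector<uint64_t>& sig) {
    for (int i = 0; i < BITS; ++i)
        if (h(sig[i]) != pk[i][(msg >> i) & 1]) return false;
    return true;
}
```

Note the one-time property: each signature reveals half of the secret key, so a key pair must never be used to sign two different messages.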

jamoes 2 hours ago 1 reply      
Lattice-based public key schemes are designed to be quantum resistant. So this is a troubling development, because it might indicate that other lattice-based public key schemes are vulnerable to similar attacks. It might be very difficult or even impossible to create quantum-proof public key schemes.

Fortunately, though, a quantum-proof signature scheme (meaning no encryption or decryption, just signing) has already been developed: Lamport signatures. These rely on the security of hash algorithms (such as SHA256), which are not weakened by the existence of quantum computing.

zkhalique 2 hours ago 3 replies      
Wait, so can quantum computers now crack public key cryptography, or not?
Interview with Laura Poitras
170 points by hotgoldminer  8 hours ago   80 comments top 7
oconnor0 6 hours ago 2 replies      
"Thats the stuff that drives me crazythis response of more violence and more war to make us all safer actually creates more violence." - I hope that one day we realize this is true.
slg 5 hours ago 3 replies      
Poitras says "the reporting we've done has all been filtering what's in the public interest versus what's operational," yet in the same interview reveals "the communication flow for the drone system all goes through Ramstein, so it's part of the nerve center. The controls are elsewhere, but it all runs through fiber optic cables that go in and out of Ramstein." How is it in the public interest to know that all drone communication flows through a specific base, especially considering how valuable that fact might be to someone who would want to ground our drone fleet?

That is my only problem with Snowden and how these things are being reported. It started out with valid whistleblowing, but plenty of real operational national secrets are being revealed in the process. If Snowden had simply stuck to the NSA's overreach in domestic spying, it would be much harder for politicians and the like to call him a traitor and marginalize these issues.

konradb 7 hours ago 3 replies      
I found it odd when she said

> I was destroying the physical media, because you can't encrypt SD cards.

Does she mean they don't come with built-in encryption? I'm curious as to why you couldn't just save an encrypted file or encrypted container to an SD card.

edit: aaah, camera. I got the wrong end of the stick. Thanks!

cryoshon 5 hours ago 2 replies      
Mediocre interview, I think. Nothing really groundbreaking asked or answered with depth.

That said, I agree with her when she says that these two decades of war and terror we've inflicted on the world will be seen as dark times.

I say two decades because we've already been through 13 years, and only now are starting to show the faintest hints of slowing down the rampage abroad and the totalitarianism at home.

abandonliberty 5 hours ago 1 reply      
Does Snowden's legal fund actually have under $9,000? https://fundrazr.com/campaigns/6mzUd

Seems legitimate, but that's crazy to me. Maybe they don't count everything?

jordigh 7 hours ago 1 reply      
Can we fix the URL without the Google redirect? It should go to:


nihaody 4 hours ago 3 replies      
"You can't encrypt SD cards."

Right there she throws any credibility that she has any idea what she's talking about regarding encryption out the window.

Academia as an anxiety machine
79 points by johndcook  2 hours ago   22 comments top 10
alevskaya 44 minutes ago 0 replies      
The worst consequence of this insecure, anxious, hypercompetitive culture is the near-impossibility of sincere collaboration. I speak from my experience as a very successful grad-student and postdoc in synthetic biology and neuroscience. Unlike equity in a startup, the kudos credit for publishing a high-impact paper is really only split between the first author and the lab PI, removing any incentive for trainees and labs to work together on larger projects. Of course, a great deal of noise is made in grants and PR about academic collaborations, but they are always paper affairs of convenience in securing funding, never truly incentivized joint missions, at least not in the biological sciences. I can't count the number of attempted collaborations that I've seen collapse in heated acrimony. As a result, most academic science is a very lonely affair, and the ambition of what can be achieved today in experimental biology is completely constrained by what one poorly-trained young researcher can do in a few years of hard labor with only a modicum of outside help and advice.
chatmasta 1 hour ago 3 replies      
My girlfriend is getting her PhD. We are planning a vacation right now. She works in a lab as a researcher, which is not a "real job" since she's getting her PhD. There are no specific regulations for vacation time, or anything for that matter. However, she's unsure how long she can take for vacation, and unsure whether she should ask her P.I. This is an example of the irony of academia.

On one hand, the lack of clear policies and regulations governing academia is one of its greatest advantages, because it gives your mind space to explore, and expands creative freedom. If you want to work from 5pm - 3am, nobody will stop you. Ultimately you work for your own advancement, set your own goals, and plan your own schedule.

On the other hand, when other people are involved, especially those with "seniority" like P.I.'s and advisors, it's no longer obvious what decisions you should make. You need to consider more than just yourself, but the lack of regulation creates ambiguity in the "politics" of academia. How do you balance the expectations of those with some power over you, with the expectations you have of yourself? I imagine this question is a source of pressure for many academics, especially those who are extrinsically motivated.

Balgair 1 hour ago 1 reply      
How to get the best out of people is a millennia-old question. The Egyptians used whips and slaves; lately we have been using money and propaganda; but everyone uses something that may or may not be the most optimal method. The issue is that people are individuals and respond differently each time you ask them. My condolences on the loss of life (I too have felt like the world was asking far too much of me and I had no way out), but suicide is rarely the answer. Quitting your job, going to live in Montana for a while, having far too many beers, concluding that everyone around you is an asshole -- those are good options. And yes, I know it is real hard to climb out of those dark mental holes and realize this, but damnit you have to try.

If there were two things I wish the whole world could know, truly know deep down, they would be that you personally are worth all the stars in the heavens, all the grain in the fields, all the water in the oceans, and that everyone else is worth exactly the same amount; that we are all priceless.

jsnk 1 hour ago 1 reply      
Going into academia seems like the dumbest thing that only the smartest people do.

Jump into a job market where you compete against the bright and ambitious individuals who will do everything to out-work you to fight for a small prize pool.

The ending will most likely not be a happy one.

mathattack 4 minutes ago 0 replies      
"One of the reasons academic infighting is so vicious is that the stakes are so small" - Kissinger
freshhawk 1 hour ago 1 reply      
"There is no trace of evidence that you can get the best out of people at high-level tasks through pressure and competition. The opposite is true. Worried people get dumber. They may be faster at carrying rocks but they do not get smarter."

That last sentence is a great turn of phrase.

davidmerriman 1 hour ago 0 replies      
Life is an 'anxiety machine.'

Claiming academia killed this specific man is in extremely bad taste. And if we're speaking in generalities, I'd need to see evidence that academic positions correlate with higher rates of suicide.

mdcpepper 1 hour ago 1 reply      
I didn't stay in formal education beyond high school so I can't really comment, but from that article:

"Unfortunately, some of his colleagues felt that he did not secure sufficiently large research grants. So he was to be fired."

However, Imperial College's statement on his passing claims otherwise:

"Contrary to claims appearing on the internet, Professor Grimms work was not under formal review nor had he been given any notice of dismissal."


eli_gottlieb 15 minutes ago 0 replies      
> This is not, I shouldn't have to say, how academia works. Peter Higgs, of Higgs Boson fame, said that there was 'no Eureka moment' to his work, and he only has 4 papers listed on Google Scholar: but what papers! Science rarely has a Eureka moment: it's rather a series of careful, thoughtful developments of work done by one's forebears and peers. A management which demands a Eureka a day is one which doesn't just not 'get' academia, it's a management which contradicts the academic method and it's one which has forgotten that it's meant to serve the needs of science, the arts, students and researchers, not the insatiable maw of attention seeking 'Leaders' (that's the word they use now) and the PR office. It's also a management that kills.

slow applause

Chris Hughes Purges The New Republic
91 points by danielweber  6 hours ago   44 comments top 11
ColinCera 3 hours ago 3 replies      
It's a good thing Chris Hughes likes to break things, because he's irreparably broken The New Republic.

The people who read TNR (myself included) do so specifically because it's been a bastion of traditional journalism, and moreover we read it for specific writers & contributing editors, and since they've all resigned now, TNR is dead.

Outside of a very small sliver of the population, nobody's ever even heard of The New Republic -- it's not a brand name that Hughes can gut and remodel. The New Republic is not a brand that anyone cares about; we cared about the content, the writers and the editors.

I can see no way for TNR to carry on as a viable operation. As a TNR reader for more than 30 years, it makes me rather sad.

This is all so stupid and sad.

pastProlog 2 hours ago 1 reply      
If they resigned, how were they "purged"?

The real New Republic purge happened in 1974. Goodbye articles on Ralph Nader and consumer auto safety, hello articles supporting Contra attacks on the elected left-wing Nicaraguan government etc.

This magazine has fallen to 50,000 paid subscribers. The New Republic had become a sinecure for self-important blowhards with no name recognition, and Hughes is clearing the decks. I mean, people like Michael Moore write for The Nation, Mother Jones broke the Romney 47% comments, what has the New Republic been known for in past decades other than Stephen Glass scandals or Bell Curve black/white genetic intelligence articles?

Hughes is reversing a decline which began 40 years ago. These bloviating fossils are not good for much other than biting the hand which fed their sinecures, and getting their sour grapes into rival publications.

benbreen 2 hours ago 1 reply      
"According to informed sources, Hughes and Vidra didnt bother to inform Foer that he was out of a job. Instead, the editor was placed in the humiliating position of having to phone Hughes to get confirmation after Gawker.com posted an item at 2:35 p.m. reporting the rumor that Bloomberg Media editor Gabriel Snyder, himself a onetime Gawker editor, had been hired as Foers replacement."

Yikes. Reading about your replacement's hire on Gawker has got to be one of the worst ways to find out you're fired. I'm not surprised to see a shakeup at TNR, but to see it handled so badly is shocking.

ajsharp 2 hours ago 1 reply      
This story is almost exactly the plot of The Newsroom (HBO) episode from two weeks ago, where a mega-douchey tech entrepreneur (played brilliantly by BJ Novak) purchases ACN with the intent of turning the network into a crowdsourced, citizen-journalism "digital media company." The real-life story is hilariously similar.
augustocallejas 5 hours ago 1 reply      
> The friction escalated with the arrival of Vidra, who is said to have complained to Foer that the magazine was boring and that he couldn't bring himself to read past the first 500 words of an article. According to witnesses, Vidra did little to hide his disrespect for TNR's tradition of long-form storytelling and rigorous, if occasionally dense, intellectual and political analysis -- to say nothing of his lack of interest in the magazine's distinguished history -- at an all-hands meeting in early October.

Why run a magazine if you're not even interested in the type of journalism it produces?

rossjudson 13 minutes ago 0 replies      
Time for these editors to get together and build "The Old New Republic". Or maybe that's "The New New Republic".
mturmon 2 hours ago 1 reply      
For much more (but worth it, if you're interested), see this profile of Hughes, from Dec. 2012, in NY Magazine: http://nymag.com/news/features/chris-hughes-2012-12/ --

It basically predicts Wieseltier's ouster:

"Given that Hughess interests are at least as literary as they are political, I found that many of the people I spoke with suspected the real changes at the magazine would come at the expense of Wieseltierwho had his own charmed life as the oldest young man in the room. As the editors came and went at Peretzs favor, Wieseltier ruled a sort of archipelago of learnedness in the magazines back pageshaunted by its own testy thoroughgoing-ness, dense with type and argument, and deliberately off-putting. [...] His culture section, which often made up nearly half of each issue, was supposed to have nothing to do with the rest of the magazine at all.

"But Hughes wants a single, readable magazinewith photographs!not two stapled together, and this will entail treating Wieseltier, as one person familiar with the magazine put it, as an employee for the first time.


Looks like he got treated as an employee all right.

ilamont 2 hours ago 0 replies      
Staff and readers have legitimate concerns about the new direction of TNR. However, I wonder if the outrage would be the same if the organization were to stay in Washington.

For many people who have been at an organization for a long time, being forced to relocate is a very unattractive proposition, especially if spouses' careers and children's schools will be impacted. Moving to NYC means downgrading living space and potentially increasing the commute as well. If moving to NYC is a non-starter for these writers and editors, that changes the dynamics of what's being portrayed as an old guard/new boss strategy split.

Another question: Are there other employment opportunities for the staff in D.C.? The Post has very deep pockets now, along with an owner who wants to make a national publication. Politico is also doing well. Who knows, maybe some other local or national media startup (or established player) would love to build out their masthead

snowwrestler 5 hours ago 3 replies      
The Daily Beast is engaging in some heavy-duty schadenfreude here.

The New Republic was sold for a reason: its business model is failing. While Hughes might be making a mess of this transition (perhaps no surprise for a 30-year-old who lucked into getting rich), some kind of hard transition was going to happen in any case. It's not like all these folks were going to have long happy careers at a niche long-form print magazine if only Chris Hughes had never come along.

mcantelon 2 hours ago 1 reply      
This seems part of a trend, by East coast pro-state leftists, of growing tabloid-y media to make their ideology more accessible.
A Data Analyst's Blog Is Transforming How New Yorkers See Their City
34 points by muraiki  9 hours ago   7 comments top 3
paulannesley 44 minutes ago 0 replies      
> The skinny data nerd

Looks like a normally proportioned guy to me.

rcarrigan87 1 hour ago 1 reply      
I'm all for open-sourcing public data.

However, the general public has a very crude understanding of causation vs. correlation (I certainly struggle with it in areas outside of my expertise). I'm not totally sure how the conclusion that the MTA was over-charging people was drawn, but to make that leap from the data is questionable.

I would be impressed if government workers thought that hard about fare denominations. The reality is, Joe Blow government worker pulled the fare deposit amounts out of his ass. This isn't big oil or Wall Street...

Now you've got citizens up in arms about what was probably just standard gov't laziness. Maybe instead of the incredulous spin, the headline I'd like to see: "Blogger working with MTA, helps riders save $50M."

I have to say though, this guy is the man for popularizing what will no doubt become a major trend and pushing other local governments to open up their data.

TLDR; Teach more stats to the children!

edit: typo

th0ma5 1 hour ago 1 reply      
You all should be starting something like this in your own town!
Before the Garden Gnome, the Ornamental Hermit
15 points by pepys  5 hours ago   1 comment top
pavlov 39 minutes ago 0 replies      
What a terrible job to have: seven years of living in a shack, not talking to anyone, not allowed to wash yourself or cut your fingernails.

Fortunately it appears not all hermit employers were so draconian. This sounds like an interesting form of entertainment:

"You pull a bell, and gain admittance. The hermit is generally in a sitting posture, with a table before him, on which is a skull, the emblem of mortality, an hour-glass, a book and a pair of spectacles. The venerable bare-footed Father, whose name is Francis (if awake) always rises up at the approach of strangers. He seems about 90 years of age, yet has all his sense to admiration. He is tolerably conversant, and far from being unpolite."

I'm now hoping for a smartphone app called Hermit, which does exactly what is described above. The "tolerably conversant" 90-year-old sounds like a fun kind of AI to write. Compared to the stuff I do on a phone in idle moments, chatting with a simulated hermit while contemplating a memento mori desktop seems positively useful.

Intro to statistical data analysis in Python frequentist and Bayesian methods
119 points by sebg  8 hours ago   11 comments top 7
icki 4 hours ago 0 replies      
I also recommend Probabilistic Programming & Bayesian Methods for Hackers: http://nbviewer.ipython.org/github/minrk/Probabilistic-Progr...
niels_olson 2 hours ago 0 replies      
The most helpful thing I ever learned about Bayesian statistics came from Kant: all of a sudden the "prior" and "posterior" were easy to remember. In his introduction, he discusses the origin of synthetic knowledge and sets about distinguishing between a priori and a posteriori knowledge: that which one had before, and that which one has after. Of course we all know about "a priori", but I had never associated "a posteriori" with the same line of thinking.


kylebgorman 1 hour ago 1 reply      
Consider using good Python style (like consistent use of whitespace) when trying to teach people to use Python (in any fashion).
sebastianavina 2 hours ago 1 reply      
I love how every day there is a new post about R and statistical data analysis. It's really a hot topic. I hope somebody uploads a course using measure theory, for those of us more interested in the abstract probability concepts.
syedahmed 7 hours ago 1 reply      
Sweet! Bookmarked. Thanks for sharing. I'm just getting started with Python and this will indeed serve as a great resource once I start delving into Data Science stuff.
JHonaker 7 hours ago 0 replies      
I use R for my statistical programming mostly, but I use Python for a lot of other things. It's nice to have this as a reference when I don't feel like moving back to R.
abhishekkr541 7 hours ago 1 reply      
Awesome to have this page. I was wondering about this only a few days back, whether I could find a website where I could learn Statistical Data Analysis in Python. :)
Antha A high-level language for biology
43 points by wspeirs  9 hours ago   4 comments top 2
samuell 4 hours ago 1 reply      
Interestingly, it uses GoFlow [0] (a Flow-Based Programming [1] library by Vladimir Sibiroff [2]) under the hood, and has a whole section on FBP at [3].

I was in fact playing with the idea of using it for bioinformatics processing about a year ago (see [4] for components and [5] for an example program) and thought it was a great idea :)


[0] https://github.com/trustmaster/goflow

[1] http://www.jpaulmorrison.com/fbp

[2] http://twitter.com/sibiroff

[3] http://www.antha-lang.org/docs/concepts/flow-based-programmi...

[4] https://github.com/samuell/blow

[5] https://gist.github.com/samuell/6164115

PT_2014 56 minutes ago 0 replies      
It'll be interesting to see if there are plans to integrate with BioKepler (http://www.biokepler.org/) or similar existing bioinformatics and life-science workflow systems.
Std::string half of all allocations in the Chrome browser process
115 points by mlrtime  9 hours ago   136 comments top 17
userbinator 3 hours ago 1 reply      
25000 (!!) allocations are made for every keystroke in the Omnibox.

The Omnibox is no doubt far more complex than a simple text box, since entering characters into it can invoke things like network connections (for search suggestions), but 25k allocs is still a bit on the excessive side.

Strings are an interesting case in that they are generally of indeterminate (and variable) length, which makes them somewhat difficult to accommodate in computer memory, which is finite and allocated in fixed-length pieces. Abstractions like std::string have been created to make it simpler and easier to perform operations like appending, resizing, copying, and concatenating, but I think this is part of the problem: by making these operations so easy and simple for the programmer, they're more inclined to overuse them instead of asking questions like "do I really need to create a copy just to modify one character? do I really need to append to this string? how long can it be?" Essentially, the abstraction encourages ignorance of the real nature of the actual operations, leading to more inefficient code. It only helps the programmer to perform these tedious operations more easily, and doesn't help at all with the decision of whether such tedious operations are needed at all, which I think is more important; the first question when designing shouldn't be "what abstractions should I use to do X?", but "do I really need to do X, or is there a simpler way that doesn't need it?" The most efficient way to do something is to not do it at all.

Contrast this with a language like C, in which string operations are (unless the programmer writes or uses a library) far more explicit, and the programmer can be more aware of what his/her code is actually doing. That's why I believe every programmer who has to deal with strings should have at one point been exposed to implementing a resizable string buffer and/or length-delimited string library, to see the real nature of the problem (including how to do length management correctly.) Without this basic, low-level understanding of how to use memory, the advantages of all the other fancy string abstractions won't make much sense either.
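As a rough idea of the exercise described above, here is a minimal resizable string buffer sketch (the StrBuf class and its methods are hypothetical; real code would add copy/move support and error handling):

```cpp
#include <cstddef>
#include <cstring>

// A bare-bones resizable string buffer: manual capacity doubling and
// explicit length tracking, the low-level mechanics std::string hides.
class StrBuf {
    char* data_;
    std::size_t len_ = 0, cap_;
public:
    explicit StrBuf(std::size_t cap = 16) : data_(new char[cap]), cap_(cap) { data_[0] = '\0'; }
    ~StrBuf() { delete[] data_; }
    StrBuf(const StrBuf&) = delete;            // keep the sketch simple: no copies
    StrBuf& operator=(const StrBuf&) = delete;
    void append(const char* s) {
        std::size_t n = std::strlen(s);
        if (len_ + n + 1 > cap_) {             // grow geometrically, not per byte
            while (cap_ < len_ + n + 1) cap_ *= 2;
            char* bigger = new char[cap_];
            std::memcpy(bigger, data_, len_ + 1);
            delete[] data_;
            data_ = bigger;
        }
        std::memcpy(data_ + len_, s, n + 1);   // copy including the NUL
        len_ += n;
    }
    const char* c_str() const { return data_; }
    std::size_t size() const { return len_; }
};
```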

humanrebar 7 hours ago 3 replies      
The problem with std::string is that it's named wrong. It should be called std::string_buffer, because that is what it is. Its performance characteristics are closer to a std::vector than a std::array (now available since C++11).

Many projects cannot copy around std::vector<char> in good conscience. They really want a copy-on-write string, an immutable string, a rope, a reference-counted string, or an always-in-place string. Or some combination of the above depending on the circumstance.

The problem is that std::string is not a good type to use as a parameter for various reasons. In addition to its aggressive allocation behavior, it's also fairly inflexible. What are the alternatives?

1. boost::string_ref - available now, so use it

2. std::string_view - proposed for standardization (it ultimately shipped in C++17) and works roughly like boost::string_ref

3. pass around pairs of iterators instead of single objects

3) is actually the most flexible, though it requires different kinds of overhead. The most obvious way would be to template all your string-accepting functions on two parameters: the type of your begin iterator and the type of your end iterator. But the benefit is that you can pass around any of the above to your heart's content, plus more, like elements in tries.

std::string still has an important place, but it should generally be used as a private member variable, not as something you require in your interface. Pretty much the same thing goes for char* unless you are implementing a C ABI (plus a size, please). Even then, you can immediately convert to/from a boost::string_ref and still have yourself a self-contained reference to a bounded character sequence.
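To make the interface point concrete, here is a hedged sketch assuming C++17's std::string_view (boost::string_ref plays the same role in 2014-era code): callers holding a char*, a std::string, or a substring of either can all pass in without a copy.

```cpp
#include <cstddef>
#include <string>
#include <string_view>

// Taking string_view instead of const std::string& means callers with a
// char*, a std::string, or a slice of a larger buffer all avoid allocating.
std::size_t count_spaces(std::string_view s) {
    std::size_t n = 0;
    for (char c : s)
        if (c == ' ') ++n;
    return n;
}

// substr() on a string_view is O(1) and allocation-free: it just adjusts
// a pointer and a length, unlike std::string::substr, which copies.
std::string_view first_word(std::string_view s) {
    return s.substr(0, s.find(' '));
}
```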

ctur 8 hours ago 3 replies      
folly::fbstring, a drop-in replacement for std::string, is part of the folly library that we (Facebook) open sourced a while back. It allocates small strings in-line and larger strings on the heap and has optimizations for medium and large strings, too. It's proven quite effective for us, particularly when used with jemalloc, which it conspires with for more optimal memory management. We use it as std::string for our C++ applications and libraries, completely replacing std::string both for our own code and third-party code.


In addition, it is worth noting folly::StringPiece (from folly/Range.h), which is generally a better interface for working with in-memory ranges of bytes. Hardly a new idea (it's inspired by similar libraries, such as in re2), but it permeates the APIs of many of our larger C++ systems, and folly itself, and often avoids passing std::string objects around at all.

Finally, there is also folly::fbvector, which offers similar improvements over std::vector.

ryandrake 8 hours ago 1 reply      
If you dig deeper and actually look at the source diffs, you will see that this is not about std::string being "bad" (it's not), but it's about problems with how they're using std::string. Most of the trouble was constantly converting back and forth between std::string and const char*, which needlessly produces temporary allocations. Simply moving to passing everything around as const references should help enormously with memory churn.

EDIT: Spelling :)
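A small illustration of the churn described above (the function names here are hypothetical): a parameter of type std::string forces a temporary allocation for every const char* caller, while const std::string& reuses the caller's existing object.

```cpp
#include <cstddef>
#include <string>

// Pass-by-value: every const char* caller pays for a fresh std::string.
std::size_t lookup_by_value(std::string key) { return key.size(); }

// Pass-by-const-ref: no copy when the caller already holds a std::string.
std::size_t lookup_by_ref(const std::string& key) { return key.size(); }

std::size_t churn_demo() {
    std::string url = "https://example.com";
    std::size_t a = lookup_by_value(url.c_str()); // builds a std::string temporary from the char*
    std::size_t b = lookup_by_ref(url);           // no temporary, no allocation
    return a + b;
}
```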

acqq 8 hours ago 4 replies      
I'm old enough that I actually programmed in Turbo Pascal 3 (around 1985), which of course produced quite fast code even for the processor speeds of the time (4 MHz processors were enough for everybody -- not really, but that's what we had!). That Turbo Pascal had strings of limited size, but they were able to use the stack, not the heap. I still don't understand why the library string in C++ can't use the stack instead of the heap, at least for small strings. Most of the allocations detected in the post are short-lived ones, and I'm also quite sure that most of the strings aren't too big, which means that using strings in the Turbo Pascal style (on the stack, for local variables) would remove the need for most of the allocations.

I guess that will maybe come in the C++ standard around 2020, if the people who need it read this and work hard. Yay for the march of progress.
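A rough sketch of the Turbo Pascal idea in C++: a fixed-capacity string that lives entirely on the stack and never touches the heap (hypothetical StackString type, not a drop-in std::string replacement):

```cpp
#include <cstddef>
#include <cstring>
#include <stdexcept>

// Toy Turbo-Pascal-style string: fixed capacity N, stored inline, so a
// local variable of this type is pure stack storage with zero allocations.
template <std::size_t N>
class StackString {
    char buf_[N + 1];
    std::size_t len_ = 0;
public:
    StackString() { buf_[0] = '\0'; }
    StackString(const char* s) {
        len_ = std::strlen(s);
        if (len_ > N) throw std::length_error("StackString overflow");
        std::memcpy(buf_, s, len_ + 1);
    }
    void append(const char* s) {
        std::size_t n = std::strlen(s);
        if (len_ + n > N) throw std::length_error("StackString overflow");
        std::memcpy(buf_ + len_, s, n + 1);
        len_ += n;
    }
    const char* c_str() const { return buf_; }
    std::size_t size() const { return len_; }
};
```

The trade-off is the one Turbo Pascal made: appends past the fixed capacity fail instead of reallocating.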

hesdeadjim 7 hours ago 0 replies      
It's always driven me nuts that the base string in the STL is a fully mutable container that must manage its own memory. I much prefer a string that is either immutable, or an interface that you can implement however you see fit: a view over a raw buffer, a reference to a globally shared string instance that may or may not be ref-counted, or, like the STL, a version that manages its own memory.

Besides boost::string_ref, there is also a more generic flyweight implementation requiring only an equality concept for its template parameter:


ceejayoz 7 hours ago 1 reply      
I sure as hell believe it. I'm on an older, slightly creaky Macbook Pro, and typing in Chrome's box is frequently a nightmare if I'm running something like a VM. Keystrokes lag tremendously.
ajuc 8 hours ago 3 replies      
I've worked on a project that used all of these: std::string, QString, OString, char*. All were required by a different library that we needed.

This is why a good string type should be in core language.

DigitalSea 6 hours ago 1 reply      
It all makes sense now. For some reason on my gaming PC which is pretty spec'd out in almost every way, Chrome will lag when typing into the Omnibox and requires me to close it completely and reopen it frequently. For a while I thought perhaps I had an issue with my CPU getting too hot, a bad plugin or faulty RAM, but this exact issue appears to be the cause of all of my problems. The only thing that seems to fix the issue temporarily is clearing out my browsing history every few weeks, otherwise the issue gets to the point where you can wait a couple of seconds for a word you have typed to appear.

Don't get me started on the performance of using Chrome inside of a VM, that is a whole other world of hurt right there.

ExpiredLink 8 hours ago 2 replies      
During the 10 years I used C++ I never saw a project that used std::string as its standard string. The implementation of std::string is not standardized. It may or may not have a 'small string optimization', which may not be an optimization at all. Instead of specifying a mundane immutable built-in string like Java's, the C++ standards committee decided to add even more 'advanced' features to an already overloaded language.
amaks 4 hours ago 0 replies      
There are many things in the standard C++ library which are named incorrectly (std::string), awkwardly (std::unordered_map), and implemented inefficiently from the perspective of modern CPUs (same std::unordered_map, which uses a linked list for the underlying hash table buckets). See the great CppCon 2014 talk about these issues: https://www.youtube.com/watch?v=fHNmRkzxHWs.
CyberDildonics 6 hours ago 1 reply      
I'm not even sure I do stuff like this in prototypes. My experience has been that using a matrix/arena/pool can speed up a program with inner-loop allocations by about 7x. I think the average PC can do about 10,000,000 heap allocations per second, but as far as I know heap allocation causes some degree of thread locking.

Don't many std::string implementations have small string optimizations? This is actually the first time I have ever heard of C++ strings being the bottleneck of an application (and it seems that is even still up for debate here).
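For reference, the arena/pool idea mentioned above can be sketched as a bump-pointer allocator (toy illustration: no per-object free, alignment handled crudely):

```cpp
#include <cstddef>
#include <vector>

// Minimal bump-pointer arena: one big allocation up front, then each
// allocation is just a pointer increment, and everything is "freed" at
// once with reset(). Not a production allocator.
class Arena {
    std::vector<unsigned char> buf_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes) : buf_(bytes) {}
    void* alloc(std::size_t n) {
        // round up so subsequent allocations stay aligned
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (used_ + n > buf_.size()) return nullptr; // arena exhausted
        void* p = buf_.data() + used_;
        used_ += n;
        return p;
    }
    void reset() { used_ = 0; } // release everything in O(1)
    std::size_t used() const { return used_; }
};
```

Allocation is a pointer bump and freeing the whole arena is a single reset, which is where the large speedups over per-object heap allocation come from.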

jamesu 6 hours ago 0 replies      
In a certain 3D game engine we use, the devs decided to refactor a lot of old code in the animation system, which used a hashed string type, to use their new all-in-one reference-counted String type instead.

Safe to say, this turned out to be a disaster for performance, since every time an animation had to be evaluated from a name (which we ended up doing a lot), a string had to be allocated on the heap.

Some people sadly underestimate how bad objects which rely on heap allocation can be in performance-critical code.
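The arena/pool idea mentioned upthread is small enough to sketch. This is a toy bump allocator (my own illustration, not any engine's actual code): one upfront allocation, a pointer bump per request, and an O(1) bulk free:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class Arena {
    std::vector<unsigned char> buf_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes) : buf_(bytes) {}

    // Hand out the next chunk, rounded up to max_align_t alignment.
    void* alloc(std::size_t n) {
        const std::size_t a = alignof(std::max_align_t);
        n = (n + a - 1) & ~(a - 1);
        assert(used_ + n <= buf_.size());  // toy: no growth on overflow
        void* p = buf_.data() + used_;
        used_ += n;
        return p;
    }

    // "Free" everything at once; ideal at a frame or request boundary.
    void reset() { used_ = 0; }
};
```

A game loop that resets the arena once per frame turns thousands of per-name heap allocations into one pointer bump each, which is where speedups of the kind quoted above come from.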

TazeTSchnitzel 8 hours ago 3 replies      
One thing I've learned from PHP's internals is that using reference-counted strings and copying on write is a fantastic idea. You can save an awful lot of memory and allocations, and simplify your code.
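A toy version of that idea in C++, for readers who haven't seen it: copies share one buffer, and a deep copy happens only on the first write. (Illustrative only; it ignores the thread-safety problems that led C++11 to effectively outlaw copy-on-write for std::string.)

```cpp
#include <memory>
#include <string>
#include <utility>

// Minimal copy-on-write string: copies share the underlying buffer until
// someone writes, at which point the writer detaches onto a private copy.
class CowString {
    std::shared_ptr<std::string> data_;
public:
    explicit CowString(std::string s)
        : data_(std::make_shared<std::string>(std::move(s))) {}

    const std::string& view() const { return *data_; }

    void append(const std::string& tail) {
        if (data_.use_count() > 1)  // shared: copy before mutating
            data_ = std::make_shared<std::string>(*data_);
        *data_ += tail;
    }
};
```

Copying a CowString is just a refcount bump; only the first append after a copy pays for a real allocation, which is the memory win being described.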
GregBuchholz 5 hours ago 0 replies      
Well, it is a good thing they wrote it in such a high-performance language then...
DanielBMarkham 7 hours ago 1 reply      
One of the recurring things I see in programming literature for the last 20 years or more is performance hits when using string. I've seen essays on this in at least four different programming languages.

You'd think it'd be simple, but it's not. A string is an allocated buffer of unknown final size, and since it represents some kind of meaning in a human language, and since human languages have indeterminate length for conveying any one concept, concatenation is extremely common.

This is actually one of those cases where I like C better, warts and all. Whenever you use a string, you should carefully think about what you're going to do with it, and if at all possible allocate all you need up front. Beats the heck out of taking an unexpected GC call somewhere later when you weren't expecting it.
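In C++ terms, "allocate all you need up front" is a reserve() before the concatenation loop. A hypothetical join helper to make the point:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Concatenate with a single upfront allocation instead of letting the
// buffer grow (and reallocate) piece by piece.
std::string join(const std::vector<std::string>& parts) {
    std::size_t total = 0;
    for (const auto& p : parts) total += p.size();

    std::string out;
    out.reserve(total);                     // one allocation, known size
    for (const auto& p : parts) out += p;   // stays within capacity
    return out;
}
```

Without the reserve(), a growth-doubling implementation still amortizes to O(n), but each reallocation is a heap round-trip of exactly the kind the comment warns about.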

vegancap 8 hours ago 4 replies      
I literally started programming in C++ this week and I figured the over-use of std::string couldn't be a good thing hehe
Zelkova Elm-style FRP for Clojure and ClojureScript
23 points by gaalze  5 hours ago   discuss
Chiral Key Found to Origin of Life
9 points by softdev12  6 hours ago   discuss
I Love Julia
86 points by tokai  4 hours ago   41 comments top 16
jjoonathan 3 hours ago 2 replies      
I love Julia too, but it currently has a bug which kills my desired workflow: the plotting library (Gadfly) takes 30 seconds to import.

Yes, there are a number of workarounds. You can use python or R for plotting, you can keep a single interactive session going and reload your source file in it, you can precompile Gadfly into Julia's global precompiled blob, etc, but the solutions take time and effort that significantly offset Julia's value proposition (for me, at least). If you haven't looked at Julia yet you might want to hold off for a while until they get it sorted out. It looks like it might happen in the next version, 0.4, which hopefully will support pre-compiled libraries.

I feel like a bit of a dick for complaining about hiccups in a beta version, but I really like Julia and I really don't want to see it go the Clojure route and simply accept several-second delays in oft-repeated tasks. Drake, I'm looking at you -- Make can build my entire C++ codebase in less time than it takes your "make-replacement" to launch! I love your features, but the stuck-in-molasses feeling I get when using your program was enough to scare me away.

dantiberian 15 minutes ago 0 replies      
There are a few concerns I have about Julia packaging, from this article and from reading the docs: http://julia.readthedocs.org/en/latest/manual/packages/

* It looks like packages are installed in a global namespace that is shared by all projects. This seems like it will get messy when you try to run older and newer projects.

* The default way to add packages is to just Pkg.add("package-name"), this makes reproducible builds difficult. This is especially an issue with a language used in scientific contexts where reproducibility is extremely important.

Are there solutions to these issues that I can't see? I'm aware that Julia is a young language so I don't expect them to solve everything at once.

eagle2001 9 minutes ago 0 replies      
It's still early days for Julia, and performance is uneven. I wouldn't use it for serious work unless 1) an expert in your field is already using it (e.g. Udell and Convex.jl) or 2) you carefully benchmark your key computations. In my case, I wrote C++ and Python benchmarks and stumbled on a performance problem that the Julia team knew about and plans to address.
overgard 3 hours ago 2 replies      
You know, one of the things that has annoyed me about Python isn't that it's slow, it's just that the BDFL seemed unconcerned about addressing performance. It is what it is, you can't fault them for having their priorities, but it bothered me. PyPy to an extent has done an excellent job of addressing it, but I'm really happy that there's a language like this that seems like it cares about making things fast without making it incredibly verbose -- I am impressed.
m0th87 2 hours ago 0 replies      
Multiple dispatch alone makes Julia lovable. I'm not a lisper, so I didn't really get the point until trying Julia. It's a beautiful way to solve the expression problem [1], and a nice alternative to pattern matching in functional languages.

1: http://c2.com/cgi/wiki?ExpressionProblem

doctoboggan 1 hour ago 0 replies      
Instead of Julia Studio, I would recommend Juno[0], the IDE based off of LightTable. While it still has some rough edges, it works very well.

[0]: http://junolab.org

curiouslearn 3 hours ago 0 replies      
The author of the article says that relative (to Python and R) paucity of the libraries should not stop one from using Julia. I completely second this for another reason, which is the awesome PyCall package, using which you can make use of any Python library.
noobermin 3 hours ago 0 replies      
I learned Julia in a week and implemented an EM test-particle integrator in a couple of hours. Julia is fantastic, easy, and fast.
kmike84 3 hours ago 0 replies      
> If you want to install a branch instead of master in a repo, you can do Pkg.checkout("name_of_package", branch="a_branch"). This kind of package management is much better than what is currently available for Python packaging.

Is it different from pip install git+...? See https://pip.pypa.io/en/latest/reference/pip_install.html#vcs...

tokai 4 hours ago 1 reply      
The most interesting thing from this, for those of us who know Julia, is that Juliabox is open again without an invite.
georgeg 3 hours ago 1 reply      
Does the Julia community have typesetting tools like knitr + LaTeX? I.e., can I embed Julia code in a LaTeX document the same way I can with R code? That is one feature that could convert me very quickly.
aidenn0 4 hours ago 2 replies      
I do wonder why Julia caught on where lush died. Any thoughts?
juxtaposicion 3 hours ago 0 replies      
Is Julia being used in production at StitchFix?
no_future 2 hours ago 3 replies      
How can Julia be "on par" with C when it is itself implemented in C?
pearjuice 3 hours ago 3 replies      
I hate Julia. It is not about the language but rather about the community surrounding it. Take for example this post. I have nothing against the author, but everyone using Julia pretends they have leveled the playing field. That they need more than C but don't want to waste their time with Assembly because they are too superior. It is a bit like the Arch community, where superiority is claimed and all alternatives are dumbed down because they are not at the cool table.

Out of principle I don't engage with products like these because you either become part of them or will never get truly involved.

no_future 2 hours ago 1 reply      
Oh wow, another person evangelizing about X obscure language from inside their pseudoacademic ivory tower yet providing no examples of anything useful they've done with it, or anyone has done with it for that matter.

Every time I see this BS I think of this:


It's fine to say something more reasonable like "I have high hopes for this very early stage language in the future", but this kind of fanfare is the reason why stuff like Java got so big.

Grothendieck and Creativity [pdf]
9 points by poindontcare  2 hours ago   discuss
Chaos Computer Club Website Blocked by UK ISPs
80 points by dubbel  3 hours ago   22 comments top 7
teamhappy 14 minutes ago 0 replies      
Just for the record: you can still access the website via its IP address.
keithpeter 1 hour ago 0 replies      
Both https://www.ccc.de/ and http://www.ccc.de/ and all pages below those reachable fine on EE consumer adsl over copper here in sunny Birmingham UK. I happen to be using Epiphany web browser on an alpha install of gNewSense 4.

The Open Media gallery in Birmingham (just under part of New Street Station) has a joint exhibition by a local artist and the CCC.


dz0ny 2 hours ago 2 replies      
I am getting NET::ERR_CERT_AUTHORITY_INVALID; also here: https://www.ssllabs.com/ssltest/analyze.html?d=ccc.de
lectrick 2 hours ago 2 replies      
Google Chrome strongly advises you not to continue.

If you click the "Advanced" link, you can, though.

Who decided that a centralized entity could be the authority for these things, anyway, instead of a Web of Trust?

And also, should the books "1984" and "Fahrenheit 451" be required reading in British schools?

petecooper 2 hours ago 1 reply      
I am absolutely against this blanket form of ISP-level censorship, but I have to wonder if the intended clientele of CCC (i.e., technically-minded, curious, etc.) would be very near the top of the list of people who could bypass this block with trivial effort. Sort of self-defeating, really.
TazeTSchnitzel 2 hours ago 2 replies      
Worth noting that said filters are optional, you can turn them off.

Though the process to turn them off might resemble this[0].

[0] http://www.departmentofdirty.co.uk/

cirosantilli 1 hour ago 1 reply      
What was the exact reason for banning them?
Freelancing: How to talk yourself into charging more
311 points by andy_adams  9 hours ago   155 comments top 20
tptacek 4 hours ago 6 replies      
Be very careful basing your rate on your expected salary. It's a trap that will hurt your returns.

The most important lesson I think you can learn in consulting is that businesses are paying for more than just lines of code; that a lot of things that you don't intuitively think have any value actually have enormous value. The best way I've managed to describe it is in this comment:


Each of those bullets is something that potentially adds dollar value to a project.

Some freelance projects can be valued against the cost of a full-time employee. But ask yourself: "Does the client I'm working for have the capability to hire someone full-time for the service I'm providing? Would they know how? Would they be able to attract the right talent? And: would it make any business sense for them to do that?" If the answer is "no", then please, think of it this way: no more than 50% of the value you're providing comes from actual code; the remainder is business value you create by being available to solve a problem for your client in the precise scheduling and format they need.

When you grok this, you see why your rate has nothing at all to do with full-time salary for a comparable tech position. No full-timer can do what you're doing. By definition. Because they're only available full-time.

Also: please, for the love of all that's holy, don't bill hourly. You don't have an hourly rate. You have a day rate, for projects so small that you don't charge by the week.

bdunn 6 hours ago 4 replies      
Justifying your costs by basing what you charge against the costs of a similar employee is a good start, and in my experience it's how most of us start (the old "divide by 2000" trick). I like that Andy makes note that you need to include overhead (prospecting, writing proposals, ...), which a lot of new freelancers tend to miss.

However, I think the big takeaway is realizing that the formula presented establishes a minimum threshold. It shouldn't be used to figure out what you charge. My rates are way north of what a developer/marketer would command on the open market, but I don't contextualize my costs against the equivalent costs of an employee; instead, I anchor my costs against the upside that a successful delivery of a project would yield for my client.

The single best way to substantially make more money consulting is to stop selling commodity services (web design, Ruby programming, whatever), and to truly consult. Provide your clients with a way to bridge the problem they face with the solution they desire, and charge accordingly.
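The minimum-threshold arithmetic both comments describe is easy to write down. Every input here is an assumption the freelancer supplies; the example numbers in the note below are mine, not the article's:

```cpp
// Minimum hourly rate needed to match a target salary, once overhead and
// non-billable time are accounted for. All inputs are the freelancer's
// own estimates.
double min_hourly_rate(double target_salary,
                       double overhead_multiplier,      // benefits, taxes, gear
                       double working_weeks,            // 52 minus time off
                       double billable_hours_per_week)  // after prospecting etc.
{
    return target_salary * overhead_multiplier
         / (working_weeks * billable_hours_per_week);
}
```

For example, min_hourly_rate(90000, 1.25, 46, 25) works out to roughly $98/hour; per the comments above, that is a floor to stay above, not a price to quote.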

reubenswartz 4 hours ago 0 replies      
The most you can charge is the perceived differential value you provide. So, if the client perceives that you will make them an extra $1M compared to the alternatives, you can charge, say $300K. If you have to do 100 hours of work to do that, don't say that your rate is $3,000/hr. Just say that the $1M costs $300K.

You don't always get a clearcut case like that, but the key, as @bdunn mentioned, is that you need to help the customer in a way that other commodity providers can't. Writing Ruby code or working with Photoshop is a commodity. The market sets the rate for those services. But if you can use those skills to reduce costs, increase web conversions, etc, you can provide increased differential value.

I've helped a lot of small business owners work through these scenarios, and then come up with numbers that "don't feel right." They don't want to charge what they're worth. Then we have a conversation like this:

"I agree with you that it's worth it, but I just don't feel like I can charge that much."

"Why not?"

"It just doesn't feel right?"

"You believe in helping your clients, right?"

"Of course."

"Would charging the higher rate help you provide more resources and increase the customer's return?"


"So why are you so hung up on your own feelings that you're hurting your clients by not charging them properly?"

OK, it's a bit silly, but it's a real conversation I've had many times. When they are convinced in their head but not in their gut, I ask them to literally practice talking about price in front of a mirror. Do it so that it sounds confident, comfortable, and unapologetic. Your customer is getting a great deal! Of course they should want to do this. Steve Jobs never apologized for the fact that a Mac had a higher price than many PCs; it was just a great deal if you wanted to do certain things.

Feedback is that practicing in front of a mirror really helps. ;-)

gk1 8 hours ago 11 replies      
I was nodding my head in agreement all the way until the "How much would I charge?" part. Here's something to consider for other freelancers reading this:

1. We all know what a pain it is to bill hourly. The way-off estimates, the tracking, the reporting... Why not bill on a daily/weekly/monthly rate? I know what you're thinking: "But how will my client know how much work I'm doing?!" In the past year that I've been doing this, the ONLY people to ever ask me that were other freelancers, NOT clients/prospects. Clients (the good ones, at least) don't CARE how many hours you're spending to do X and Y, as long as you're doing a damn good job at X and Y and adding significant value to their company.

2. Why tie your rates to your cost of living? (And number of vacation days, and tax rate, and other arbitrary multipliers/divisors.) Your rate should reflect the value you bring to a company. (Yes, figuring the value of your work for a company could be tricky, but that's another matter.) Using living expenses to calculate your rates makes as much sense as pricing milk by the size of the cow that produced it. Charging based on value will almost always get you more than charging based on your thrifty living expenses. (And if it doesn't, then either you're underestimating your work's value or your clients are way too small.)

jtbigwoo 7 hours ago 3 replies      
I really wish everybody who was a freelancer had a chance to work for six months in a big-time consulting firm (e.g. Computer Science Corp, Accenture, Sapient, or IBM Global Services). Get an up-close look at how much companies will pay for developers, project managers, and designers. Medium-sized or larger companies have no problem paying $175+ per hour for average developers who have the right experience. If you're any good, you'll soon see that a big company will pay $200+/hour for you.

Sure, there's extra overhead involved in those big firms, but there's no way you'll come out of that experience thinking you're only worth $50/hour.

edw519 8 hours ago 3 replies      
Nice write-up, but a little too complex for me. I've taken a slightly different approach that I've never had to "talk myself into"...

I got the gig by charging something we both felt comfortable with. Then I raised my rates 10% every 90 days. No discussion, no debate, no permission, just, "Starting October 1, my new rate is x." If they said no, I'd walk away. That never happened.

I remember the first time I crossed $100 using this method. My client said, "Now we'll have to start treating you like a real consultant." I said, "OK."

morganvachon 7 hours ago 2 replies      
For someone like me who is a "jack of all trades, master of none" type of freelancer, this formula works fairly well. I have the advantage of having a full-time job and freelancing on the side for extra spending/saving money. But if I ever do make the great leap to full-time freelancing, this formula tells me that I can make it for only a little more than I charge now.

Granted, I'm in a suburban/semi-rural area, so I may actually have to raise my rates to make a livable wage on my own, since the client pool is smaller, and this is part of why I haven't already made the leap. Admittedly, the "imposter syndrome" the author spoke of is probably a factor as well; deep down I know I'm good at what I do, but every time I have to research something I end up feeling like a know-nothing. That's something I'll have to overcome not only if I want to set forth on my own as a full time freelancer, but even if I want to advance as an employee.

amadeusw 6 hours ago 3 replies      
I've never freelanced and I'm just about to start (in an SDE-equivalent role), but getting the first job is definitely the hardest. I really need your advice!

The back story: My startup is experiencing a quiet period and won't be making money for a few months. I need to make my rent and pay for food, yet have the flexibility to continue working on my startup.

I considered asking for a low rate just because of my dire need to make money. This brilliant article makes me reconsider my worth (I've turned down 100k offers to pursue my startup), though. What's the best thing to do in my situation?

I've built a portfolio and I've considered four approaches that I will take. Which ones should I focus on? Is #2 too much of a long shot?

1. Reach to my network (mostly recent grads, though)

2. Apply to freelancing agencies

3. Reach out to companies that are hiring, have a great cover letter which explains my situation and propose working on a contract basis (is that too much of a long shot?)

4. Reach out to companies that seek freelancers (but how do I find these?)

Thanks for advice!

scrozier 1 hour ago 0 replies      
I don't see anyone pointing out that the impetus for this blog post was a fixed-price contract that went bad. OP's solution was to raise his rates. This is an apples and oranges discussion. Charging higher rates is not the solution to the pain of fixed-price contracts. And for all but the simplest of projects, fixed price is ill-advised.
luckyisgood 7 hours ago 0 replies      
Charging more is only half the work of escaping feast-and-famine cycles when you're a freelancer or an agency. A while ago, I wrote a post to my inner circle of email subscribers about one sure way I've found that works when it comes to charging more (here's the email article: http://logit.createsend.com/t/ViewEmailArchive/r/3991C04F56F... - hint: saying "just double your prices" does not work for everybody, as andy_adams discovered himself; I've got a better way - increase prices gradually and with every project).

The second half of being satisfied with your (freelancing or agency) career is having abundant recurring revenue. I recently analyzed revenues, market size and the number of employees per company for ten or so related industries: graphic design, web design, the domain industry, the hosting industry, digital advertising agencies, advertising agencies, website creation software... The numbers told me that graphic design and web design companies cannot even afford to hire a second employee - on average. All the other industries, which are known for relying heavily on recurring revenue, had much healthier business indicators. For example, the market in the U.S. for web design / web development is seven times bigger than, say, the domain name business, but domain name companies are in much better shape, financially.

I've described my experience with implementing recurring revenue services in my webdev agency in the book I wrote: https://www.simpfinity.com/books/recurring-revenue-web-agenc... - I now firmly believe that charging more + having recurring revenue is the answer to most freelancers' troubles. As soon you upgrade your attitude to charge more for your work, you're ready to start thinking about earning money from existing clients, every month. We've managed to pay for up to 90% of our monthly expenses solely with recurring revenue.

trjordan 6 hours ago 0 replies      
Here's another way to look at it. When you're selling yourself as a freelancer, the last step in the sale is to make the business case. At that point, you've probably already proved that you could do something for the company, and the business case answers the question, "Why pay this much for this service, right now?"

There is more than one way to build a business case. They include:

- ROI. How much money will the business make? What fraction of that will you charge?

- Opportunity cost. What timeline would it take to get somebody else to do this? How much is that worth?

- Productivity / knowledge. Will your work transfer knowledge to people internally? What is the alternative to not hiring you?

- Flexibility. If they didn't hire a contractor, what is their equivalent of "freedom cost"? Reminding employers that they can fire you at any time is a positive for you.

You may even be able to find others. It's not a trivial task, and it'll take some digging, but keep your ears open. You can always find ways that different companies rationalize spending money, and using one company's framework to help another justify the spend can make things go much more smoothly.

lucaspiller 6 hours ago 1 reply      
> If I told you about freelancers charging $20,000/week or $350/hour, what would you say? I know what I said: "Sure, they can get away with that, but I couldn't because..."

How do you raise your rates from $88/hour to $350/hour? That's 4x the rate, it seems like a whole different ball game to me. I have a few regular clients, and there is no way they would be able to afford that, so is it time to ditch them?

raphinou 6 hours ago 2 replies      
This seems to be for the US and inapplicable to Europe (the 1.25 factor seems too low), right?
treve 6 hours ago 1 reply      
I have on occasion simply raised the number of hours in the equation, instead of my rate. Strictly for fixed-price projects though.

They get the same bang for their buck, but a lower hourly rate is easier for some customers to swallow.

twelfthnight 8 hours ago 0 replies      
This is great. The simple model to estimate minimum hourly rate is useful and I believe would create a convincing argument in negotiations of rates. (Of course, one wouldn't have to be so conservative in the estimates in that case).
kaolinite 5 hours ago 0 replies      
Fantastic article. The company I work for is trying to solve this exact problem with the tool that we're building (http://getbilly.co/ - not ready yet but launching soon) - trying to get freelancers and businesses to value their time more and charge what they're worth.
blendergasket 7 hours ago 1 reply      
This is great. Thank you so much! I was curious if you had any advice on figuring out per-project rates? I'm pretty new at making Wordpress sites, but I make nice, handsome, responsive and very usable sites. People I have made sites for so far have wanted a single fee for the whole project.

I'm not so good at estimating the number of hours that go into a project yet. Do you have a rule of thumb aside from estimated hours * rate?

GhotiFish 3 hours ago 0 replies      
A technique a friend taught me for setting good prices:

"What number would you not say no to? Assuming you wouldn't enjoy the work."

It's helped me make reasonable, and sometimes unreasonable offers, which are good starting points for negotiation.

eof 4 hours ago 0 replies      
I did not read this article and am only commenting on freelancing in general.

It requires a good understanding, and importantly, articulation of the problem; but flat-rate for projects is the way to go. You can charge more and make the client happier. So many software projects fail that many clients are happy to pay 50-200%+ premiums for the insurance that, if you fail to deliver, they won't need to pay.

It is hard/impossible to do this when you first start out; but generally speaking, if you realize you are selling "peace of mind" rather than whatever it is you are doing, you will make a lot more money and have happier clients.

throwaway90446 6 hours ago 3 replies      
Little bit of reality for you, HN.

All due respect to patio11, who has made an excellent career for himself, but outside of the usual suspects (NYC, SF, Chicago), the kind of rates he talks about will get you laughed out of almost all client negotiations.

Spending Moore's Dividend (2008)
7 points by mlrtime  5 hours ago   discuss
New Startup Sets Out to Bring Google-Style AI to the Masses
45 points by cyphersanctus  6 hours ago   23 comments top 6
xixixao 3 hours ago 1 reply      
Having examples just executable from the website is invaluable, especially to excite more people. For fun:

Relatedness of Sentences (beta)

1 - not related at all

5 - an almost perfect paraphrase

Two men are taking a break from a trip on a snowy road

Two men are taking a break from a trip on a road covered by snow 4.05

Two men are taking a break from a trip on a road covered by rocks 4.13

Two men are taking a break from a trip on a road covered by mushrooms 4.23

Two men are taking a break from a trip on a road covered by hobbits 4.27

Plenty of work ahead :)

lispm 3 hours ago 0 replies      
> deep learning, teaching machines to recognize images and understand natural language using software that operates a bit like the networks of neurons in the human brain.

'understand natural language'?

Very far from it...

> I saw the movie, where the main actor's wife was so angry - but I was having a great day

Result 99% negative...

> he was hit hard

> he played a hit single

Relatedness 4.5, from 1-5

hashtree 5 hours ago 0 replies      
A solid group of talent, welcome to the club. I am interested to see where the "deep learning" start-ups end up in ten years' time, with such a wide array of problem sets and industries.
tonydiv 2 hours ago 0 replies      
Having spoken to Sven a few times, I think they are targeting industries/applications more specific than their website appears :)

If anything, I would consider this to be a competitor to Context Relevant and Alchemy API instead of Clarifai.

danvoell 5 hours ago 4 replies      
Am I the only person who is worried that AI start-up companies are going to use AI to insert "deep learning" into every pitch deck possible?
slntdth7 3 hours ago 0 replies      
Wait til the machines rise against us
Prison book ban ruled unlawful by High Court
100 points by danseagrave  9 hours ago   69 comments top 5
mabbo 7 hours ago 7 replies      
I've said it before, and I'll keep saying it: if the purpose of prison is to rehabilitate people, then we're going about it in a very strange way.

Lock people away in a cell for a decade, deny them a lot of basic human rights (like books), then when they're good and messed up in the head from it, let them loose on the public. If they re-offend, well, that's clearly a sign that they are just a bad person. Best lock them up again.

KaiserPro 7 hours ago 1 reply      
And of course Mr Grayling has done his best to destroy the judicial reviews that highlight this kind of problem.

Basically if you want to challenge a law, a judicial review is the only way you can go about it (going through the political process, writing to an MP and the like leaves you open to the whim of the press and "public opinion")


If this law goes through, only those with a significant amount of cash can ever challenge illegal laws. Combine that with the changes to legal aid, and only the rich will be able to attain justice.

Oletros 8 hours ago 0 replies      
I'm glad about that overturn of the ban
arethuza 8 hours ago 3 replies      
I wonder if there are any schemes making e-book readers available in prisons - no danger of smuggling in drugs in a stream of bytes!
Shivetya 8 hours ago 4 replies      
The one issue brought up which I think is a legitimate concern is: how do you ensure the book is not used for improper purposes? Those would include, but not be limited to, concealment of prohibited objects, drugs, and messages.

You can screen for some, but I would expect all parcels to have some limited form of search. I would tend to think a better option would be to fix the availability of library services to prisoners, perhaps even only allowing access to the delivered books in a library setting; books are kept by the system in the library for the prisoners.

I understand the idea that reading is fundamental, but controlling what is in their environment is a big part of maintaining a safe and secure environment.

Pluto: a first concurrent web server in Gallina
53 points by p4bl0  8 hours ago   9 comments top 3
toothbrush 5 hours ago 0 replies      
Reminds me of Lemmachine, which was written in Agda (since I'm a Haskeller I gravitated towards that). I wonder if Lemmachine's still alive... /me looks

edit: not so much.

mpu 6 hours ago 3 replies      
It's unclear what the grand scheme is here. Why use Coq instead of OCaml? How is Coq's theorem proving used, and to prove what? What do we expect from a web server anyway?

It is probably an interesting project, but I don't see anything else on their website that demonstrates more than the fact that they can write monadic code.

mcguire 4 hours ago 1 reply      
"Note: Pluto is also the only planet discovered and undiscovered by the Americans."

As an aside, the way that's put is just...sad.

       cached 5 December 2014 23:02:03 GMT