hacker news with inline top comments    24 Nov 2014
2 years with Angular
203 points by robin_reala  4 hours ago   71 comments top 17
jasim 3 hours ago 4 replies      
I recently wrote about my experience with Angular in a different forum. Sharing it here:

I worked on Angular last year building an app with a few complex views. The initial days were full of glory. Data-binding was new to me, which produced much goodwill towards the framework.

Things started falling apart as I inevitably had to understand the framework in more depth. They have practically written a programming language in a bid to create declarative templates that know about the Javascript objects they bind to. There is a hand-rolled expression parser (https://github.com/angular/angular.js/blob/v1.2.x/src/ng/par...), new scoping rules to learn, words like transclusion and isolate scope, and distinctions like $compile vs $link.

There is a small cottage industry of blogs explaining how Angular directives work (https://docs.angularjs.org/guide/directive). The unfortunate thing is that all of Angular is built on directives (ng-repeat, ng-model, etc.), so until you understand them in depth you remain an ignorant consumer of the API, with only a fuzzy idea of the magic beneath, of which there is a lot.

The worst, however, was when we started running into performance problems trying to render large tables. Angular runs a $digest cycle whenever anything interesting happens (a mouse move, a window scroll, ...). $digest runs a dirty check over all the data bound to $scope and updates views as necessary, which means that after about 8k-10k bindings everything slows to a crawl.
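A stripped-down sketch of that dirty-checking loop (illustrative names only, not Angular's actual implementation) shows why the cost scales with the number of bindings: every registered watcher is re-evaluated on every cycle.

```javascript
// Minimal sketch of a $digest-style dirty-checking loop.
function Scope() {
  this.watchers = [];
}

Scope.prototype.watch = function (getter, onChange) {
  this.watchers.push({ getter: getter, last: undefined, onChange: onChange });
};

Scope.prototype.digest = function () {
  var dirty;
  do {
    dirty = false;
    for (var i = 0; i < this.watchers.length; i++) {
      var w = this.watchers[i];
      var value = w.getter();
      if (value !== w.last) {  // dirty check: compare to the last seen value
        w.onChange(value, w.last);
        w.last = value;
        dirty = true;          // a change may invalidate other watchers
      }
    }
  } while (dirty);             // loop until a full pass finds no changes
};

// Usage: two bindings, one digest triggered by some "event".
var scope = new Scope();
var model = { startDate: '2014-01-01', endDate: '2014-12-31' };
var rendered = {};
scope.watch(function () { return model.startDate; },
            function (v) { rendered.start = v; });
scope.watch(function () { return model.endDate; },
            function (v) { rendered.end = v; });
scope.digest(); // both watchers run, the "view" updates
```

Every event re-runs every getter, so a page with 10k bindings does 10k comparisons per digest, at minimum.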

There is a definite cap on the number of bindings that you can use with Angular. The ways around it are one-time binding (the data won't be updated if it changes after the initial render), infinite scrolling, and simply not rendering too much data. The problem is compounded by the fact that bindings are everywhere: even string interpolation like `{{startDate}} - {{endDate}}` produces two bindings.

Bindings are Angular's fundamental abstraction, and having to ration their use due to performance issues seems quite limiting.

Amidst all this, React feels like a breath of fresh air. I've written a post about what makes it attractive to me here: http://www.jasimabasheer.com/posts/on-react.html.

Compared to Ember, neither Angular nor React dictates as rigorous an organization of files and namespaces (routes, controllers, views), and both have few mandatory conventions to follow. But React is as much a framework as Angular is: the event loop is controlled by the framework in both cases, and both dictate a certain way of writing templates and building view objects. They can, however, be constrained to parts of the app, and so play well with both SPA and non-SPA apps. The data models are plain Javascript objects in both (they are not in Ember), which is really nice.

Google recently released a new version of their developer console (https://console.developers.google.com) which is built on Angular. So the company is definitely putting their weight behind the framework. However, Angular 2 is not at all backwards compatible. That was quite unexpected. If I had known this going in, I would have never used it for the project. But it felt like such a good idea at the time...

pygy_ 1 hour ago 0 replies      
This post reminds me of two others [0, 1] that ultimately led Leo Horie to create Mithril [2], a tiny (5 KB down the line) but complete MVC framework that also sidesteps most of the criticisms raised by the OP.

The Mithril blog is also worth a look; it addresses a lot of concrete scenarios, with recipes for solving common front-end problems with the framework. For example, here's a post on asymmetrical data binding [3].

0. http://lhorie.blogspot.fr/2013/09/things-that-suck-in-angula...

1. http://lhorie.blogspot.fr/2013/10/things-that-suck-in-angula...

2. http://lhorie.github.io/mithril/

3. http://lhorie.github.io/mithril-blog/asymmetrical-data-bindi...

swombat 1 hour ago 3 replies      
This seems to be the summary of every tech flame war ever, and applies rather well here:

A: I've used tech X in a lot of Y contexts, and I find it's not great. I will generalise slightly to imply that tech X is not the panacea it has been presented as.

B: Yeah? Well, I've used tech X in a lot of Z contexts, and I find it works fine! You're wrong! You're using it wrong! Maybe you're not wrong in context Y, but for most other contexts X is still the best tech!

C: I haven't used tech X at all, but here's my opinion on it anyway.

oinksoft 48 minutes ago 0 replies      
I've been using Angular on and off in professional settings since 2012. Angular is a framework obsessed with testability that treats usability as an afterthought. That said, I've found Angular to be more than flexible enough to meet the needs of your typical CRUD app, and I generally enjoy working with it.

One thing I agree with the author about is the importance of expertise for a successful Angular project. Some specialized knowledge is needed to get a decent fit and finish, and the results can be horrible without that.

Strongly discouraging globals goes a long way towards improving code written by inexperienced engineers, but Angular's provider system is still not clearly documented with practical examples, which makes those engineers more likely to shove everything into the unavoidable Angular constructs (controllers, directives, $scope).
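A toy sketch of the injector idea may help demystify the provider system (illustrative code, not Angular's source): Angular 1 can infer a function's dependencies by parsing the parameter names out of fn.toString(), then look each name up in a registry of providers.

```javascript
// Sketch of Angular 1-style dependency injection by parameter name.
var FN_ARGS = /^function\s*[^(]*\(\s*([^)]*)\)/m;

// Extract parameter names from a function's source text.
function annotate(fn) {
  var match = fn.toString().match(FN_ARGS);
  if (!match[1].trim()) return [];
  return match[1].split(',').map(function (s) { return s.trim(); });
}

function Injector(providers) {
  this.providers = providers; // name -> factory function
  this.cache = {};            // services are created once (singletons)
}

Injector.prototype.get = function (name) {
  if (!(name in this.cache)) {
    this.cache[name] = this.invoke(this.providers[name]);
  }
  return this.cache[name];
};

Injector.prototype.invoke = function (fn) {
  var deps = annotate(fn).map(this.get, this); // resolve each dependency
  return fn.apply(null, deps);
};

// Usage: a "service" and a function that depends on it by name.
var injector = new Injector({
  greeter: function () {
    return { greet: function (who) { return 'hello ' + who; } };
  }
});
var result = injector.invoke(function (greeter) {
  return greeter.greet('world');
});
```

This is also why plain Angular 1 injection breaks under minification (parameter names get mangled), and why the array annotation form exists.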

The middling quality and limited availability of third-party Angular libraries is a problem. I believe that greater awareness of, and better tooling for, ngDoc would be a tremendous help there. Best practices are not well presented anywhere in the Angular world, particularly for designing reusable Angular libraries.

The other big problem is the project's source code, which I find poorly organized and documented. If you want to get into the guts of Angular for debugging purposes, good luck!

wldlyinaccurate 2 hours ago 3 replies      
I've worked on Angular projects of varying sizes, some as large as 30 KLOC (products where every page has enough interaction to justify an Angular controller), and I can never find myself agreeing with these articles.

Have I just drunk too much kool-aid? Or is it possible that, with the right team and the right architecture, Angular can actually be a really great framework to use? The common theme on every large Angular project I've worked on is that the teams have leaned towards a more functional design where state is rarely used. This has always seemed to encourage smaller, decoupled modules which don't suffer from many of the problems the author mentions.

But hey, it's probably the kool-aid.

hassanzaheer_ 3 hours ago 3 replies      
Most of the arguments presented in this article are somewhat valid, but I hope the majority of the issues will be addressed with the release of Angular 2.0 (whether it makes sense to make such drastic changes in the upcoming version is another debate, and one already covered at: https://news.ycombinator.com/item?id=8507632).

I'm currently working on a comparatively large webapp built in Angular, and it was only about 7 months into the project that we started realising its pitfalls, by which point it was very difficult to abandon.

So we worked around it by:

1) using one-way binding (or bindonce, to be exact) to reduce watches

2) avoiding unnecessary $apply() and using $digest() carefully where required

3) using ng-boilerplate for scaffolding

4) defining our own style guides/coding conventions/design patterns to overcome Angular's bad parts

5) frequent code reviews that made sure new team members were up to speed with the above techniques

Luckily we haven't run into many issues since then :)
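A rough sketch (illustrative, not Angular internals) of why the $apply()/$digest() distinction in point 2 matters: $digest() dirty-checks only the current scope's subtree, while $apply() always starts the check from the root scope, touching every watcher in the app.

```javascript
// Scopes form a tree; each scope owns its watchers.
function Scope(parent) {
  this.watchers = [];
  this.children = [];
  this.root = parent ? parent.root : this;
  if (parent) parent.children.push(this);
}

Scope.prototype.watch = function (getter, onChange) {
  this.watchers.push({ getter: getter, last: undefined, onChange: onChange });
};

// digest: check this scope's watchers, then recurse into children.
// Returns the number of watchers evaluated, for illustration.
Scope.prototype.digest = function () {
  var checked = 0;
  this.watchers.forEach(function (w) {
    checked++;
    var value = w.getter();
    if (value !== w.last) { w.onChange(value); w.last = value; }
  });
  this.children.forEach(function (c) { checked += c.digest(); });
  return checked;
};

// apply: run the function, then digest from the root, no matter
// which scope apply was called on.
Scope.prototype.apply = function (fn) {
  fn();
  return this.root.digest();
};

// Usage: with watchers spread over the tree, a local digest is cheaper.
var root = new Scope(null);
var child = new Scope(root);
root.watch(function () { return 1; }, function () {});
child.watch(function () { return 2; }, function () {});
var localCost = child.digest();              // checks only the child subtree
var fullCost = child.apply(function () {});  // checks the whole tree
```

(The real digest also loops until values stabilise; that part is omitted here for brevity.)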

hokkos 1 hour ago 1 reply      
I've got to build an SPA and I'm trying to choose between Angular and React. Can you guide me a little? The app will:

- create a big form based on an XML schemas, the form will be used to generate valid XML with the schemas

- some schemas can be really big with more than 3000 elements, the whole thing won't be shown in full to the user directly but probably folded

- because it is based on XML Schema, it must have interactivity to make some elements repeatable, and groups of nested elements repeatable, some elements with bounds, some maybe draggable to reorder them, everything an XSD can do...

- it will also need some kind of polymorphism, where you can choose a child element's type and have the corresponding schema shown

- it will also show a Leaflet map, with some interaction between the form and the map

- there is also a rich text editor where you can arrange XML objects within formatted text

I fear that Angular won't be fast enough for that, but its support for forms seems better. I've tested JSON Schema form generators like https://github.com/Textalk/angular-schema-form and https://github.com/formly-js/angular-formly: the first one is slow when editing 3000 items; the second seems fast when editing but slow when it generates the JSON. I've done some Angular tutorials and their concepts don't stick in my head. I've tested React and its concepts stick easily, but there is less native support for forms.

I had just decided to go with Angular, partly because of all the hype around it, but I see this article and others as a bad omen and I want to go with React now. Any advice?

tomelders 47 minutes ago 0 replies      
I'm currently enjoying Angular after having spent a year and a bit working with it exclusively. I am keen to try out Flux and Mithril, but I've not had the time nor the opportunity. As it stands, we're deploying several large projects into very demanding organisations, and they are stable, performant and easy to manage. We as a team owe a lot to Angular in terms of our productivity. We're also a great team, and that counts for a lot too.

The thing I would like to add to the debate is this: we've all learned that Angular is hard. It's a complex beast with its own nuances and idiosyncrasies. It also offers plenty of ways to do things you probably shouldn't do (I'm looking at you, expressions). But more than that, with Angular in the toolbox, people push themselves to deliver products vastly more complex than would be feasible without it. And these two things collide all the time: learning a framework and the desire to deliver more. One should follow the other, but people tend to attempt both at the same time.

I personally don't think there's anything "wrong" with Angular, but people have to acknowledge that despite the marketing hyperbole, learning Angular means setting out on a long and difficult journey that will require the developer to rethink a lot of what they know about building web stuff. But that's web development in a nutshell. It's a different gig every year, and within an alarmingly short amount of time Angular will probably be replaced with something better suited to the things we're trying to accomplish with mere HTML, CSS and Javascript.

There's also a lot to be said for how you organise your projects and what tools you use (eg Require or Browserify etc etc), but that's a very different kind of conversation.

xpto123 4 hours ago 4 replies      
I confess I am an Angular fan.

But this article is not Angular-specific at all; it stays at a very high level. Replace the word Angular with any other web framework and the article would still make perfect sense.

Not that the article doesn't have some value, just that it has very little to do with its title.

dynjo 58 minutes ago 1 reply      
We built a pretty complex app with Angular (https://slimwiki.com) and have had nothing but great experiences. The main issue is the lack of guidelines about the right and wrong way to do things; it needs to be more opinionated.
shubhamjain 2 hours ago 1 reply      
I can't speak to Angular since I haven't used it, but one problem that recurs with frameworks in general is that thinking in, or getting used to, "their" way takes a significant amount of time, and given how continuously technology changes, I am not sure that time is justifiable in the long run.

Take Rails, for example. I was trying to learn it some time ago and was really amazed at how it has a process for nearly everything: migrations, asset pipelines, generators, and a very extensive command line. Sure, that makes it seem like "once I learn it, it will be so much easier to make the next app", but it is easy to realize after some time that you still have to cross the usual hurdles of Googling everything, learning these processes, facing issues, and digging out new ways of debugging before you are finally good at it.

My idea is that frameworks should be minimal, ensuring only a basic working architecture, and everything else should be extensible (via packages).

prottmann 1 hour ago 0 replies      
The problem is not Angular-specific; every framework is designed to solve a certain problem in a certain way.

But most developers think that once they learn a framework, they can use it for any kind of project.

When I read "xxx is really cool and fun" I am really careful. Most people create a "Hello World" and then THEIR favorite framework is the greatest thing in the universe, and they communicate it to others.

Take a framework and live with its mistakes, until the next "better" framework appears... and it will appear, and the next, and... ;)

jMyles 3 hours ago 7 replies      
If Angular is not The Thing (a premise which I have no trouble believing), then what is a Good Thing for the task of, for example, consuming Django Rest Framework endpoints and building a frontend on top of them?
praetorian84 1 hour ago 0 replies      
As someone who has thus far only used Angular for smaller projects, seeing performance raised as an issue makes me wary of ever using it in a serious project. I would still like to see some numbers to back up the anecdotal evidence.

It's also hard to justify starting a potentially large project in Angular right now, knowing that v2 is on the way and is basically a new framework.

gldalmaso 3 hours ago 3 replies      
"And what are no-no factors for angular?

    Teams with varying experience.
    Projects which are intended to grow.
    Lack of a highly experienced frontend lead developer, who will look through the code all the time."
I am greatly interested in learning what alternative would be a 'yes-yes' on these bullet points.

aaronem 1 hour ago 0 replies      
You know, I'm just going to say it:

Angular is the Rails of Javascript.

That probably sounds like a derogation. But behold: I offer nuance!

They're both big and powerful, and capable of rewarding dedicated study with enormous power. Thus they develop a devoted following whose members often do things lesser mortals find little short of wizardry.

They're also both built to be friendly and welcoming to the newcomer, and offer a relatively short and comfortable path from zero to basic productivity. Thus they trigger the "I made a thing!" reward mechanism which excites newbies and leaves them thirsting for more.

They also, in order to go from newbie to wizard, involve a learning curve like the north face of K2.

In both cases, it's a necessary consequence of the design decisions on which the platform is based, and those decisions, by and large, have sensible reasons behind them -- not, I hasten to note, decisions with which everyone will (or should) agree, but decisions which can be reasonably defended.

But that doesn't make it a good thing. When people start off with "I made a thing!" and then run smack into a sheer wall of ice and granite, initial excitement very often turns into frustration and even rage, as on display in some comments here in this very thread.

(I hasten again to add that I'm not judging anyone for being frustrated and angry over hitting that wall -- indeed, to do so would make me a hypocrite, given my reaction to hitting that wall with Rails a year or so ago.)

Further compounding the issue is that, often enough, wizards who've forgotten the travails of their ascent will condescend to say things like "Well, what's so hard? Just read {this book,that blog post,&c.} and it's all right there." Well, sure, for wizards, who are well accustomed to interpreting one another's cryptic aides-memoire. For those of us still toiling our way up the hill, not so much.

I will note, though, that while I hit that wall (hard!) with Rails, and in the end couldn't make it up, I haven't had the same problem with Angular. The sole significant difference I can identify, between the two attempts, is this:

When I took on Rails, there was no one else in the organization who knew (or should've known) the first thing about the platform. When I had a problem with Rails, I faced it all alone, with only my Google-fu, my source-diving skills, and my perseverance on which to rely. For a while I did well, but in the long run, for all but the most exceptional engineers, such expenditure of personal resource without resupply becomes unsustainable.

When I take on Angular, I do so with the support of a large team, composed of the most brilliant and capable engineers among whom I have ever had the privilege of working. When I have a problem with Angular, I have a dozen people at my back, at least one of whom is all but guaranteed to have encountered the exact same situation previously -- or, if not this precise permutation, then something very like it, from which experience more often than not comes precisely the advice I need to hear, to guide me in the direction of a solution.

Of course, whether this is really useful to anyone is an open question; I think it's a little facile, at least, to say "Oh, if you're having Angular problems, all you have to do is find a team of amazing people who mostly all have years of Angular experience, and work with them!" But, at the very least, if you're going to be fighting through the whole thing all by your onesome, maybe think about picking up a less comprehensive but more comprehensible framework, instead.

cturhan 3 hours ago 0 replies      
As the author says in the comments, these points are valid for Angular 1.x, so I'm hoping that Angular 2.x will be a more carefully designed framework.
PolarSSL is now a part of ARM
33 points by fcambus  1 hour ago   9 comments top 3
_stephan 39 minutes ago 1 reply      

Who knows, maybe a part of the "exciting plans for 2015" is to release PolarSSL under a liberal open source license...

clopez 1 hour ago 3 replies      
What exactly does it mean that PolarSSL (a crypto library) is now part of ARM (a CPU architecture)?
ctz 19 minutes ago 0 replies      
I guess this is part of ARM's IoT push.
My boys love 1986 computing
88 points by interesse  4 hours ago   24 comments top 7
scarygliders 3 hours ago 2 replies      
There is something about 1980s-era computers which makes them so much more accessible to kids than today's systems.

It's probably to do with how much simpler they are. If I had to put my finger on it, I'd say it's because they don't have lots of Things which can tempt you to distraction; for example, if they had had a web browser of some kind back then, you'd have been tempted away from learning to program in BASIC, or from learning how to load that simple-but-fun game.

They were generally single-task machines that allowed you to focus on one Thing.

I'm looking at my Linux desktop right now on this machine. I have a web browser I'm using to type this reply. It also has 8 other tabs open - more tabs will be added later as I continue on my search for knowledge. I have a Konsole terminal open with IRC sessions to multiple servers and channels open. I have PyCharm loaded. I have a VM running, and more to run later. All vying for my time and energy. A child using this machine would be overwhelmed.

Even a Raspberry Pi can distract the user of it in the same manner as my desktop.

Perhaps it's time to reintroduce today's kids to the CoCo2s, the VIC-20s, the C64s, the Spectrums, the ZX81s and so on. Perhaps getting kids to learn on single-task-at-a-time systems, instead of the distraction-inducing tech of today, would be a very good idea.

When my son was 2.5 years old, I put in front of him an ancient old Compaq laptop running Debian. It had Tuxpaint running on it, and I just put it in front of him and let him go on it. Within a short space of time he was using Tuxpaint like a "pro", and then he learned how to power the machine up, and type in his login name and password. Sure, you can do this with today's systems, but they do make it so easy to provide tons of distractions.

willvarfar 2 hours ago 6 replies      
I initially gave my two girls an old desktop computer with Linux on it. They started to use it before I showed them how, figuring stuff out without me, and soon I was sitting riveted in the background, watching them discover things and learn about computer UIs, and trying to work out how they thought.

Here's an old blog post about it: http://williamedwardscoder.tumblr.com/post/19500788060/my-te... - I think it's a fun read.

Fast forward to now; that blog post is hopelessly out of date!

I gave them old-but-decent laptops and, eventually, internet access.

As soon as they had internet access they stopped tinkering and exploring and started using the laptops only to watch repeat episodes of children's TV.

And now they often want to use their mum's iPad, to play music and watch TV, but they are completely and utterly uninterested in tinkering with any PCs.

It's sad but it's true, and I wish I knew what to do about it.

amelius 10 minutes ago 0 replies      
I love 1986 computing too! Everything was so simple. Just a single thread of execution to worry about.

Then again, Javascript is also essentially single-threaded.

Thus, from another point of view it looks like we're stuck in the 80s...

yitchelle 2 hours ago 0 replies      
When my son was about four years old, I gave him an old electric typewriter which I got from Freecycle. I also gave him a stack of paper and some ink ribbon.

Within days, he was "typing" away, loading new paper, working the lever to return the carriage to the start, and replacing the ink ribbons. By the end of the month it was in pieces, as he had tried to dismantle it to find out how it works. Luckily, I was supervising him so that he did not electrocute himself. The mind of a young child is an amazing thing to watch.

edw519 53 minutes ago 0 replies      
Me too.

I wrote a program in 1986 to automatically generate forms and reports from a third normal form database.

A few weeks ago, I loaded it onto a client's server, and it compiled and ran flawlessly. Right now, it's delivering the same value it did 28 years ago.

That old stuff may not have done everything that today's technology does, but a lot of what it did do has sure stood the test of time.

bencollier49 3 hours ago 1 reply      
I completely agree with this. A lot of the games seem to appeal more too, because they've been written simply, with straightforward rules.

The BBC Master (which my children love) was also a terrific platform to learn to program on. It's just a shame that a lot of the disk drives and disks haven't survived very well.

fit2rule 2 hours ago 0 replies      
My kids absolutely love our 8-bit rig!!

Fire up some Defence Force, Harrier Attack, Doggy, or Zorgon's Revenge, from the good old days! YAY, 8-bit party! Space 1999, 1337, Pulsoids, Skool Daze .. STORMLORD! W00t!


What's really great is that the 8-bit days are not over. I see this now, with my kids getting very attracted to programming on the 8-bit machines. "10 PING:WAIT 10 20 GOTO 10", represent!!

Kill init by touching a bunch of files
90 points by omnibrain  5 hours ago   55 comments top 8
pilif 1 hour ago 3 replies      
There are so many things in here that tempt me to comment about, so here goes:

1) For me, this is a prime example of why I personally like programming environments with exceptions. If libnih could throw an exception (I know it can't), then it could do so, which would allow the caller to at least deal with the error and not bring the system down. If the caller didn't handle the exception, well, we'd be where we are today; but as it stands now, fixing this will require somebody to actually patch libnih.

Yes. libnih could also handle that error by returning an error code itself, but the library developers clearly didn't want to bother with that in other callers of the affected function.

By using exceptions, for the same amount of work it took to add the assertion they could also have at least provided the option for the machine to not go down.

Also, I do understand the reservations against exceptions, but stuff like this is what makes me personally prefer having exceptions over not having them.

2) I read some passive aggressive "this is what happens if your init system is too complicated" assertions between the lines.

Being able to quickly change something in /etc/init and then have the system react to that is actually very convenient (and a must if you are pid 1 and don't want to force restarts on users).

Yes, the system is not prepared to handle tens of thousands of init scripts changing, but if you're root (which you have to be to trigger this), there are more convenient (and quicker!) ways to bring down a machine (shutdown -h being one of them).

Just removing a convenient feature, because of some risk that the feature could possibly be abused by an admin, is IMHO not the right thing to do.

3) I agree with not accepting the patch. You don't (ever! ever!) fix a problem by ignoring it somewhere down the stack. You also don't call exit() or an equivalent in a library either of course :-).

The correct fix would be to remove the assertion, to return an error code and to fix all call sites (good luck with the public API that you've just now changed).

Or to throw an exception which brings us back to point 1.

I'm not complaining, btw: Stuff has happened (great analysis of the issue, btw. Much appreciated as it allowed me to completely understand the issue and be able to write this comment without having to do the analysis myself). I see why and I also understand that fixing it isn't that easy. Software is complicated.

The one thing I heavily disagree with, though, is point 2) above. Being able to just edit a file is way more convenient than also having to restart some daemon (especially if that daemon has PID 1). The only fix from Upstart's perspective would be to forego the usage of libnih (where the bug lives), but that would mean a lot of additional maintenance work in order to protect against a totally theoretical issue, as this bug requires root rights to trigger.

angry_octet 3 hours ago 2 replies      
This is a great example of how bad many open source projects are at accepting contributions from 'non-core' developers. The patch was just rejected, when it actually looks pretty valid to handle all cases of return values from a kernel interface. While it might not be a perfect solution, accepting it with suggestions for additional improvements could have led to those improvements.
rikkus 4 hours ago 4 replies      
.NET's FileSystemWatcher documentation says that there's an internal buffer of a finite size, and that if (when) it fills up, you're going to be told about it and must do a complete traversal of the directory you're watching. No-one has invented a better way to deal with this, so that's what you need to do.

Many developers ignore this, so it's not really surprising that this has happened with inotify too. It's mentioned that a patch wasn't accepted, but that was with good reason: it doesn't fix the problem (by traversing the directory).
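The overflow-plus-rescan contract described above is easy to sketch. This uses a made-up event shape (not the actual .NET or inotify API): incremental events update a cached view of the directory, and an overflow event forces a full rescan, because events were lost and the incremental view can no longer be trusted.

```javascript
// Apply a batch of watcher events to a cached directory listing.
// `rescan` is the expensive full-traversal fallback.
function applyEvents(events, state, rescan) {
  for (var i = 0; i < events.length; i++) {
    var e = events[i];
    if (e.type === 'overflow') {
      // Events were dropped; incremental updates are untrustworthy now.
      return rescan(); // authoritative snapshot replaces the cached state
    }
    if (e.type === 'created') state[e.name] = true;
    if (e.type === 'deleted') delete state[e.name];
  }
  return state;
}

// Usage with a stubbed rescan:
var state = { 'a.txt': true };
state = applyEvents(
  [{ type: 'created', name: 'b.txt' }],
  state,
  function () { throw new Error('no rescan expected'); });
// state now tracks a.txt and b.txt

state = applyEvents(
  [{ type: 'overflow' }],
  state,
  function () { return { 'a.txt': true, 'c.txt': true }; });
// after an overflow, state is whatever the full traversal found
```

The patch in question only handled the overflow return code without doing the rescan step, which is why it didn't actually fix anything.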

Havvy 3 hours ago 1 reply      
"libnih" - I have no clue what this does, so I immediately read it as 'lib not invented here'. Otherwise, yay...more unstable software.
shaurz 3 hours ago 1 reply      
It's pretty dumb that pid 1 has any assert()s in it at all. Or libraries for that matter.
nodata 2 hours ago 1 reply      
Writing to /etc/init.d requires root access. If you are root, you can bring down the box as it is.
SwellJoe 4 hours ago 2 replies      
It's worth noting that the root user has any number of pathological use cases that can bring down the system. This is but one of them. Interesting, but not particularly dangerous or likely to be triggered in any normal circumstance.
xfs 4 hours ago 4 replies      
TL;DR: He overflowed the inotify queue of /etc/init, the Upstart configuration directory being monitored. Upstart doesn't deal with the overflow, exits, and causes a kernel panic.

The bug is not fixed because in order to trigger it you need root to spam file operations in /etc/init, which implies bigger problems elsewhere. If you have root and want to see panics, just echo c >/proc/sysrq-trigger.

Reverse Engineering a Keyboard to Play Snake
63 points by rcuv  4 hours ago   3 comments top 3
unwind 1 hour ago 0 replies      
This was very educational.

I find it equally hilarious, scary and wonderful that it's now cost-effective to build a keyboard whose brain is a 72 MHz ARM Cortex-M3 with 127 KB of program space. Of course the manufacturer also proudly tells you this: that the keyboard is powered by an ARM. It's also fun that the firmware of the keyboard is protected (partly by the old-faithful XOR trick, even!) against IP theft.
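For the curious, the "XOR trick" is symmetric: applying the same key a second time returns the original bytes, which is why it obfuscates rather than protects. (The key and data below are made up, not the keyboard's actual firmware.)

```javascript
// XOR each byte against a repeating key; the same call both
// obfuscates and deobfuscates.
function xorBytes(bytes, key) {
  return bytes.map(function (b, i) { return b ^ key[i % key.length]; });
}

var key = [0x5a, 0xa5];
var firmware = [0x12, 0x34, 0x56];
var obfuscated = xorBytes(firmware, key); // looks like garbage
var recovered = xorBytes(obfuscated, key); // same call undoes it
```

Anyone who extracts the key (trivially, once you have the binary) can recover the firmware, which is why it's only a speed bump for reverse engineers.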

Too bad they didn't include a hub in there, but I guess someone thinks doing so would introduce scary latency, or whatever. Gamers can be a sensitive bunch.

boomskats 1 hour ago 0 replies      
It's awesome to see the writeup on this. I ran into the video a few days ago on /r/mechanicalkeyboards and was surprised that it didn't get more attention than it did! I think I might end up buying one of these CMs now...

If I knew what I was doing I'd try to write an OSC interface to a DAW: things like transport control, mapping the 10 columns of keys to solo/mute/group buttons on a mixer, or rigging it up so that 1QAZ 2WSX 3EDC 4RFV are the 16 keys of a drum machine...

m00dy 10 minutes ago 0 replies      
Nice Hack!
Show HN: Workshape.io A talent matching service for developers in Startups
30 points by GordyMD  1 hour ago   18 comments top 6
GordyMD 1 hour ago 2 replies      
Hi there, I'm Gordon and I am a software engineer! I have worked in the startup space for the past 4 years in London and San Francisco. Over this time I, like many others, have experienced the way the world of tech recruiting works. I have grown frustrated by the number of times I have been contacted about roles not suited to me, and by how recruiters regularly pitch me roles based on technologies I used in previous employment. I have also experienced recruitment from the employer's point of view, when I had to build a team out in San Francisco, which was equally frustrating.

I believe the systems that we employ to help us find a job or recruit fall short of our expectations and are in desperate need of rejuvenation and optimisation!

5 months ago I arrived back from San Francisco and teamed up with 2 former colleagues who shared a passion for trying to improve this experience for both developers and hirers. I would like to invite you to be one of the first people to see and use Workshape.io - a talent matching service for Startups.

The key premise of Workshape.io is that we are a matching service that focuses on what you want to do in your next role - more specifically, what tech you want to work with and how you want to spend your time as a developer. We feel that when you are open to another role, your aspirations should be recognised as one of the key components in matching you to it. We are focused on rolling out in London right now and currently have roles online from companies such as Shazam, Spotify, Qubit and Moo.com.

We are very early stage right now and would really welcome your feedback on the experience, thoughts on the site and how we match you to roles.

Thank you for your time

snlacks 37 minutes ago 2 replies      
I have nothing bad to say about this.

Does it take into account relationships between tags? Like if someone puts in CSS3 will it put CSS in the listing?

Is there going to be a "questions" section? :P

500and4 30 minutes ago 0 replies      
Nice idea and really well executed. I run http://www.zonino.co.uk with a couple of other chaps and we think similarly about the need to bypass recruiters. Get in touch if you'd like to have a chat about the wild London startup scene!
kornakiewicz 36 minutes ago 1 reply      
No matches so far (I got more on Tinder, lol), but extremely useful, well done. Although it would be nice if one could also explore offers that don't fit them perfectly, especially when there are no others. It's always good to know what the market needs.
sprthompson 1 hour ago 0 replies      
Really slick set-up, novel but so intuitive - Great work.
sdickert 1 hour ago 1 reply      
Stop writing stateful HTML
5 points by enyo  12 minutes ago   discuss
Can Qt's moc be replaced by C++ reflection?
19 points by ingve  2 hours ago   3 comments top 2
saboot 26 minutes ago 0 replies      
Another similar use for reflection I am waiting for is a way to effortlessly serialize data. Specifically for physics experiments I have several PODs from different acquisition devices. Instead of serializing them all myself, it would be great to just pass them onto a function to do that for me. Also useful because people will inevitably ask "I don't like file format <X>, can you save it as <Y>?".

CERN's ROOT software alleviates some of this issue with their dictionary builder for their own format.
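
What reflection would buy here can be sketched in Python, where dataclasses already expose their fields at runtime: one generic function serializes any POD-like record, and answering "can you save it as <Y>?" is just another generic function. `AdcSample` is a hypothetical acquisition-device record, purely for illustration.

```python
# Python analogy for reflection-based serialization: dataclass field
# introspection stands in for the compile-time reflection C++ lacks.
import json
from dataclasses import dataclass, asdict, fields

@dataclass
class AdcSample:          # hypothetical POD from an acquisition device
    channel: int
    timestamp: float
    counts: int

def to_json(record) -> str:
    """Serialize any dataclass instance without per-type code."""
    return json.dumps(asdict(record))

def to_csv_row(record) -> str:
    """Same record, different format -- no extra per-type code either."""
    return ",".join(str(getattr(record, f.name)) for f in fields(record))
```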

plq 44 minutes ago 1 reply      
Qt's meta object compiler (moc, a code generator that lets C++ kind of emulate Objective-C's message passing) is mighty handy for transparently queuing (instead of directly calling) signals that cross thread boundaries. I don't see anything in this post that mentions it.

See the relevant bit from Qt's documentation for more information: http://qt-project.org/doc/qt-5/why-moc.html

On Linux, 'less' can probably get you owned
218 points by adamnemecek  10 hours ago   94 comments top 16
viraptor 8 hours ago 2 replies      
This is probably a good reminder of something else: using selinux/apparmor/tomoyo/... can save you from many situations where you'd be exploited otherwise. For example as a response to this you can set a policy on lesspipe and all its children so that they cannot access the internet or write outside of temp directories.

Whatever library is used by lesspipe - you're safe as long as the output terminal and your kernel are safe.

userbinator 9 hours ago 3 replies      
I'm also not sure if the automation actually scratches any real itch - I doubt that people try to run 'less' on CD images or 'ar' archives when knowingly working with files of that sort.

This is a trend not uncommon in GNU software -- features added by someone who at some point thought it was a good idea, but probably didn't even bother using them much beyond an initial test to see that they are somewhat working. Most users likely think of 'less' as nothing more than a bidirectional version of 'more', and not as the "file viewer that attempts to do everything" that it seems to actually be. It's also a little reminiscent of ShellShock.

steakejjs 8 hours ago 2 replies      
This research lcamtuf has been doing with AFL is really important.

One thing that it is proving (exactly as a lot of people expected) is, we don't have any idea where security bugs (think the next heartbleed or shellshock) are going to show up, we have no idea how good the software out there is (meaning it is bad), and most of the time we don't even know what's running on our own boxes.

If these basic things we use hundreds of times a day (less, strings) have huge flaws, we have a lot of work ahead of us.

Animats 7 hours ago 1 reply      
Last month, it was "file", which turned out to have a parser for executables which can be exploited via a buffer overflow.

Probably the best exploit in this line is crafting JPEG files which cause buffer overflows in forensic tools and take over the machine being used for forensics.

We need an effort to convert Linux userspace tools likely to be invoked as root or during installs from C/C++ to something with subscript checking.

username223 8 hours ago 4 replies      
The first time I accidentally ran "less" on a directory and it piped some version of "ls" into itself, I was mildly annoyed. The thing's supposed to page a text file on a terminal. Since then, I've had to think twice before invoking it to avoid this "helpful" behavior, and I'm not surprised that it came back to bite people.
upofadown 1 hour ago 0 replies      
I checked on Debian Jessie and the /usr/bin/lesspipe script runs entirely off the file extension. So there is no issue with less itself. If someone sends me, say, a malicious doc file, I would have to type "less blort.doc" to get owned by catdoc. The only time I would ever type that is if I knew that less would invoke catdoc, that I actually had catdoc installed on the machine, and for some reason I wanted to use catdoc to look at a doc file.
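
The dispatch described above can be sketched in a few lines of Python. This is an illustration of extension-based dispatch in general, not the actual lesspipe script; the suffix-to-helper table is assumed for the example.

```python
# Minimal sketch of extension-based dispatch, in the style of Debian's
# lesspipe: only the file name's suffix picks the helper -- the file's
# contents are never inspected. Helper commands here are illustrative.

VIEWERS = {
    ".doc": "catdoc",
    ".tar.gz": "tar tzvf",
    ".pdf": "pdftotext",
}

def pick_viewer(filename):
    """Return the helper command for a filename, or None for plain text."""
    name = filename.lower()
    # Try longer suffixes first so ".tar.gz" wins over a plain ".gz" rule.
    for suffix, command in sorted(VIEWERS.items(), key=lambda kv: -len(kv[0])):
        if name.endswith(suffix):
            return command
    return None
```

So `less blort.doc` reaches catdoc purely because of the name, which is exactly why you only get owned if you knowingly point less at that extension.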

Less only installs a mailcap entry for "text/*". A mail reader that could not handle plain text itself would not be much of a mail reader.

That also means that it is kind of stupid to have less display non-text things. Still not a real security issue.

fsniper 6 hours ago 0 replies      
I think this is a side effect of development. We try to implement every feature that would be nice to have into every piece of software.

LESSOPEN or LESSPIPE is a feature that is already achievable by manual means. But automation is king, so having it implemented in the software is seen as a nice feature.

If we could just stop once software does what it is intended to do as smoothly as possible, many of these issues would cease to exist.

pmontra 6 hours ago 0 replies      
I checked what I have on my Ubuntu 12.04.

  $ env | grep LESS
  LESSOPEN=| /usr/share/source-highlight/src-hilite-lesspipe.sh %s

Safe, as long as source-highlight isn't buggy.

I also checked my .bashrc and found this

  # make less more friendly for non-text input files, see lesspipe(1)
  # NO! I don't want this!
  # [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

So yes, lesspipe was the default, and for some reason I commented it out. I vaguely remember being annoyed about less showing me something different from the actual binary content of the files.

mrmondo 8 hours ago 0 replies      
That is bad default behaviour on Ubuntu's (and CentOS'?) behalf. I have confirmed this is not the case on Debian.
reubenbond 8 hours ago 3 replies      
Less really is more
guard-of-terra 5 hours ago 1 reply      
That's why

  alias less=/usr/share/vim/macros/less.sh

vezzy-fnord 9 hours ago 0 replies      
This was also mentioned in one of the pages of The Fuzzing Project, linked on HN just a short while ago as of this comment: https://fuzzing-project.org/background.html
zobzu 6 hours ago 0 replies      
But also vim, and irssi, and gpg, and a bunch of other day-to-day Linux programs nobody bothered to review very thoroughly.

Just in case anybody still believes it's just Java, Flash, OpenSSL and bash that suffer bad vulns (oh oh oh).

fleitz 9 hours ago 3 replies      
Am I missing something? Is lesspipe run as root? What could you execute via lesspipe that you couldn't from the command line?
Glyptodon 8 hours ago 1 reply      
Should this show when I run env or not?
jacquesm 7 hours ago 2 replies      
rm /bin/less

Problem solved.

Any binary utility that I haven't used in a 6 month period can get lost. The problem is that there are probably a hundred or so more issues like this hiding in /bin/* and /usr/bin/* and wherever else executables are hiding.

Is there a way to retrofit 'can shell out' as a capability flag not unlike the regular access permission bits?

Lavatory Laboratory: How sanitation is following the cell phone model
11 points by dnetesn  1 hour ago   1 comment top
spacecowboy_lon 6 minutes ago 0 replies      
The only problem is that these alternate methods are quite good for small, low-density villages but tend not to work very well at scale, as in the rapidly growing cities in China and India.
Semantic UI 1.0 released
41 points by rtcoms  3 hours ago   7 comments top 5
UncleCarbs 17 minutes ago 0 replies      
Very pretty, but is it really semantic? "A button can have different sizes. A button can have different colors." </soapbox>
ckluis 1 hour ago 0 replies      
At first blush I thought: great, another UI framework. But going through the kitchen sink, they seem to have added all the normal UI patterns, like Cards - it seems like an excellent framework for banging out a prototype.
andyfleming 1 hour ago 1 reply      
Here's the discussion from last time this was posted:


priteshjain 1 hour ago 0 replies      
I have been using this and love this UI framework. +1
skratlo 53 minutes ago 1 reply      
Great, the site kept my CPU cores (and cooling fans) very busy. I accidentally clicked on the download link, which brought me a 2.5 MB zip file. This thing ain't light. But gosh! 10k stars on GitHub? Are you people crazy?
31C3 Ticket Shop Has Opened
33 points by PaulSec  3 hours ago   7 comments top 3
hobo_mark 1 hour ago 3 replies      
I am conflicted about whether to go. On one hand I have nothing better to do in that period, it's cheap, and it's the only time of the year I get to meet 'my people'. On the other hand, last time I went (29C3) most of the talks were about privacy and ethics bla bla, rather than the hard technical stuff I went there for in the first place.

If even Ange Albertini has had all of his three proposed talks rejected, the rest this year better be damn good!

I hope they won't again make those ridiculous creepercards from two years ago; some people seem never to have grown out of high school.

unwind 2 hours ago 1 reply      
Firefox won't take me to the actual ticket purchasing page, claiming a certificate problem:

tickets.events.ccc.de uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. (Error code: sec_error_unknown_issuer)

That's unfortunate. Doing s/https/http/ doesn't work either. Of course, I guess all expected customers know how to add an exception to their browsers, but I couldn't be bothered when just browsing out of curiosity. :)

Dosenpfand 1 hour ago 0 replies      
Prices are:

Supporter 140: 140.00 EUR

Supporter: 120.00 EUR

Standard: 100.00 EUR

Business Platinum: 750.00 EUR

Business Gold: 600.00 EUR

Business Silver: 450.00 EUR

Members of the CCC e.V.: 80.00 EUR

Up-and-coming: 25.00 EUR

Starfish are dissolving up and down the West Coast
50 points by austinz  12 hours ago   23 comments top 2
awjr 2 hours ago 2 replies      
So if starfish eat pretty much anything that filters water (mussels), what effect would the Fukushima accident (March 11th, 2011) have on their radiation levels? Could such an event have triggered a mutation in the virus, or could the starfish be hypersensitive to slightly higher background radiation?
gd1 2 hours ago 5 replies      
Clicked link. Hit Ctrl-F. Typed "clima". Saw this:

"I totally think climate change is involved," Lahner says. "I just don't have evidence yet."

Closed tab.

Git's initial commit
282 points by olalonde  13 hours ago   93 comments top 20
jordigh 10 hours ago 0 replies      
Well, while we're looking at FIRST POSTS, here's Mercurial's, self-hosting a month after git, and like git, also created to replace bitkeeper:


The revlog data structure from then is still around, slightly tweaked, but essentially unchanged in almost a decade.

afandian 4 hours ago 1 reply      
My god... the comments. Looks like the Reddit culture (i.e. fun for in-jokes but not particularly professional).
stinos 1 hour ago 0 replies      
Maybe I've been drilled too hard by a couple of programming gurus, but I immediately noticed there are quite a lot of repeated yet unnamed magic constants in the (otherwise pretty clean) code. According to wikipedia [1] the rule to not use them is even one of the oldest in programming. Curious what kind of profanity Linus would come up with when confronted with this :]

[1] https://en.wikipedia.org/wiki/Magic_number_%28programming%29...

brandonbloom 12 hours ago 2 replies      
I love checking out very early versions of projects. You often get to see the essence before the real world came in and ruined the beauty of it.
jeffreyrogers 11 hours ago 1 reply      
Interesting fact about Git is that it was self hosting in two weeks, IIRC.
hyp0 10 hours ago 1 reply      
It's so short.

The readme is the best explanation of git I've seen.

d0m 9 hours ago 1 reply      
I've read so many git tutorials, I wish I had seen that README file before.
fivedogit 12 hours ago 1 reply      
Jackcor 2 hours ago 1 reply      
Was all of the initial commit code written by Linus Torvalds?
DodgyEggplant 4 hours ago 0 replies      
This is a great lesson in writing focused & succinct specs, when one clearly sees what his/her program is going to do.
royragsdale 8 hours ago 0 replies      

If you want to see the commits going forward from here.

zabcik 11 hours ago 3 replies      
Why are there multiple main() functions? I've never seen this style before. Is it multi-process?
hw 8 hours ago 2 replies      
Does GitHub offer an easy way to get to the first commit of a project? Traveling page by page back in time is time consuming (yeah, I did that).
dirtyaura 5 hours ago 0 replies      
I only realised reading the README that git is a great lesson in branding.
benihana 11 hours ago 7 replies      
Is there a reason there aren't any braces around single-line if statements? Is that a C thing? It seems kind of inviting to bugs to me.
Fizzadar 12 hours ago 1 reply      
Great to see the original command set, and the title of course: "GIT - the stupid content tracker"
EGreg 3 hours ago 1 reply      
Linus wrote:

*Side note on trees: since a "tree" object is a sorted list of "filename+content", you can create a diff between two trees without actually having to unpack two trees. Just ignore all common parts, and your diff will look right. In other words, you can effectively (and efficiently) tell the difference between any two random trees by O(n) where "n" is the size of the difference, rather than the size of the tree.*

Um, What?
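
What Linus is describing can be sketched concretely. A tree is a sorted list of (name, hash) entries, so two trees can be diffed with one merge-style walk that emits only entries whose hashes differ; in real git, an equal hash on a whole subtree lets you skip that directory without unpacking it, which is why the work scales with the difference rather than the tree. The file names and hashes below are illustrative.

```python
# Merge-walk diff of two sorted (name, hash) lists -- the flat version of
# git's tree diff. Equal hashes are skipped without looking inside.

def diff_trees(a, b):
    """a, b: lists of (name, sha) pairs, sorted by name."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        (na, ha), (nb, hb) = a[i], b[j]
        if na == nb:
            if ha != hb:                      # same path, new content
                out.append(("changed", na))
            i += 1; j += 1                    # identical entries: skipped
        elif na < nb:
            out.append(("removed", na)); i += 1
        else:
            out.append(("added", nb)); j += 1
    out += [("removed", n) for n, _ in a[i:]]
    out += [("added", n) for n, _ in b[j:]]
    return out
```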

justintbassett 10 hours ago 1 reply      
I wonder what the first commits for big sites/projects look like?
byteCoder 12 hours ago 4 replies      
Following the tradition of sports, I propose that commit id e83c5163316f89bfbde7d9ab23ca2e25604af290 be officially retired.
tempodox 2 hours ago 0 replies      
Code comment about git:

  stupid. contemptible and despicable.
That sums it up quite well. Every day I pay thanks to The One Who Programmed Me that my workflow doesn't put me in need of that shitload of crap that is git. I pity those who do need git.

An Introduction to Tensors for Students of Physics and Engineering (2002) [pdf]
36 points by brudgers  4 hours ago   5 comments top 3
gjm11 38 minutes ago 0 replies      
This document tells an important lie: it says (e.g., at the top of page 8) that every rank-2 tensor is an outer product of two vectors. This is flatly untrue and is an easy misconception for a student to acquire even if the thing they're learning from doesn't explicitly say it.

I skimmed through looking to see if at some point the author says "oh, by the way, I told you a lie earlier", but it doesn't look like he does.

(If anyone reading this has difficulty believing that there are more rank-2 tensors than outer products ("dyads") of two vectors, note that in 3 dimensions you can specify two vectors by giving 6 numbers, but it takes 9 to specify a rank-2 tensor because it can be represented by an arbitrary 3x3 matrix.)
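
gjm11's counting argument can also be checked concretely in a few lines of pure Python: every outer product u v^T has all of its 2x2 minors equal to zero, so any matrix with a nonzero 2x2 minor - the 3x3 identity, for instance - cannot be an outer product of two vectors, even though it is a perfectly good rank-2 tensor.

```python
# Outer products ("dyads") have matrix rank at most 1, so every 2x2
# minor vanishes; the identity matrix has a nonzero minor.

def outer(u, v):
    """Outer product of two 3-vectors as a 3x3 nested list."""
    return [[ui * vj for vj in v] for ui in u]

def has_nonzero_minor(m):
    """True if some 2x2 minor of the 3x3 matrix m is nonzero."""
    idx = [(0, 1), (0, 2), (1, 2)]
    return any(m[r0][c0] * m[r1][c1] - m[r0][c1] * m[r1][c0] != 0
               for r0, r1 in idx for c0, c1 in idx)

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```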

krsree 2 hours ago 1 reply      
A clear discussion of the basic concepts. Coincidentally, I was reading this two days back, before this link was posted.
return0 1 hour ago 1 reply      
> inner product of a matrix and a vector

how does that work?

Getting started with JetBrains Nitra
43 points by bleakgadfly  5 hours ago   9 comments top 5
skrebbel 1 hour ago 0 replies      
If you're interested in stuff like this, be sure to also check out Rascal:


The focus is slightly different: Rascal focuses more on automated code transformations and less on being a syntax highlighting service for editors. But Rascal is remarkably powerful and surprisingly accessible.

Basically, Rascal allows you to build Lispy macros in any language. Or to easily parse-and-transform new languages. Or to design entirely new languages and transpile them into something existing. To drive the point home, CoffeeScript, Nimrod and Sass could've easily been built with Rascal.

sqs 2 hours ago 2 replies      
Interesting. It looks like it doesn't support type information yet, but it's coming in milestone 2. I started an open-source project called srclib that's creating toolchains to type-analyze and dependency-analyze source code in multiple languages: https://srclib.org/. It might be of interest to folks working with or using Nitra.
mataug 2 hours ago 1 reply      
Downloaded it without realising this was a C# project that requires Visual Studio. I'm on Linux, dammit.
CmonDev 4 hours ago 1 reply      
Lack of PCL support makes its applicability quite limited in today's multi-platform world. I hope they will fix this flaw before releasing.

Awesome project nevertheless.

jimmcslim 4 hours ago 0 replies      
I've pondered whether this could be used to build better navigation and refactoring tools for Delphi than the official Embarcadero ones, which are frankly dire.
GitHub dropped Pygments
137 points by mkonecny  10 hours ago   85 comments top 14
abhinavg 8 hours ago 4 replies      
I honestly don't see a problem here. They decided to change a backend library for a non-essential system in their product. Most services don't ask for permission or make announcements when they make changes like this.

The approach seemed to be: if things break, people will report it and we'll fix it.

While this may not be the best approach, the number of languages supported is too high for a person to check each one manually. Generally, I imagine they wouldn't expect a change like this to break anything significant.

[people] use it as a portfolio. [..] To suddenly doink the appearance of people's portfolios is unfortunate.

It is very unlikely that syntax highlighting errors in GitHub will affect someone's chances of getting a job.

Sure, this switch could cause some issues but they don't seem to be severe enough to kick up a fuss over.

isomorphic 8 hours ago 8 replies      
This sort of thing feeds my paranoia about GitHub being a giant single point of failure in the open-source world.

I know the argument: Someone, somewhere has a copy of each repo checked out, so we (the nebulous "we") could reconstruct everything from the diaspora of ".git" directories.

It just bothers me to think how dependent OSS has become upon GitHub.

Matthias247 3 hours ago 0 replies      
I don't know about Pygments, but my experience writing a custom highlighter for Sublime Text (aka TextMate, Atom, and what GitHub seems to use now) was that it is not really a good or reliable system for highlighting.

It is really easy to highlight simple things (keywords, numbers, ...). However, when it comes to more complex scenarios (e.g. where the type of a word depends on the previous one), the single-line regex based mechanism shows its weakness. Because of that, many language support plugins will yield wrong results when you start to split things like function declarations over several lines, even though it's perfectly legal in the languages. Some things can be worked around with the start/end regexes, but nesting those multiple levels deep can get quite awkward, and I don't think they were designed for things beyond braces and multiline comments.
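
A toy version of that failure mode, sketched in Python (the rule and the snippets are contrived for illustration, not any real grammar): a per-line rule tags a word as a function name only when its parameter list opens on the same line, so splitting the declaration across lines - still legal in most languages - silently changes the classification.

```python
# Per-line regex "highlighting": each line is scanned in isolation, so
# context that spans a line break is invisible to the rule.
import re

FUNC_RULE = re.compile(r"\b(\w+)\s*\(")   # a name directly before '('

def function_names(source):
    """Collect every word the per-line rule classifies as a function name."""
    names = []
    for line in source.splitlines():      # no state carried across lines
        names += FUNC_RULE.findall(line)
    return names
```

Here `function_names("int f(int x)")` finds `f`, while the equally legal `"int f\n(int x)"` finds nothing, because the '(' landed on the next line.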

Therefore I don't know if GitHub's move here is a really good choice. However, I think their main motivation might be that this file format already has a big ecosystem thanks to TextMate, Sublime and Atom, and that the parser has high performance, so they went for it.

latkin 6 hours ago 0 replies      
F# highlighting is also now totally gone with this change. Code is highlighted with some random lexer that doesn't even understand // comments. Rather frustrating, given that a lot of F# development is centered around GH, and GH themselves use F# in a couple of places.

Browsing the issues list, this isn't just "fringe" languages, either. Perl, PHP, Go, and Clojure all appear to have regressed to some degree.

cespare 9 hours ago 5 replies      
I assume the new syntax highlighter is way, way faster than pygments. It's written in C++ rather than Python. (Atom uses the same grammar format, but a Node implementation.)
slantedview 7 hours ago 1 reply      
It's worth noting that even syntax highlighting of common languages like Java is currently messed up on GitHub. I hope it's fixed eventually, but it's kind of lame to take something that wasn't broken and break it.
hawkice 8 hours ago 0 replies      
GitHub: a product so close to our hearts that even extremely well meaning changes that really do help in many cases can sting sometimes.

I normally wouldn't understand this type of thing (others say they don't see the problem and it's quite clear where they are coming from), but in a way I _do_ see the author's point of view. When you build something people really care about, any change, no matter how minor, has the opportunity to impact someone. That's why we all build things, isn't it?

alayne 8 hours ago 1 reply      
Racket is a fringe language. Github has about as many Prolog repositories as Racket repositories.

If Racket syntax highlighting was causing performance issues that were noticeable to Github, performance must have really sucked. Why should Github let Racket drag down its capacity?

shurcooL 6 hours ago 0 replies      
Interesting coincidence, as I've just changed the highlighter in my Go library for offline rendering of GitHub Flavored Markdown [1] 23 hours ago.

[1] - https://github.com/shurcooL/go/commit/6aad35a0a60fd67927f446...

frontsideair 5 hours ago 0 replies      
I understand the author's exasperation, but the post is surely filled with loaded words and wild speculations. It may get the message across, but this is not the way to start a conversation.
_pmf_ 6 hours ago 0 replies      
GitHub is not your personal vanity site.
ttwwmm 6 hours ago 0 replies      
Rendering of reStructuredText disappeared in GitHub Enterprise ~6 months ago... I wonder if the reasoning was the same. More limited resources in the GHE VM environment?
ivan4th 8 hours ago 0 replies      
Seems like Common Lisp syntax highlighting suffered, too.
jarcane 9 hours ago 1 reply      
It is still difficult for me to express my opinion of this move without simply resorting to strings of profanity. Frankly, I suspect Atom has more to do with it than anything.
Magnus Carlsen Repeats at World Chess Championship
238 points by sethbannon  15 hours ago   70 comments top 10
dmourati 12 hours ago 6 replies      
Carlsen came to play an exhibition at my previous company. He played against 8 players simultaneously. Two had been chess-club players and competitively ranked. One employee played Carlsen to a draw. This earned several gasps from his handlers who had never seen such a thing. They asked what she was ranked and she said she wasn't with a wry smile. Truly something special to behold.
sethbannon 13 hours ago 1 reply      
For those that weren't following closely, this rematch was nothing like the last world championship between Carlsen and Anand, when Carlsen won in a blowout. Carlsen didn't seem to be in top form this time, and Anand clearly came better prepared than the last match. Two games stood out as being particularly fascinating for me:

Game 3, which Anand won: http://en.chessbase.com/post/sochi-wch-g3-the-tiger-roars

And game 11, in which Carlsen clinched the title: http://en.chessbase.com/post/sochi-g11-in-dramatic-finale-ca...

ramkalari 8 hours ago 0 replies      
Anand doesn't like playing long games. Carlsen doesn't give an inch even when he is down. Regardless of rating differences, Anand's game was in bad shape in 2013, and that just continued into the world championship. He is playing much better these days; it is just that his game doesn't match up very well against Carlsen's relentless style. That said, he had his chances this time, as he predicted before the match. He just couldn't take them. I'm always reminded of the Federer-Nadal rivalry when I watch these two play.
pk2200 11 hours ago 0 replies      
Game 6 was the turning point of the match. Carlsen made a horrific blunder, and if Anand had noticed it, he very likely would have won and taken a 1-game lead in the match. Instead he lost, giving Carlsen a 1-game lead.


mbell 12 hours ago 4 replies      
I found it interesting that there was a stream on twitch.tv with over 10,000 viewers for this. It's how I found it, not being much of a chess follower but still finding the game interesting. I was very surprised to see so many viewers.
cheepin 14 hours ago 3 replies      

But seriously, I was following this like a TV show, kind of hoping Anand would win for the extra drama it would create.

Congrats to Carlsen, clearly an amazing player but...


sharkweek 14 hours ago 0 replies      
It's been a real treat watching Carlsen play these past few years in his rise to fame - He has a fundamental understanding of endgame far above anyone else playing at his level. It's fun watching the latest generation of players add to the skill of the game, with Carlsen atop that list.
Jakeimo 13 hours ago 0 replies      
mhartl 14 hours ago 3 replies      
It's fascinating to read about the blunders both competitors made. One of the many strengths of computer chess programs is that they are virtually guaranteed never to make such mistakes.
known 7 hours ago 2 replies      
Bell curve for Anand.

Anand should gracefully retire from chess and pass on the baton to the younger generation.

Serial, Podcastings First Breakout Hit, Sets Stage for More
116 points by benbreen  12 hours ago   47 comments top 19
netcan 3 hours ago 2 replies      
I've heard that when TV started spreading people thought it would never overtake radio. "Who has time to just sit and watch something?" You can listen to audio while you do other things.

Podcasting has probably been my main "media" for the last 5 or so years. I listen to 15-20 hours per week as I commute, exercise, wash dishes...

Right now, as a marginal, unprofitable, poorly understood medium, podcasts are amazingly "free." At least some of that stems from podcasting being broken.

Anyway, if anyone is looking for some fertile space to innovate, podcasting is it. Discovery is completely and utterly broken. Even just getting a podcast that you know the name of can be borderline impossible for less savvy users. Podcasting apps kind of suck too. Podcasts struggle to get interaction, which is important for discovery.

abdurraheem 8 hours ago 0 replies      

I grew up not far from Woodlawn High, attended a neighboring school. After hearing about the trees that were planted in Hae Min's memory, particularly the second tree which was planted behind the school where "Lee liked to flop after an exhausting practice session with the lacrosse or field hockey teams" I recalled doing exactly that every summer after baseball practice (summer league was at woodlawn high) and I was curious to know if the tree giving me shade all those years was the one planted in her memory. So I decided to stop by after work and take some photos with my phone while I was around.

Side note: I also went to Sunday school with Adnan's brother - or at least saw him around the campus - but did not know any of them personally. My only memory from the trial days is my parents telling me "See what happens when you have a girlfriend!" (I grew up in a similarly conservative household; personally I'm very liberal.)

researcher88 9 hours ago 4 replies      
I actively try to avoid 20/20 and Dateline crime mysteries. There are so many of these and they are so engaging, and often without any resolution beyond a "he said, she said". The only conclusion I can draw from them is that crime is often incredibly random and irrational, and everyone has credibility problems, real victims sometimes lie to exaggerate and the justice system is deeply flawed.

But I feel that it's disaster porn; gripping us by appealing to the worst of us.

I was excited for Serial until I realized what it is. I feel for the wrongly convicted, if that's indeed what happened, but I'm not watching/listening to be an activist against injustice. No, I'm just distracting myself without learning anything by absorbing scintillating details and pondering irrelevant mysteries.

username223 8 hours ago 4 replies      
This seems to be one of a spate of recent articles, e.g. [1]. For a skeptical take, see Marco Arment [2]. Then again, he could just be expressing sour grapes because people with actual skill, money, and time are invading his turf.

FWIW, I have listened to a lot of NPR via podcast for many years, and see a podcast-only version of it as a natural step. At this point, the amateur "three nerds and a microphone" stuff is background noise for when I can't sleep.

[1] http://nymag.com/daily/intelligencer/2014/10/whats-behind-th...

[2] http://www.marco.org/2014/11/16/why-podcasts-are-suddenly-ba...

jklp 11 hours ago 1 reply      
For some strange reason I thought this was part of Alex Blumberg's (also from This American Life) Gimlet Media - which he started in August this year and which he's documenting at http://hearstartup.com.

Anyone know if the timing of Serial's debut was coincidental, or if there's something happening between the two ex-TAL producers that we don't know about?

ckuehl 10 hours ago 1 reply      
A quick plug: for those who enjoy Serial, there's an incredible amount of high-quality discussion about it on the serialpodcast subreddit [1].

The very high level of engagement the show has with some listeners is surprising. I don't think it's typical, but many posters talk about dedicating hours to reading case documents, re-listening to episodes, doing independent research, etc.

[1] https://www.reddit.com/r/serialpodcast

jwilliams 10 hours ago 0 replies      
If you're in the mood for a podcast recommendation, check out Welcome to Night Vale http://commonplacebooks.com/ - https://en.wikipedia.org/wiki/Welcome_to_Night_Vale
kbenson 9 hours ago 0 replies      
I think Patreon would work really well for this. I know they can support both monthly and per-work-released donation types (with per-month maximums), and being able to pledge a certain amount per episode would make this easy, and me very happy that my money was going towards actually producing content.
armandososa 11 hours ago 0 replies      
I love Serial. But I love more the idea that I live in a world where such a show can exist and be successful.
wanderingstan 6 hours ago 1 reply      
A friend and I are exploring some ideas around next-generation podcasting (and audio entertainment in general). We're at the early prototype stage. There is a lot of room for innovation, and the market is growing, as this article explains. If you're in the SF area and interested in this space, drop me a line.
Animats 6 hours ago 1 reply      
We've had "podcasting" for well over a decade, the iPod is dead, and this is the first "breakout hit"?
kosmopolska 3 hours ago 0 replies      
Here in Sweden, the last episode of Filip & Fredrik's podcast [1] was recorded in front of an audience of 16,592 people this past June.

To be fair they had quite the career on television before that, but podcasting seems to have become a thing here.

[1] http://www.filipandfredrik.com

kenjackson 11 hours ago 1 reply      
I gave them $20. I'm curious how much money in donations Serial can get.
alanfalcon 10 hours ago 0 replies      
I'd love some data behind the claim that this is podcasting's first breakout hit, beyond the comparison to This American Life's slower initial growth in the form. Is this a legitimate claim to fame, or sensationalism from the New York Times from a writer who seems to be largely dismissive of podcasting in general?
supercoder 11 hours ago 4 replies      
I know this will get me downvoted, but I can't understand the fascination with Serial. Overall I found it quite a boring listen. Too many side paths that amount to not much at all; it feels like it could have done with a harsh edit.

But its production quality is high, so anything that leads people to create more content at a similar level, the better!

dankohn1 10 hours ago 1 reply      
Serial has been an immensely satisfying listening experience. I now look forward to Thursday mornings for it to come out.
spacecowboy_lon 4 hours ago 0 replies      
First breakout hit? Not so sure. I heard Night Vale name-checked on the bus 3 months ago.
palidanx 10 hours ago 0 replies      
In addition to serial, I began listening to http://www.wnyc.org/shows/deathsexmoney/ which reminds me of a more poignant Terry Gross.
hartror 6 hours ago 0 replies      
This popped up while I am listening to Serial. But I am a bit of a podcasting junkie so not surprising.
Systems software research is still irrelevant
33 points by mbrubeck  8 hours ago   6 comments top 4
notacoward 1 hour ago 1 reply      
A lot depends on how "systems software" is defined. Very few people have much use for a whole new kernel. Even little pieces of new kernel code are less useful than before, as various functions (e.g. storage) can be handled just as well in user space. So if "systems" == "kernel" then relevance really has been declining. On the other hand, if one considers "system" to include user-space storage and networking, distributed consensus and service discovery, package and system management, JIT and GC in language runtimes, etc. etc. etc. then we're actually in a bit of a renaissance.

The fact is that kernel/user hasn't been a very interesting boundary for a long time. There are hundreds of systems-programming problems that can be solved on either side of that boundary, except that if you try to solve them in the kernel debugging will be harder and you'll get embroiled in the constant turf wars that are the hallmark of a declining specialty. The idea that only kernel hackers do systems programming needs to die.

donavanm 2 hours ago 1 reply      
Hrm. I think there are some decent counterarguments. I will agree that pure academic systems research is being superseded by industry development or academic commercialization. As a trend, I think commoditization, parallelism, and management are where effort (and success) has gone, as opposed to novel research. Make it cheaper, wider, and possible to manage.

Influential counterpoints: distributed systems management/configuration/orchestration, distributed datastores (CAP, Calvin, Dynamo, etc.), systems languages (Go, Rust), distributed processing (Tilera, Niagara, Larrabee, GPUs), network fabrics instead of trees, and stream processing instead of batch (Kinesis, MillWheel).

codeulike 1 hour ago 0 replies      
He seems to think progress would look like "a new wave of built-from-the-ground-up operating systems sweeping away all the old ones every decade or so". Say, if Plan 9 had become dominant in the 90s, would we by now be replacing it with something else? What would everyone moving to new operating systems every 10 years gain us?

There has been mind boggling progress since the 90s, just not in the direction he expected.

justincormack 3 hours ago 0 replies      
I think there is a revival of interest, as mass-scale automation is changing the way we use servers, and security issues are finally changing the desktop/mobile. I found a lot of interest when I organized a conference [1], which is tomorrow (videos will be available); that is encouraging.

[1] https://operatingsystems.io/

Binary Search Is a Pathological Case for Caches (2012)
40 points by ot  7 hours ago   1 comment top
lorenzhs 2 hours ago 0 replies      
This article is a great example of the very tangible effects caused by hardware properties that are often disregarded. Very interesting read!

Also, I can't help but note the (probably non-existent) connection to 3-way partitioning for quicksort (which is consistently ~10-15% (iirc) faster than traditional quicksort). It is interesting how we seem to assume that surely factors of two must be the best choice, computers working in binary and all. But sometimes this is true only in theory, but not in practice (binary search), sometimes it's good but not optimal (quicksort), and sometimes it outright is the provably worst choice (dynamic arrays)!
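For reference, the access pattern the article analyses comes from plain textbook binary search; this is a generic sketch (not the article's code), with a comment on why power-of-two array sizes interact badly with set-associative caches.

```c
#include <assert.h>
#include <stddef.h>

/* Plain binary search returning the index of key in sorted array a,
 * or -1 if absent. Note the probe sequence: every search starts at
 * index n/2, then n/4 or 3n/4, and so on. For power-of-two n these
 * hot indices are spaced by large powers of two, so they tend to map
 * to the same cache sets and evict one another across repeated
 * searches -- the pathological case the article describes. */
static ptrdiff_t bsearch_idx(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;          /* invariant: key, if present, is in a[lo..hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < n && a[lo] == key) ? (ptrdiff_t)lo : -1;
}
```

The fix discussed in the article amounts to changing the memory layout or branching factor, not this loop's logic.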

Craigslist DNS redirected due to hacked registrar
43 points by MikeyJck  7 hours ago   7 comments top 3
Animats 5 hours ago 1 reply      
Their registrar is Network Solutions. The last update to "craigslist.org" there was at 2014-11-24T03:26:40Z, so something did happen to the records at Network Solutions recently.
crummy 6 hours ago 1 reply      
I'm using Google's DNS, and can't load images on Craigslist right now, even after flushing my DNS cache.
iLoch 5 hours ago 0 replies      
Ah strange, I noticed cl was down earlier - I figured it was a fluke and/or something wrong with my own internet.
When G.M. Was Google: The art of the corporate devotional
22 points by digisth  7 hours ago   2 comments top 2
simula67 1 hour ago 0 replies      
Very interesting article, if you ignore the unnecessary jabs at Google and youngsters, and the uncorroborated praise for GM.

But there is an interesting point to note here: GM is not fundamentally the same as Google or other software companies of today. Companies which make physical goods make more money by making more and more of the same thing. If they have done it many times, the process is well understood and can be managed by a specialized person (who specialises in management) in a central way, and that is probably more efficient.

Software companies make money from making new and different things. To make a new copy of software once it is written, all it takes is Ctrl+C and Ctrl+V, or a new web request. These types of companies are probably more efficient if you simplify the communication structures, enabling more collaboration. The well-understood parts (such as Amazon warehouses, Apple's supply chain etc.) can still be managed using the old management style. The creative organizations can use the new management style.

You can see this style variation when the founders are ousted from the company and management is handed off to MBA folks (Apple with John Sculley etc.). Maybe by the time the founders of the new-age tech companies have to pass the baton, the management schools of thought will have caught up (they already seem to be doing so) and things won't be so bad as the author predicts.

hackuser 5 hours ago 0 replies      
Summary: A very interesting essay providing context for Silicon Valley's management style. Google is not as different from GM as we might think, and is trying to solve many of the same organizational problems. GM was the great innovator in management at one time and there is a long history of celebrating the current management fad as freeing employees, and as being generalizable.
Handbrake 0.10.0 released
152 points by chuckreynolds  15 hours ago   47 comments top 11
xfalcox 16 minutes ago 0 replies      
Hey guys, someone has a good list of presets to share?
Zeebrommer 2 hours ago 3 replies      
At times I do need an audio/video converter, and it is always a bit of a pain sifting through all the dubious and low-quality freeware programs. Although Handbrake is not really dubious, last time I tried it, it wasn't very user-friendly or easy either. Anyone know why it seems to be hard to make a nice, easy 'convert arbitrary format to something commonplace' program? And since this is getting upvoted, is Handbrake considered the current standard?
ulfw 7 hours ago 1 reply      
"QSV is only supported on Windows""libav AAC encoder as the new default for Windows""HandBrake now offers BiCubic scaling on Windows via OpenCL"

Sad that the Mac doesn't seem to get many of the goodies anymore.

rossy 10 hours ago 0 replies      
> In addition, we have added the FDK AAC encoder for Windows and Linux as an optional compile-time option.

It's a shame that libfdk-aac is also GPL incompatible. It's hands down the best free AAC encoder, but a version of FFmpeg/libav/Handbrake that's compiled against it is not allowed to be distributed.

jmnicolas 2 hours ago 0 replies      
Handbrake could really use some TLC on the GUI. It's so confusing (at least for me) that I was only using it in CLI mode. This ended up being a good thing: I made a script to convert only at night, when electricity is cheaper.
e40 15 hours ago 2 replies      
When I used Handbrake, I always had audio sync issues. I could never figure out what I was doing wrong. That was on Windows. Now that I'm on a Mac, perhaps it'll go better, though I don't have the need to use it much anymore.
cheng1 11 hours ago 4 replies      
Love the software.

However, it still can't do batch conversion on OS X? You can convert a whole folder with a few clicks in the Windows version.

1ris 14 hours ago 0 replies      
Still no opus?
quink 15 hours ago 2 replies      
While still waiting for Daala I've already started encoding my videos in H.265 separately from handbrake, using mplayer. It's been going very well, H.265 will be awesome once it really starts hitting the mainstream. The bitrates are magically small and x265 is pretty fast. mkv container and Opus audio.
higherpurpose 15 hours ago 1 reply      
HandBrake uses HTTPS, but Sourceforge doesn't...great.
marco1 11 hours ago 1 reply      
XMedia Recode [1] can do all this as well. Looks quite similar as well, by the way.

But of course, it's not open-source, while Handbrake is.

[1] http://en.wikipedia.org/wiki/XMedia_Recode

A Minecraft world that has been played for 3.5 years
232 points by rocky1138  18 hours ago   83 comments top 25
zalzane 17 hours ago 2 replies      
Pretty big world, but it doesn't even compare to 2b2t in scale.

For those who haven't heard of it, 2b2t is an anarchy survival server that's been around for about the same period of time, 3-4 years, with no resets. Virtually the entire map from the spawn point to 5km out is a desolate wasteland littered with ruins, griefed bases, castles, and megastructures.

With the introduction of the hunger system everything got a lot more interesting, requiring new players to make a mad scramble from spawn and try to find some source of sustenance. It's not uncommon for new players searching for food to duck into a 2-3 year old base that's been long abandoned but has a few precious pieces of bread left in a chest.

Typically players will build their bases anywhere between 10-500km away from spawn, and when they do, they build some of the most impressive bases I've seen in the game. One favored hobby of many regulars is to go hunting for these gems that have usually been abandoned years past.

Google it and you can get a good idea of exactly how old the map is, but the pictures really don't do justice to the absolute carnage of spawn.

ykl 18 hours ago 3 replies      
Way awesome! It's really amazing what people can build in Minecraft even in a short period of time, let alone a long stretch.

Here's a map from a server I play on. This map represents only 6 months of work (we reset our map every 3-6 months) and was done 100% in survival mode: http://nerd.nu/maps/pve11/#/61/64/-42/-2/0/0

stevebmark 12 hours ago 4 replies      
Seeing things like this, and others in the thread, honestly depresses me. I've wasted months of my life building in Minecraft, huge structures that I can do nothing but look at, alone. Months where I lost social interaction and self maintenance. Nothing good ever comes of these worlds, and they are self destructive to their creators.

Minecraft, Reddit, Imgur, Facebook, all things you will eventually have to block from your life if you want to achieve anything real. Don't let it consume you. The only winning move is not to play.

Macha 17 hours ago 1 reply      
Server worlds can get impressive pretty quickly. One of my wallpapers is this render from one of the old reddit server worlds: http://i5.minus.com/im5gOe.jpg

On the other hand, my single player world which is about 4 years old isn't nearly as impressive.

ashark 17 hours ago 2 replies      
The ring roads are the most impressive part, IMO.

Between material-gathering, going to the work site, clearing the path ahead (largely avoided here by elevation, but still) and actually laying down the road one... block... at... a... time... roads are very slow to build in Minecraft. They take a large amount of investment before they start to pay off, unlike most buildings, and they can't really be fully appreciated except from map views like this site, so they're kind of low-reward for the people doing the work, too.

I only run a vanilla server, so I don't know how mods might affect that. Were these tool-assisted in some way? I just can't wrap my head around that time involved if not.

cdr 16 hours ago 1 reply      
Stuff like this is pretty fascinating. I've only really played Minecraft singleplayer and intermittently. I almost always end up starting a new world with each update, too, since it bugs me not having the new blocks/structures/etc in already created chunks. I've never even made it as far as doing the enderdragon, usually getting caught up building some ill-advised structure or clearing some endless cave network. I wish the devs would slow down with the new features, the complexity is kind of out of control at this point.
Hawkee 11 hours ago 0 replies      
Here is a world I've been running for 2.5 years. Spawn has been moved several times, so the world spans tens of thousands of blocks. If you look carefully you can find large settlements from continent to continent, http://treestop.com:8123
robinhoodexe 18 hours ago 1 reply      
Very impressive. How many days in total have been spent playing this world?

Also, some gems in there[1].

[1] https://i.imgur.com/RFDD5lZ.png

rakoo 18 hours ago 1 reply      
I have a really hard time believing you built those concentric pathways by hand and not through some level editor...

Very nice world otherwise.

nacs 15 hours ago 0 replies      
Another large server map, from a 1+ year old world that's 12,000 blocks wide. Players' towns are marked:


Mandatum 18 hours ago 2 replies      
Wow, the performance of this website is really good. How is this so fast?
jostmey 18 hours ago 1 reply      
I know almost nothing about Minecraft. How many people played in that world? I ask because it would be kind of sad if one person did all of that. However, it says something pretty cool about humanity if it was constructed by random people passing through, which is that people will naturally work together. In other words, no centralized set of rules is required for people to collaborate to build a (virtual) city.
edem 13 hours ago 1 reply      
The server has enormous lag because of this post. I managed to find a castle full of melons just a minute before I would have starved. I guess I'm lucky.
emilioolivares 12 hours ago 0 replies      
There are some very impressive things created in Minecraft. This is one of them: Westeros (from Game of Thrones) built in Minecraft. Simply amazing; I bought the game just to check it out:


jchonphoenix 18 hours ago 1 reply      
As someone who's never played minecraft, my first response is "I wonder if I can build Jurassic Park in this thing?"
rocky1138 18 hours ago 2 replies      
Neat bit of trivia: Notch's wife, ez, was kind enough to visit once and leave a sign. "ez was here" :)
NAFV_P 17 hours ago 1 reply      
This could be an extreme version of "Where's Wally".
Narishma 18 hours ago 3 replies      
A black page? The website doesn't seem to work in Firefox.
rocky1138 17 hours ago 0 replies      
Another bit of trivia: I set the map to do a full render a day or so ago and it's still going. The map is at 1657200 tiles and counting.
Immortalin 12 hours ago 2 replies      
Doesn't anyone on HN play feed the beast modded minecraft?
edem 16 hours ago 1 reply      
The link is not working for me :(
mkaroumi 17 hours ago 1 reply      
My question:

How many hours/day did you sit with Minecraft? (or whoever has made this)

notastartup 16 hours ago 1 reply      
I wonder, wouldn't such a map invite trolls and griefers who would just come to destroy everything in sight?
alexperezpaya 14 hours ago 0 replies      
I can't even imagine how many penises they must have built in this time.
timetraveller 17 hours ago 1 reply      
What a waste of time.
How Would You Redo the Google Interface? (2004)
25 points by tmslnz  13 hours ago   5 comments top 4
somehnreader 5 minutes ago 0 replies      
Showing random surprises on the main page is related to what they are doing with the doodles; I think that's quite nice.

The physical Google button is a somewhat terrible idea; I think someone tried something similar a while back: http://bit.ly/1rZkPdO

shawabawa3 3 hours ago 1 reply      
Wow, those are all terrible.

Although I guess the first guy's idea was ok (even if the design itself was terrible) as google actually do something similar now

jamesdelaneyie 1 hour ago 0 replies      
Jenny Holzer's piece is by far the most interesting and opinionated, and still relevant to current discussions on security, leaks et al. Love her take on this.

Shepard's and Davis's are trite in both concept and execution. IDEO's is prophetic, but then they're a pretty good forecasting company. Truly disappointed with Shepard and Davis though; walkover commercial designers.

thomasfoster96 2 hours ago 0 replies      
The first one was just an early-2000s Google Now. Actually, scrap that idea, it's a beige text iGoogle.

Shepard Fairey's one isn't too bad. I can imagine Google having been like that, even if it never was.

Pinc: Oculus Rift for the iPhone
43 points by sunnynagra  8 hours ago   22 comments top 9
codeshaman 3 hours ago 5 replies      
Can't wait to see wives and husbands coming back home to their kids after work, and instead of tweeting, whatsapping and facebooking at the table during family dinner, the whole merry family puts on these headsets and dissolves into VR, eating soylent green (http://en.wikipedia.org/wiki/Soylent_Green) rendered as roasted chicken in full 3D splendor.

So what if the weather is insane outside; no big deal if we have to wear masks to breathe. We can always buy clean water at the supermarket, and outside reality will be just fine. We'll spend our lives inside these programmed virtual worlds, where we can be Gods instead. Fuck reality, we're going Virtual.

I know it's not the most popular idea, but I'm just saying... What is the purpose of this, except really cool entertainment?

Why is it that so many extremely smart people work on entertainment instead of some real save-the-world-because-we're-fucked kind of problems?

kape 4 hours ago 2 replies      
Interested to see how this compares to Samsung Gear VR (http://www.samsung.com/global/microsite/gearvr/index.html) which combines Galaxy Note 4 and Oculus Rift and is made by Samsung & Oculus together.
ztratar 5 hours ago 1 reply      
This is extremely interesting. Pretty impressive, actually. I wasn't expecting this level of quality.

I see this as a great opportunity and industry in the future. I'm surprised Oculus themselves and Facebook haven't teased any sort of OS layer on top of the VR environment.

netcan 2 hours ago 0 replies      
I don't get the online shopping thing. What does VR contribute here? I'm sure if VR really gets going, retail will be a part of it but going there at the demo stage seems really weird.

BTW, what are the basic applications of VR headsets?

IE, a smartphone's basic jobs are text-based communication (SMS, WhatsApp, Twitter, email etc.), calls, camera, web browser, media players, games. There are an unlimited number of these jobs and different people use different things, but, well, online shopping isn't one of the big ones.

What are VR headsets for? Do we have any better understanding of this now than we did ten years ago?

I'm not sure if this is practical, but my mind goes to 3d movies. IE, there's a movie that you watch by walking around and listening to different conversations and seeing different things.

fsloth 5 hours ago 0 replies      
The presentation looks impressive. How well does the optical control system for fingers work for interaction and typing?
steeve 1 hour ago 0 replies      
As a long time Oculus owner, I like that they put the screendoor effect on the video
micheljansen 4 hours ago 0 replies      
Really interesting to see this coming from a digital agency / consultancy.

What I would really like to know, is how far along development of this thing really is at the moment. The "preorder" button leads to an IndieGogo page which mentions expected delivery in June 2015 and aside from a really well executed landing page with impressive videos and interface shots, there is little detail about the state of the project.

CmonDev 4 hours ago 2 replies      
Single phone vendor tech is a bit 2012-style. Time to catch up.
gambiting 4 hours ago 1 reply      
It's interesting that they are using the Polish "ć" character in their logo. Reading it out with the "ć" pronounced correctly makes it sound a bit funny.

This is how it should sound: http://www.forvo.com/word/%C4%87/

Why Plan 9 is not dead yet and what we can learn from it (2005) [pdf]
163 points by nisa  18 hours ago   68 comments top 10
stinos 3 hours ago 2 replies      
I only encountered Plan 9 once, and then only the header-inclusion scheme it promotes for its C code. Which, IIRC, goes like this: header files must not include other header files, hence every source file has to include the headers for all declarations used in that source file and in the headers it includes. This means that for any header you want to include, you have to figure out which other headers are needed to pull in all of its declarations, for every single source file. I really failed to see how any possible benefits of that would outweigh the cons.
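For readers who haven't seen the convention, here is a toy single-file sketch of it (all names here are hypothetical, not actual Plan 9 code): the "header" section uses `time_t` but, per the rule, does not include `<time.h>` itself, so the including file must supply it first, in the right order.

```c
#include <time.h>   /* the .c file must supply this first: the "header"
                       below uses time_t but, per the Plan 9 rule, does
                       not include <time.h> itself */

/* --- contents of a hypothetical date.h (headers include nothing) --- */
typedef struct Date {
    time_t when;    /* compiles only because <time.h> came first */
} Date;
/* ------------------------------------------------------------------- */

/* Swap the two sections above and this file no longer compiles; that is
 * the bookkeeping cost described. The claimed payoff is that each header
 * is parsed exactly once, with no nested includes or include guards. */
static Date unix_epoch(void)
{
    Date d;
    d.when = (time_t)0;
    return d;
}
```

The ordering burden falls on every `.c` file, which is the cost being complained about above.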
drsintoma 16 hours ago 10 replies      
I up-voted this in "new" in the hopes of an interesting debate from smarter people than me about why Plan 9 didn't succeed or what would it take for a new OS to break the status quo. But 2 hours in, 0 comments. Is there really nothing to say anymore about this?

Okay, here's a question: what would the world look like today had Android been based on Plan 9 instead of Linux?

frou_dh 17 minutes ago 0 replies      
When I checked out Plan 9 it felt like like walking around a dim and dusty abandoned palace. Eerie thing.
lettergram 15 hours ago 1 reply      
Interesting, one of my buddies (abdge on HN) ported Plan 9 to Gentoo in 2011.


It doesn't really seem "dead" (though close to it); it's interesting to me that even 6 years after this post, Plan 9 was still being ported.

To be honest, a fair amount of the good features have been ported from Plan 9 into Linux. I don't bother using Plan 9 personally because it doesn't have a lot of the support I need, and many Linux distros support what I need.

However, if there more general support I would definitely use it, it's pretty slick.

pmoriarty 14 hours ago 1 reply      
From page 18:

  - But it [Unix] lacked an active data model
  - Farber suggested this in 1978 and we all thought it made no sense
  - He was right, as usual
Does anyone know what this refers to?

Any articles on it I could read?

erik14th 14 hours ago 0 replies      
I guess this might be interesting on the matter of why new operating systems didn't succeed:


Bottom line is Unix is good enough, and no one is interested in the effort/risk it'd take to develop something significantly better.

technofiend 9 hours ago 0 replies      
Have you ever gotten in-between a bunch of siblings fighting? They're all mad at each other until an outsider comes in and then suddenly there's a unified front against the new guy.

I blame the ultimate demise of commercial UNIX on infighting between HP, Sun, Cray, Digital, Tandem and everyone else who had a better idea of what UNIX should look like: i.e. how their UNIX was perfect in every way and safe to lock into because it addressed all your needs.

I remember impassioned arguments about whether HP-UX or Solaris was the better server to host Sybase or Oracle because the developers of said products developed on one first and ported to another, or so it was said.

When I tried to show Linux to my peers in the early 90's it was "Meh, proprietary hardware is better than PC hardware, why would we use that?" Eventually as we all know commercial Linux won the day over pretty much everything else.

I also tried to show people Plan 9 and was again shot down by people who lacked vision, but this time they were right. The only thing it seems most UNIX family members agree on is Plan 9's the red-headed cousin every UNIX vendor hates.

For me it had echoes of Apollo's open namespace concept on their OS. But as others have stated a) it didn't have any compelling reasons to adopt it over standard UNIX for an average business, b) it had a bunch of UI quirks that only a mother could love.

I still think we have things to learn from Plan 9 if you take the open namespace concept and wrap it in a container. My company (a Fortune 50) is getting rid of laptops, privileged remote access (no root over VPN) and even desktops (most everyone's hosted on virtual desktops now.)

Why not give me access to a desktop wrapped in an encrypted container? I boot their OS, it establishes contact with a server that verifies my boot-disk is uncorrupted and then downloads whatever I need to work inside the container, but once I'm done it's all destroyed until next time?

I can operate inside my employer's namespace but once the access is gone there are no local traces? shrug

Anyway back to Plan 9, it was good. It wasn't great enough to make anyone switch.

Lerc 15 hours ago 1 reply      
Architecturally I very much like the look of plan9. From a user interface perspective it's the pits. The UI seems to be anchored to the work-flow of the initial developers.

The last problem is a hurdle that all good operating systems with fewer than a hundred million installs face: driver support.

Plan9 with a Wayland style compositor supporting hardware acceleration could be a base for some cool new directions in UI. A Raspberry PI running Plan9 with a spiffy accelerated compositor and plan9ish file-framebuffer-windows would be enough to convince me to dive in and have a play around.

gulfie 8 hours ago 1 reply      
We can learn that doing the right thing technically is often not done in business.

QNX is another great example of awesomeness squandered.

eruditely 16 hours ago 1 reply      
Plan9 is not dead because Urbit.
Geologic Atlas of the Moon
22 points by pepys  12 hours ago   1 comment top
grouchysmurf 1 hour ago 0 replies      
V8 Moving to Git
137 points by TheHydroImpulse  18 hours ago   42 comments top 10
cenhyperion 12 hours ago 3 replies      
If you're still using Google Code for any projects I'd really recommend moving it to another service at this point unless you have a compelling reason to stay. There's some good evidence at this point that's leading me to believe Google Code is next on the chopping block for dead Google services.
random_ind_dude 6 hours ago 1 reply      
The commit message "Fix mozilla expectations after regexp change" looked interesting to me and I took a look at the change.


Looks like V8 runs Mozilla's JS tests too. I didn't know that. Does Mozilla do something similar to ensure that V8's tests work in Mozilla(Firefox) as well?

Osiris 4 hours ago 2 replies      
I find it interesting that they commit directly to master. I didn't see any branch merges at all. What is their workflow if they don't use branches? Do developers work offline on a fix then squash all their changes into a single commit and push it to master?

Generally having so many devs working off the same branch at the same time can be a bit problematic. My philosophy is that master should be for branch merges only.

TheAceOfHearts 16 hours ago 3 replies      
I'm curious if they'll be moving to GitHub as well. It seems that would be another nail in Google Code's coffin.
JoshTriplett 14 hours ago 1 reply      
I look forward to seeing the rest of Chromium follow. Hopefully it'll move from Reitveld to Gerrit and repo in the process. Reitveld has SVN support, but it and its command-line tools are otherwise more painful than Gerrit in every way.
amelius 15 hours ago 0 replies      
I hope they also modify their build process a little.

Right now, when you want to build V8, it automatically downloads all the tools it depends on. This makes it difficult to store one particular version of the library and all the tools it depends on.

Bahamut 15 hours ago 0 replies      
I thought V8 had been in the process of moving to git for a little while? It currently has some weird git/svn hybrid setup, I believe.
tn13 15 hours ago 0 replies      
This is good. GitHub mirrors of the repo are already available. Even though this does not help contributors, it would help people like me play with V8 a lot more.
baldfat 11 hours ago 2 replies      
I got 4 downvotes, with people saying: "Git doesn't equal GitHub"

My post: It is good to see that a company can accept that there is a better tool, github.

They were WRONG. Google is moving to GitHub: https://github.com/google

Lots of repos are hours old. Thanks, downvoters.

baldfat 16 hours ago 2 replies      
It is good to see that a company can accept that there is a better tool, GitHub.

I remember when I thought sourceforge was so good. Now I silently weep when I need to go there for anything.

       cached 24 November 2014 14:02:02 GMT