Hacker News with inline top comments - 23 May 2016
Roundcube Webmail 1.2.0 is released with PGP support roundcube.net
110 points by weeha  3 hours ago   22 comments top 10
xvilka 3 minutes ago 0 replies      
I also recommend trying Mailpile [1], which was built with security in mind.

[1] https://www.mailpile.is/

xrorre 24 minutes ago 1 reply      
Here's an old XSS exploit for Roundcube from 2013:


I still use RC despite the long history of XSS attacks against it. Luckily RC uses progressive enhancement, so it still works with JS turned off. I just assume emails can still execute JS in 2016. Perhaps it's wrong of me to use RC with JS turned off as a preventative measure, but you have to adore that user interface! It's the only reason I choose RC over other self-hosted email web apps (and there are few to choose from in this space). I like the simplicity of SquirrelMail, but Roundcube looks and feels too good not to use.

rmoriz 2 hours ago 0 replies      
I wish someone would do this for S/MIME. S/MIME has native support in many MUAs, even Mail.app on iOS. http://smime.io/


embik 2 hours ago 1 reply      
For everyone who (like me) wondered what happened to "Roundcube Next", they released a statement 8 days ago[1] about it. Sounds like they had personal problems getting in the way. Glad to see the project is still alive.

[1] https://www.indiegogo.com/projects/roundcube-next--2#/update...

benbristow 2 hours ago 0 replies      
Rainloop already has this: http://www.rainloop.net/

Been using it for a year or so now, it's fantastic and has never let me down.

arviewer 2 hours ago 6 replies      
I guess this means you have to upload your private key to the server. I always wonder what happens when the key is copied and used by someone else. Can you revoke the key? What happens to sent and received messages from the past? Do you still need the old key (private or public) to read those? Is there a private master key that can create a private sub key that can be used to upload to that server?
bechampion 2 hours ago 1 reply      
Ha, I remember chasing the perfect webmail system a long time ago... before Gmail, of course. Horde, Roundcube, SquirrelMail... god, I've never found the perfect one!
Sephr 2 hours ago 1 reply      
It's nice to hear about the server-side PGP support (searching!), although it's unfortunate that the client-side solution, Mailvelope (or more specifically, the OpenPGP.js library it uses), still doesn't support any ECC algorithms.

Fortunately Google's End-to-End extension does support ECC algorithms (no idea if it integrates with Roundcube though), but it seems it still isn't ready for production distribution on the Chrome Web Store.

zby 2 hours ago 1 reply      
If PGP were managed by the browser, we would be able to sign everything we post on the web, not just emails.
tiatia 1 hour ago 0 replies      

But I prefer Afterlogic: http://www.afterlogic.com/

Wish it was open source/freeware.

A look at what's coming to PHP 7.1 learninglaravel.net
93 points by debrabyrd  3 hours ago   52 comments top 8
TazeTSchnitzel 1 hour ago 3 replies      
I'm more excited by some features not detailed in the article.

Return type declarations, which were introduced in PHP 7.0, are being enhanced somewhat.

First, it's now possible to declare nullable return types (previously all return types did not permit null):

 public function getFoo(): ?int;
(You can also use ? on parameter and property types.)

Second, we now have a void return type for functions where there is no useful return value:

 public function handle(): void;
(Disclaimer: I wrote the void return type RFC and implementation, so I'm biased here.)

We're also getting type declarations for class properties:

 class Person {
     public string $name;
     public int $age;
     public bool $isEmployee;
 }
There are a few other things as well. The list() syntax (destructuring assignment) has been shortened to [] and now lets you specify keys (disclaimer: I was involved in both of these). Also, trying to add non-numeric strings together with + now produces a warning (disclaimer: also me).
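The shortened destructuring syntax mentioned above can be sketched like this (a minimal illustration of the PHP 7.1 behavior the comment describes; the variable names are just examples):

```php
<?php
// PHP 7.1: [] works as a shorthand for list() in destructuring assignment.
[$a, $b] = [1, 2];                              // $a = 1, $b = 2

// Keys can now be specified as well:
['x' => $x, 'y' => $y] = ['x' => 3, 'y' => 4];  // $x = 3, $y = 4
```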

maaaats 2 hours ago 2 replies      
sideproject 2 hours ago 3 replies      
Off topic, but curious: is there any discussion on whether there will be support for good concurrent programming in PHP in the future (e.g. channels in Go & Elixir)? Of course, you might say "you're using the wrong tool..."
xiaq 1 hour ago 1 reply      
> Catching multiple exception types

> Support class constant visibility

Reminds me of what Rob Pike once said, "[Java, JavaScript (ECMAScript), Typescript, C#, C++, Hack (PHP), and more] are converging into a single huge language".

naranha 1 hour ago 2 replies      
PHP looks more and more like Java.
CiPHPerCoder 38 minutes ago 0 replies      
Not mentioned in the article: Several RFCs being voted on or discussed right now.

Also, https://wiki.php.net/rfc/libsodium

codeulike 1 hour ago 4 replies      
... Note the syntax isn't the usual double pipe || operator that we associate with or, rather a single pipe | character ...

... PHP 7.1 introduces visibility modifiers to constants ...

... With PHP 7.1 it is now possible to specify that a function has a void return type, i.e. it performs an action but does not return anything ...

... Example four contains numeric values, so everything else is stripped out, and the sum of those are used to calculate the total of 10. But it would still be nice to see a warning, as this may not be intended behaviour ...

Good to see PHP continues to introduce new inconsistencies while striving to break existing code by fixing old ones. Best of both worlds!

edit: OK this was uncharitable and technically some of them are not inconsistencies, just things I thought were weird. I know PHP is widely used and considered useful and will stay with us forever. But I found those snippets interesting.
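The single-pipe multi-catch syntax quoted above looks like this in practice (a minimal PHP 7.1 sketch; the function and exception choices here are made up for illustration):

```php
<?php
// PHP 7.1 multi-catch: one catch block handles several exception
// types, separated by a single | (not the boolean || operator).
function describe(bool $fail): string {
    try {
        if ($fail) {
            throw new LengthException("too long");
        }
        throw new DomainException("out of range");
    } catch (LengthException | DomainException $e) {
        return get_class($e) . ": " . $e->getMessage();
    }
}
```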

universal.css github.com
94 points by jchampem  3 hours ago   35 comments top 20
chrismonsanto 1 hour ago 0 replies      
The effect of the joke is lessened when it is labeled as a joke, doubly so when every comment here copypastes the line saying it's a joke.

See also the latest reprinting of "A Modest Proposal," which kindly has "SATIRE -- DO NOT BELIEVE" in large caps on the front and back covers.

aplummer 2 hours ago 0 replies      
This is great. Something similar was on hacker news without the "of course this is a joke" qualifier:


roddux 1 hour ago 0 replies      
>Bootstrap V4 recently introduced spacing utility classes like m-t-1 (which translates to margin-top: 1rem !important), and we thought we'd expand this idea to more classes.

Which is worse: when it's done as satire, or seriously?

sunnyshahmca 10 minutes ago 0 replies      
I know it is a joke. I have been a contributor to the WebKit project, and these kinds of jokes really scare me to death.
ojii 2 hours ago 0 replies      
At under 5MB, it's quite lightweight compared to other modern tools too, nice!
usmanshaikh06 2 hours ago 0 replies      
Is this a joke?

Of course it's a joke. Use semantic CSS class names.

ryannevius 2 hours ago 1 reply      
This reminds me of Tachyons[1], except Tachyons (supposedly) isn't satire.

[1] http://tachyons.io/

yAnonymous 1 hour ago 0 replies      
They're making fun of Bootstrap, but having classes that allow you to define margins and padding quickly by adding a class is actually really helpful. Of course that shouldn't be expanded to every possible property.
Randgalt 1 hour ago 0 replies      
The real joke is how screwed up client side programming is. Here's a library that's an insider/hipster joke but it's only obviously a joke to hipster/insiders.
megalodon 2 hours ago 0 replies      
Delightful, cringe-entailing humour; this definitely made my day.
nachtigall 1 hour ago 0 replies      
You'd think this is a joke, but have a look at this comment at https://hacks.mozilla.org/2016/05/css-coding-techniques/comm...

> Now that we're writing almost all of our html in modular fashion, I have found mix-n-matching pre-defined css classes works the best. i.e. class=inline-block bg-bbb text-333 padding-5-15

blowski 49 minutes ago 0 replies      
Funny, but before we all get on our high horses:

1. Bootstrap is partly for prototypes and quick interfaces where front end best practices don't matter.

2. If you're using a preprocessor, you can include Bootstrap's classes and rename/combine them to something semantic.

3. Something can be a good idea when done in small quantities, and a terrible idea when taken to extremes.

syzygykr 1 hour ago 0 replies      
cornflowerblue was a nice touch.
fortytw2 2 hours ago 1 reply      
> Where is the documentation?

> You don't need documentation.

What constitutes self-describing code is wildly different depending on the person. I mean, really?

EDIT: I definitely missed that this is a joke :'(

babby 2 hours ago 0 replies      
I've done this before. The performance on IE, Firefox, Safari and mobile is complete shit, yet surprisingly good in Chrome.
mobiuscog 57 minutes ago 0 replies      
These frameworks get everywhere.
jstoja 2 hours ago 1 reply      
In the end, why not directly have the JS read the class and generate only what's needed? That would be very cool! I'm still having some doubts about the maintainability (duplication, isolation...) of such styling, btw...
smegel 35 minutes ago 0 replies      
So I have to choose between a several-megabyte download and using JavaScript to render styles?

No thanks.

Honestly I thought bootstrap was the only css I would ever need, and this hasn't changed my mind.

fiatjaf 19 minutes ago 2 replies      
People really waste their time on these jokes.

I have a lot of ideas for small side projects that could be good (or probably not, but at least I wanna try them seriously) and can't get time to implement them, and people who have this time waste it writing universal.css.

AWS Lambda Is Not Ready for Prime Time datawire.io
196 points by sebg  8 hours ago   58 comments top 24
southpolesteve 6 minutes ago 0 replies      
I can empathize with the OP. Using Lambda in production still has some pretty rough edges. You basically have to use one of the deployment tools/frameworks out there[1]. There are also just certain things it won't do (someone else mentioned binary data). Learning some of the limits takes time and it is not always super clear.

But my personal experience says Lambda is ready for prime time. We use it in production: ~15 million API calls per day, mostly user-facing HTTP requests. Even with the rough edges I would prefer to use it for any new web development project. It feels like at least an order of magnitude reduction in the risk/complexity of scaling and deploying code. It's not "zero", but that is huge for me and for my team. We spend more time shipping.

[1] We wrote one: https://github.com/bustlelabs/shep but there are plenty of others mentioned in the comments.

jedberg 6 hours ago 4 replies      
We built Kappa[0] to handle our production Lambda deployment. It was built by the same person who wrote Boto and the AWS command line.

Here is an example[1] of a program built in Python that uses Kappa, and here is a video tutorial[2] on how I deploy that program with kappa.

Obviously I disagree with the premise. It's true that it is more difficult to use than other technologies and you'll certainly pay the pioneer tax, having to develop your own tooling, but it's ready for production traffic.

Error handling is OK, could be better (it takes a while for the CloudWatch log to show up).

The real big problem is testing. It's really hard to test if you have more than one function because there is no mocking framework (yet). It's fairly easy to deploy and test with a test account, but local testing still needs to be solved.

[0] https://github.com/garnaat/kappa

[1] https://github.com/jedberg/wordgen

[2] https://www.youtube.com/watch?v=JtLLkCt-lPY&feature=youtu.be

philliphaydon 57 minutes ago 0 replies      
AWS Lambda is not ready for prime time, but not for the reason the author states.

Lambdas are unpredictable, which is probably their biggest downfall. You can get super fast deployment and execution one second; the next, you're getting random execution failures outside your control.

Lambda often feels like it's unsupported by AWS. It took them over a year to support the latest version of Node.

Java perf is terrible and support should either be dropped or fixed. Go really should be supported out of the box.

Responses from Lambda are not flexible.

Deployment of Lambda is frustrating, and the inability to execute a Lambda from SQS is even more frustrating.

Could go on.

adzicg 5 hours ago 2 replies      
I think this is a bit too harsh. Sure, it has a few rough edges, and not everyone's idea of prime time is the same, but Lambda is perfectly suited for lots of use cases that do not depend on very low latency -- for example, file conversions, async tasks where users can wait a few seconds such as payment processing, automating incoming e-mail workflows etc.

We've been running APIs on it for six months with no issues, and are now in the process of moving the entire backend from heroku to Lambda. So far, no major issues.

Regarding Lambda being a building block, I actually like that. Werner Vogels points out [1] that one of the key lessons from 10 years of AWS is that the market wants primitives, and that higher-level toolkits emerge from that. A ton of helpful third-party stuff is already out, such as apex, serverless and so on. We built a toolkit[2] that lets people use Lambda and API Gateway from JavaScript, very similar to lightweight web frameworks (e.g. api.get('/hello', function (request) {...})).

Documentation is there, just not in the most sensible places, and the whole pipeline is optimised for Java processing (e.g. Velocity VTL for API Gateway transformations lets people do everything they need, as long as they know the Java collections API executed below).

[1] http://www.allthingsdistributed.com/2016/03/10-lessons-from-...

[2] https://github.com/claudiajs/claudia/

chisleu 7 hours ago 2 replies      
When US East was freaking out last year, we found Lambda was unreliable. For days ahead of the outages, Lambda would take many hours to respond to S3 events.

When an event eventually came in and fired, the logs reflected the time, hours before, when it should have happened.

Sadly, I don't have a support contract so I couldn't get any help. The forums just assumed I was doing something wrong, until the outage which was linked to dynamo IIRC.

We moved on from lambda as well.

messel 1 hour ago 0 replies      
(Repost from the blog comment thread. Would dig any feedback from HN readers who use Lambda in production.)

Here's the setup I'm hoping to leverage Lambda for: light workers. I have kue.js/redis for submitting jobs to and creating workers. My subscribed worker listeners will simply trigger lambda.invoke with JSON payloads (no need to call or support HTTP endpoints, etc.). No need for API Gateway either.

I'm starting with apex.run as a deployment tool and writing/running all tests locally. I assumed local testing is doable by hitting the same func exports with mocked inputs - this could be off.

See any big hurdles with that usage plan?

As an aside, I've got a backlogged task to explore serverless, and their moving away from cloudformation shouldn't be an issue (assume terraform?)

jmathai 7 hours ago 0 replies      
Lambda does have a ton of rough edges and documentation is one of them.

But we spent the better part of 4 weeks figuring all of this out and automating it. Once automated, it's pretty brilliant.

It's not just the automation of getting a lambda function and api gateway working together (though that's a royal pain). It's building the tools to develop and test locally (which we've also done).

The service we created to automate everything is called Joule (it's not ready for prime time, but you can kick the tires here: https://joule.run - it supports Node; Python would be easy if we ever get around to it). Docs are here: https://joule.run/docs/quickstart

Anyways, the point is that it's possible and pretty amazing once you start deploying microservices using Lambda.

DDNS using Lambda and Route53 and Joule - https://medium.com/@jmathai/create-a-serverless-dynamic-dns-...

Group Text Message Channel using Lambda and Twilio and Joule - https://medium.com/@jmathai/create-a-group-text-channel-in-u...

Sources on Github for the above Joules



Edit: Here's a Joule that takes an area code, looks up the city name by parsing Google search results and using that to get a creative commons photo from 500px.


Source (ugly but functional) - https://github.com/jmathai/area-code-500px/blob/master/src/i...

ThomasRooney 6 hours ago 0 replies      
I've been building a production project fulltime with the [AWS Lambda/API gateway/S3 Website/Route 53/..] stack for about 3 months now. The idea of a holy grail type serverless application (high resiliency, high performance, low price, low complexity) was too attractive for me to ignore. Here's my two cents:

1) Documentation is bad, but not insurmountable. There's enough usage of these platforms now that you'll get pretty far searching for and adapting open source code.

2) Error handling is fine once your code is running, but getting execution there (and the response out again) can be painful.

3) Once sufficiently automated, all these woes go away.

This automation could be done with a framework, however I was skeptical of giving something like Apex[1] or serverless[2] access to my AWS account. Instead I've hand-written terraform[3] for all of my deployment. The documentation isn't great, but there are enough examples out there now to make it possible to glue something working together. I started with this project[4] and wrote a bunch of bash and terraform templates to make it extensible.

The main issues that I have run into haven't been with Lambda itself. Once your code is running, you can build appropriate error responses for all the edge cases. However the AWS API gateway seems to expect you to configure precisely what you want as an input to your lambda, and precisely how the lambda response maps to a HTTP output (the defaults are sane, but overriding isn't easy). I started with the Javascript AWS SDK on my frontend to just invoke the Lambdas directly, and have managed to ignore these problems but for my API integrations, which have ended up a bit more complex than expected.

[1] https://github.com/apex

[2] https://github.com/serverless

[3] https://terraform.io/

[4] https://github.com/danilop/LambdAuth

ramon 59 minutes ago 0 replies      
Obviously I disagree with the premise; you can work easily with Lambda, and there are a lot of debugging methods you can use, especially the CloudWatch logs attached to the functions.

It's more than ready for prime time! It's an awesome tool.

apatap 7 hours ago 0 replies      
Amazon API Gateway supports OpenAPI (http://swagger.io/) via Console and AWS-CLI.


This approach allows you to avoid a lot of the mentioned Amazon API Gateway hassle.

RAML (http://raml.org/) seems to be also around the corner. (https://github.com/awslabs/aws-apigateway-importer)

"API First" is in general quite promising.

neo2001 3 hours ago 1 reply      
Don't want to hijack the thread (sorry), but I've been working for the last 10 months on a tool which might be of interest to people reading this article.

This tool will be open source in the following weeks, but at the moment it is in closed beta. I'm looking for people interested in giving it a look. If you are interested, drop an email to me[at]jorgebastida.com and I'll invite you to the repo.

tl;dr version: dead simple Lambda+Kinesis+DynamoDB+S3+CloudWatch+API Gateway over CloudFormation, with support for Python, JavaScript, Java, Go etc... Lots of examples including Telegram, Twilio, Slack... and quite extensive documentation (which I think is the key to adoption of a technology like this).

lambdacomplete 3 hours ago 0 replies      
Most of the issues described in the article are solved by projects like [0], [1] and [2]. [2] was particularly easy to work with (tried it with a standard installation of Django CMS and it was working very well, only in US regions) and the advantage is that you can still work locally as you would usually do, even keep a testing environment for CI on a managed server, and just replace the production server with it.

[0] https://github.com/serverless/serverless

[1] https://github.com/Miserlou/Zappa

[2] https://github.com/Miserlou/django-zappa

desdiv 2 hours ago 2 replies      
I was planning on porting some short-lived but computationally intensive jobs to AWS Lambda. These run once per day so cost isn't an issue at all. The whole plan fell apart when I found that Lambda doesn't support more than 2 concurrent threads[0].

Is there any service out there that does this? Basically I'm willing to pay $1 to rent a c4.4xlarge instance for just 1 minute. Keep in mind that the hourly rate of c4.4xlarge is only $0.621 on US East, so I'm willing to pay a huge premium here.

[0] http://stackoverflow.com/questions/34135359/whats-the-maximu...

emilong 7 hours ago 1 reply      
Fair point in the article about SEO for technologies being an issue when trying to find help. "Lambda" has a good chance of leading to irrelevant results, even if there was a lot of experience with it documented online. It's a good reminder to try for high entropy names.
SideburnsOfDoom 2 hours ago 0 replies      
Seems similar to the concerns that we have on AWS Lambda around the difference between proof of concept and software that is fit to run at scale in production. More specifically: from a Lambda function, how do we get the metrics into our statsd server and the errors into our ELK stack?
AReallyGoodName 7 hours ago 1 reply      
It can't do binary data.

You can redirect to a file the Lambda function writes out. But that sucks.

AWS Lambda is the perfect use case for something like dynamic image sizing. Except if you use it for that you'll force all your users to do a redirect when fetching images. No easy way to clean up when you do it that way either.

e2e4 5 hours ago 0 replies      
I've been a very happy lambda user (using Scala); serverless architecture is really nice. There are some remaining issues (e.g. cold start; although should not be an issue once you actually have decent amount of traffic); many of the issues are getting resolved rather quickly (e.g. API Gateway integration has improved dramatically). I am not aware of any alternatives that come even close to lambda especially if it needs integration with many other services.
fibo 5 hours ago 0 replies      
I used Lambda to implement a loader from S3 to Redshift. The serverless context and the ability to be triggered on events (i.e. on S3 file creation) are great, but I found the same problems described in the article.

I can suggest giving tj's project apex.run a try; it saved me a lot of time. Also, errors and debugging are difficult, but remember there is an EC2 instance behind Lambda in the end. I just mocked the lambda function and debugged on a real server.

tmsam 7 hours ago 2 replies      
Has anyone used hook.io as an alternative? I have only built toy projects with it, nothing substantial, so I can't speak to important things like error handling, but so far I really like it and find it substantially easier to use than Lambda.
djhworld 5 hours ago 1 reply      
I'm still very suspicious of 'serverless' architectures and people trying to hammer Lambda into fitting that ideal.

Lambda seems best suited for data processing tasks, one off 'cron' style jobs and synchronous request/response tasks that don't execute frequently

psiniemi 7 hours ago 1 reply      
We've used the serverless framework https://github.com/serverless/serverless for automating it. It still has some kinks, but should be good enough for automation.
secoif 4 hours ago 0 replies      
> "Lambda is not well documented"

AWS in general is not well documented. Well-written drivel mostly.

My guess is that there's little to no feedback between documentation writers & developers actually trying to use that documentation to achieve real outcomes. Same could be said about many of the AWS UIs.

nicoster 7 hours ago 0 replies      
Reading the title, I thought performance might be one of the problems; fortunately it's not. I wrote a microservice and deployed it to Lambda, and it seems good. The documentation is poor, but we figured it out anyway. For error handling we didn't put in a lot of effort: if there's an error, the request simply fails. Since it's not a core service, that behavior is okay. BTW, we're using Node.js, and the development flow is okay with the help of serverless.
api 7 hours ago 1 reply      
(4) Lock-in to a closed mainframe, when you can easily duplicate the same design pattern with open stuff.
Project Murphy An imaginative bot answering What-if questions projectmurphy.net
27 points by coolvoltage  3 hours ago   9 comments top 6
aardshark 2 hours ago 2 replies      
What if you didn't have to sign into any of those services to play with Project Murphy?
gearhart 31 minutes ago 0 replies      
So this takes a sentence of the form "what if <x> <predicate> <y>", discards the predicate, gets photographs of x and y from a tagged-up database, identifies the faces in them, and merges the face of x into the place where the face was in y.

Well done Microsoft.... very imaginative.

iverjo 44 minutes ago 1 reply      
Some example images:

"What if Barney Stinson was a woman?": http://imgur.com/l7yVhwj

"What if Harry Potter looked like the artist Savant?": http://imgur.com/ePuWVsd

"What if Mark Zuckerberg was 10 years old?": http://imgur.com/75qGJQC

s17tnet 2 hours ago 0 replies      
Really? A closed, mobile-app-only service?

What if people stopped demolishing the open web?

wodenokoto 1 hour ago 0 replies      
How do I go from this landing page to actually chatting with Murphy? I have Messenger installed on my Android phone, but clicking the button opens a page telling me to download Messenger.
brospars 1 hour ago 0 replies      
Some pictures are pretty fun, like "What if John Cena was a singer?", but others don't work that well...
So You Wanna Buy a Telescope: Advice for Beginners (2015) scopereviews.com
183 points by Tomte  12 hours ago   56 comments top 22
aortega 5 hours ago 6 replies      
I disagree with the astrophotography thing.

Even with a telescope, your human eyes are limited to planets and the moon; that's it. Forget about seeing galaxies with the naked eye unless you have a 1-meter diameter telescope. All other objects are basically points of light. Artificial satellites are really fun to see, but also points of light.

But a 2006 DSLR (EOS400) that I found in the garbage, connected to a cheap Newtonian telescope, can capture 1000x the detail your own eyes can see. Is your telescope too small? No problem. Just increase the exposure or ISO; you cannot do that with your eyes.

I've seen comets moving in real time; tracked satellites; resolved detail on the ISS and the moons of Jupiter and Mars; seen incredibly detailed nebulae and a freaking quasar. All this from my window in the middle of the city, with the worst light pollution.

One advice I have is, try to get a scope that's no more than 15 kg. More than that, and it's a hassle to move.

And get a computerized robotic mount. They are expensive (~800 US$), but awesome. No point in getting a 15th-century instrument today.

lake99 6 hours ago 2 replies      
The advice here seems dated, even for 2015. Here's my advice:

1. Install an astronomy app on your mobile phone. Start with free ones, and see where it leads you. I recommend Skeye. This is good for going out by yourself and looking up things you see in the sky. For a PC, Stellarium is the best I have seen.

2. Get a camera capable of taking long-exposure RAW photos, and get a tripod. There is so much hidden within the range of our FOV that a telescope or binoculars are overkill. You'll use the camera for other things, so that won't be a special-nights-only investment. A basic DSLR should be fine, though I am curious about how well generic travel-zoom cameras do. If you have such a camera already, and played around with it, let me know.

Telescopes come after this stage.

> What about Astrophotography?

> Don't.

Sorry, this is shitty advice. His friend blew thousands of dollars on the hobby. I didn't.

> and untold thousands of rejected images

In the days of DSLRs? That's like saying I'm no good at computer games because of all the "bullets" that didn't hit my intended target. Lame. To put it bluntly.

> 8) Avoid any thoughts of astrophotography.

Fuck off.

ackfoo 3 hours ago 1 reply      
I used to love to go out at night with my 10" reflector and see a point of light magnified to... a point of light. For variety, I would look at binary stars to see... two points of light. Sometimes, I would search for nebulae and galaxies to see... a fuzzy patch of light. Optical astronomy is such fun.

Now, alas, the light pollution is so severe that all my points of light and fuzzy patches are severely washed out.

So I sold my telescope and took up scuba diving. It was all fish and coral, fish and coral, until a few years ago. Now it's no fish and fuzzy white patches that used to be coral.

The former coral looks like nebulae and galaxies through light pollution in my 10" reflector.

dwc 9 hours ago 0 replies      
1) Spot on. If you won't bother to learn at least a few prominent bits of the sky first then a telescope will be fun about twice and then gather dust.

2) Subscribe? Er, see if you can buy individual copies first. I dropped my subs because, well, the articles all started to become pablum. Probably worth picking up a few issues when you start out.

3) Yes, join a club or tag along. There's no better way to get an idea of what it's about than to meet some enthusiasts and look through their scopes. And a really friendly and helpful crowd, in my experience.

Binocs: I bought 10x50s, and I wish I had bought 7x50s. The higher magnification is nice and all, and I can hold them steady enough...for a little bit. You can use 7x50 without strain for a lot longer. Also, yes, do buy binocs. You'd be amazed what you can see with a pair of 7x50s that you can't see with naked eyes. It's a great, cheap, convenient way to get started.

Telescopes: buy one that you can afford and that you will use. Like my binoc choice above, I bought something actually nice but big enough that it was a pain to haul around and set up. Save that for later. Buy something that you will pull out on a whim and have a nice evening. This is a very individual choice, of course. If you live in a fairly dark place and the farthest you will take your scope is to your back yard, by all means get that light bucket. :)

Overall very solid advice in the article.

SoulMan 8 hours ago 1 reply      
I get this question all the time: "I want to get started with astronomy - which telescope should I buy?"

It's so important to make people realize one does not need a telescope right away. A decent pair of binoculars (and probably a book with sky charts) is enough to spot the first few objects in the sky. If the hunger for celestial objects persists, then one can go for the telescope; otherwise there is a high chance the telescope will gather dust at home.

The other problem with hastily buying a telescope is that we end up buying a beginner's one on a low budget, which is usually no better than good binoculars of the same power (magnification and light gathering). It is better to wait and have a decent budget to buy a good telescope later on.

verytrivial 1 hour ago 0 replies      
I have dabbled with simple whole-sky night photography with a DSLR and wanted to do more (yes, going straight against the no-astrophotography advice from the article!). But as the article shows, there are simply so many options and permutations of equipment!

I would love a simple (and very approximate) web page with a few sliders and check boxes where I could say: I have budget range X, and want to do Y and Z. It would reply: "you need the following ... and these would be nice ..." Then I could click "I have these", and it would go: "Okay, then go and add this."

This goes for a few other grouped-purchase hobbies/undertakings. I ran into similar befuddlement trying to put together a reasonably sound field recording setup. There's a lot of opinion in these areas; perhaps a general "recipe" engine would be good, to let people submit their codified advice.

DigitalJack 8 hours ago 0 replies      
I got an ETX-125 as a gift a number of years ago. The tripod is a pain to work with, but otherwise I really like this scope.

I just have one low-power eyepiece, I think the 26mm Plossl. Saturn looks pretty much like the simulated 100x photo.

I'd like to get a Barlow and a higher-magnification eyepiece for looking at craters on the moon.

I've messed with photography. It's a lot of trouble. Most of the time the easiest thing is to use my phone to take a picture through the eye piece.

wmblaettler 9 hours ago 1 reply      
Bought the recommended Orion XT8 Dobsonian a year ago for my 10-year-old daughter (ok, and for myself!). It is super easy to set up (minutes), captures plenty of light and has been a joy to view through. I have observed the moon, Saturn, Jupiter and its moons, and several Messier objects - even tracked a few satellites as they transited overhead.
arey_abhishek 5 hours ago 0 replies      
I agree with most of the points here. There should have been an additional section on choosing the right mount. There's the cheap AZ mount, which makes it easier to locate an object but is too painful to use while tracking it. The expensive EQ mount takes a little bit of practice and frustration before you can use it well; it makes it difficult to find an object, but tracking an object is a breeze.

Go for an AZ/EQ mount with a motorized GOTO unit if you can afford it. It'll save you a lot of time looking for objects.

astronomonaut 4 hours ago 0 replies      
Excellent advice. I just sold off most of my astronomy equipment except for my first telescope - an Orion XT8 - and some mid-range Plossl eyepieces. If I did it all over again I'd buy everything used off astromart.com (no affiliation - it's just the best place to find good used astronomy gear). Seriously though - join your local astronomy club, try out members' telescopes and use their telescope library if they have one.
Roboprog 9 hours ago 0 replies      
I've had several small telescopes, but my favorite turned out to be my first: a 4.5-inch Newtonian reflector on an unmotorized equatorial mount, with upgraded 35, 25 and 10 mm focal length Plossl eyepieces.

A small refractor on a camera tripod is too frustrating, and a small cassegrain on a computerized mount often is too much fuss to align and set up.

dreamcompiler 7 hours ago 1 reply      
One thing he didn't talk about is clock drives. They're essential for photography, but they're also good for keeping the scope on target if you want to let other people look through the scope. Especially at higher magnifications, the thing you're looking at will move out of the field of view in a few seconds without a clock drive. The one disadvantage of Dobsonians is you can't put a clock drive on them (well you can, but it requires two motors instead of one. And high magnification is not really the point of a dobbie anyway).
brendangregg 7 hours ago 0 replies      
Binoculars are handy, but I wouldn't get anything cheaper than the Nikon Aculon series (usually $80-$120; they used to be the Action series). Sharp optics, wide apparent field of view...
cridenour 7 hours ago 0 replies      
After attending and then volunteering at my local observatory for a few months, I pulled the trigger on a Nexstar Evolution 8. Tons of fun to let people choose what to look at on the iPad and the tracking keeps me from having to make adjustments every minute when we zoom in on Saturn to see Titan.
maratc 2 hours ago 0 replies      
I am visiting the USA soon and thought of getting a telescope, but not a 42lb/20kg monster. What's the best I can do for 10 kg/$300 or so?
joshumax 8 hours ago 0 replies      
After being trapped in this hobby for years now, I've finally settled on (for the time being) a nice 10" RC from GSO/Astro-Tech, as well as an ES80ed and SVQ100-3SV, and a Losmandy Titan mount. While not specifically a mount for beginners, it is a nice choice considering the RA and DEC axes come apart to reduce weight, and it doesn't use a proprietary mount control protocol like some companies do -- looks in the general direction of a certain high-end red-colored mount manufacturer
sankoz 6 hours ago 0 replies      
I wish I had come across this article earlier. Recently bought my first telescope: a 6" reflector with EQ mount (Celestron Astromaster 130 EQ). While the images are superb, I find it very hard to find my desired objects in the night sky. The finderscope on the unit is either useless, or I don't know how to use it correctly.
jharohit 8 hours ago 0 replies      
I am a first-time buyer and spent months comparing scopes, talking to people who owned my top choices, etc. I finally settled on the Celestron Astromaster 140EQ. It is just fantastic to set up and quickly get viewing - an important factor for first-time hobbyists. Also, a must-buy is the Celestron Astromaster lens kit, which increases the quality and quantity of views immensely!
nxzero 2 hours ago 0 replies      
Infrared: Most amazing experience I ever had looking at the average clear night sky was wearing a pair of night vision goggles. Seeing all the extra stars (100x more) in the sky as I turned my head around allowed me to see the sky in a way I never would have imagined possible. Highly recommend it.
skybrian 7 hours ago 4 replies      
Why is astrophotography hard?
max_ 6 hours ago 1 reply      
I was expecting one that you could connect to a computer.

Can someone give me any recommendations?

zerr 7 hours ago 0 replies      
Yes, that's quite an elitist response... Just don't... listen to this advice. You only live once. Try what you want to try, enjoy what you want to enjoy. You can always stop if you aren't happy, and do other things which make you happy...
Show HN: Wsta, a CLI for working with WebSockets, written in Rust github.com
33 points by esphen  6 hours ago   2 comments top
nfriedly 28 minutes ago 1 reply      
Can I use this to send an initial JSON message and then pipe audio data from my mic?

(I wouldn't be surprised if this is possible with standard UNIX magic... But I just don't know how.)

Monument Valley in Numbers: Year 2 medium.com
225 points by Impossible  14 hours ago   83 comments top 12
fasteddie 12 hours ago 10 replies      
Monument Valley, one of the most polished and loved premium mobile games ever, has made $15m in its lifetime. Meanwhile, there are multiple F2P games with over $1b in revenue.

Now, $15m is fantastic, especially for a small studio. But high-production-value F2P games from bigger studios cost around $3-5m to make, not including marketing/UA, so it's pretty clear to see why studios aren't investing in premium titles at all, unless they are ports of existing content.

mevile 11 hours ago 7 replies      
I don't understand why Android continues to do so badly in comparison to iOS revenue numbers. As an Android user it's very disappointing. Bigger user base, but almost inconsequential when it comes to revenue. I'm surprised game developers support Android at all, I want the games that are on iOS for my phone but even I'm not sure it's worth it.
exolymph 11 hours ago 0 replies      
Monument Valley is one of the first mobile games I paid for, and I'm glad that it provided such an excellent experience -- unparalleled by anything I've tried since -- because even though other games I've bought since haven't been as beautiful or as fun as Monument Valley, it made me feel okay about buying games in general. If Monument Valley hadn't been good, I never would have ponied up for Smash Hit or any of my other favorites.
shimfish 2 hours ago 0 replies      
It seems that a buried nugget of info here is how seemingly pointless Amazon Underground is in terms of revenue. Admittedly MV seems a bad fit for it because of its low replayability.

Also, what's with iDreamSky? They basically give the game away for free to China for...what? Or is that part of the 6% "other"?

shahzeb 12 hours ago 3 replies      
Alright then. Time to crack open Xcode and get to work on my incredibly mediocre game, which will not even pull 0.000003% of these numbers. :-)
moomin 5 hours ago 0 replies      
Just to state the obvious: buy it. It's a great little game.
glasz 2 hours ago 0 replies      
the infographic tells me we'll get new levels once more, at most, and that's it. dead.
Terribledactyl 4 hours ago 0 replies      
I found it interesting that when only one of the stores had a special, the other store would also benefit.
superbatfish 12 hours ago 1 reply      
The link lists interesting stats about their revenue. Does anybody know if they've published estimates on their costs (team size, developer hours, etc)?
derFunk 5 hours ago 0 replies      
Very well deserved. This inventiveness has to be awarded. I'd be interested in even more numbers. What did they spend etc..
maxpert 6 hours ago 0 replies      
Shucks, even Amazon made more than Windows.
mathattack 7 hours ago 0 replies      
Awesome game. It's one of the few that I was ok with my kids expanding their iPad allotments to play.
Balde: a microframework to develop web applications in C rgm.io
179 points by ashitlerferad  13 hours ago   60 comments top 19
rafaelmartins 9 hours ago 4 replies      
Hi guys, I'm the author of this framework, and it seems that someone shared it here because I started this blog post series this weekend: https://rgm.io/post/balde-internals-part1-foundations/

This is why most of the documentation is marked as TODO, but the API docs are reasonably up-to-date.

If someone has interest on it or is willing to help, please let me know :)


A simple URL shortener example, using Redis: https://gist.github.com/rafaelmartins/9f8392a8909e62820ae0

"complete" app, with templates and stuff: https://github.com/rafaelmartins/bluster

jaromilrojo 1 hour ago 0 replies      
I used my own C code generator for CGIs for years, alongside Mongoose for other tasks. Nowadays I'm using https://kore.io and I'm VERY happy with it: great developer experience, well-written and understandable code, minimal in dependencies and no bugs so far. I'm using it quite heavily in a project; it was easy to add templating and other amenities to it. Highly recommended.
azov 5 hours ago 1 reply      
I would guess most people use C web frameworks for embeddability, not performance. The use case is to add a web-based UI to some C/C++ application or device, think configuration UI of your router or something of this sort.

My go-to solution for this is libwebsockets. Balde seems to have some nice features, but the SCGI + external webserver requirement makes it difficult to embed. I'd also question whether GLib is a reasonable dependency for an HTTP microframework.

qaq 10 hours ago 2 replies      
"With balde you can serve hundreds of requests per second"is it supposed to be "hundreds of thousands"?
ktRolster 10 hours ago 1 reply      
It has potential, but the documentation is full of TODO, so it's hard to say much about it. For example, I was trying to figure out how it handles unicode, and also how it handles memory management.

For comparison, ribs2 ( https://github.com/Adaptv/ribs2 ) is a framework in C that handles garbage collection for you. But it's not really a comparison because Balde doesn't have the documentation, unfortunately.

isuckatcoding 10 hours ago 2 replies      
This looks very interesting. This being C, I assume the performance is really good (although yes I know it's not always the case).

However, C is one of those "double-edged sword" kinds of languages. What kind of trade offs between performance and "safety" would one be making here? Are they worth it?

ninguem2 8 hours ago 0 replies      
Balde means bucket in Portuguese. I thought that was an accident, but looking at the logo on the website I guess it's intentional.
sdsk8 33 minutes ago 0 replies      
Rafael, congratulations on this project, I love seeing Brazilian projects here.
catmanjan 11 hours ago 0 replies      
Looks neat, although I think I'd find myself mistyping "blade" a lot.
eatonphil 9 hours ago 3 replies      
I've been looking into web frameworks in C recently. The offerings aren't awesome. It's hard to find a good BSD-like-licensed library.

Frontrunners included Kore and Crow (another microframework). I went with Kore for a while but it has a pretty terrible API for actually writing web applications. I couldn't take Crow seriously. I ended up going with fastcgi because it was the simplest to wrap using the Scheme FFI.

Others included Lwan (gpl) and Mongoose (gpl).

cordite 49 minutes ago 0 replies      
Seems more promising than g-wan
Eun 1 hour ago 0 replies      
A comparison in terms of speed between balde and CppCMS would be nice.
mirekrusin 27 minutes ago 0 replies      
Why not vala (or genie)?
kev009 7 hours ago 1 reply      
I really wouldn't want to expose GLib to this level but to each their own. kcgi is an interesting C API with security taken seriously: http://kristaps.bsd.lv/kcgi/
est 6 hours ago 1 reply      
Looks like it's using NULL-terminated strings. Is the framework secure? Is it compatible with (all kinds of) Unicode and data with NULL in it?
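The concern here is the classic C pitfall: the standard str* functions treat the first zero byte as the end of the data, so binary payloads need an explicit length carried alongside the pointer. A minimal illustration in plain C (this is generic, not balde's actual API; the `Buffer` type below is hypothetical):

```c
#include <stddef.h>
#include <string.h>

/* Request bodies are bytes, not C strings: a payload containing a 0x00
   byte is silently truncated by any str* function. */
size_t body_len_unsafe(const char *body) {
    return strlen(body);            /* stops at the first NUL byte */
}

/* Binary-safe handling carries an explicit length next to the pointer. */
typedef struct {
    const char *data;
    size_t len;
} Buffer;

size_t body_len_safe(const Buffer *b) {
    return b->len;                  /* embedded NULs are just data */
}
```

With the 7-byte payload "abc\0def", the strlen-based version reports 3 while the explicit-length version reports 7; whether a framework is "binary-safe" comes down to which style its internals use.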
moron4hire 31 minutes ago 0 replies      
"Why would anyone want this?" -- There are people who know C--and it's associate library ecosystem--really well and are very comfortable with it. They may have projects where it would not be a productive use of their time to have to learn a new language and a new set of libraries just to get a little code online.

I know, shock and horror, wailing, gnashing of teeth. /s

Is it just me, or has HN gotten really bitchy towards people's projects lately (and by "lately", I mean "the last year")? There are real people behind these projects. They did a thing and it might not be your cup of tea or anything in your experience bubble, but that doesn't make it stupid or misguided.

gbog 5 hours ago 0 replies      
I remember that one big lesson learned after the Heartbleed bug was that we should not develop in C anymore. To me this probably has many exceptions, but it should still be especially true for web dev.
_RPM 10 hours ago 1 reply      
So this has an embedded HTTP server then? Looks cool.
lillesvin 7 hours ago 0 replies      
Why not? If we only developed for what we already do, then we'd still be doing the same stuff we did years ago.

Besides, web apps in C aren't completely unheard of.

How Windows Everywhere finally happened arstechnica.com
150 points by nikbackm  13 hours ago   88 comments top 12
dingo_bat 8 hours ago 3 replies      
I don't like the Universal apps or whatever they're calling them this month. They are limited, slow, crashy and buggy. They regularly just disappear after a sleep and wake. I have been left questioning my sanity wondering if I closed Messenger.

Also, they are clunky and slow. For example, switch to a desktop with a universal app open and for the duration of the desktop slide animation, the app window will be completely blank. And then suddenly all the content will flash into sight after the slide ends.

The start menu itself is slow and will miss your first few keystrokes if you try to start a search too quickly. The notification bar is sometimes laggy to open, and so are the universal style notifications.

Sharing across universal apps sometimes works, other times it just opens the target app and sits there. I dunno if this is an app problem or universal platform problem.

I wonder if improving Windows Forms would have been a better strategy. Instead, Forms is now seen by many as a dead platform, and UWP is so limited/slow that nobody is developing for it. When SoundCloud is smoother and less buggy running in Chrome than as a standalone UWP app, what is even the point of having a software ecosystem, apart from a web browser?

Maarten88 10 hours ago 7 replies      
Operation successful, patient dead...

It's kind of tragic that much of this effort may never pay off: in the process of merging platforms and restructuring APIs (which started with Vista and is still going), Microsoft lost the connection to many of its developers, who went off to build Web, iOS and Android apps. WPF, Silverlight, WinRT, XAML, UWP: there were too many changes, rewrites and deprecations. Windows Phone, brilliant as it is, never reached mass adoption, and all the effort that went into unifying it with Windows has probably been wasted.

I think developers prefer a crappy but stable technologies with a proven market over uncertain and fast-moving technologies. It takes many years for a new platform to mature and settle down.

Microsoft really needs to stop introducing "upgrades" that require application rewrites in Windows for the next 3-5 years, and focus on getting the Windows Store going.

simonh 12 minutes ago 0 replies      
> Apple and Google will probably do something similar with their various operating systems, but Microsoft has managed it first.

I thought all of Apple's OS variants these days were based on the Darwin kernel and variously intersecting sets of the same Unix components and APIs. Every now and then I see someone complaining that a new Apple device contains 'an entirely new OS' like Apple TV OS, but isn't that just a variant of iOS? It seems to me it's precisely because Apple has been determinedly pursuing this strategy ever since the first iPhone OS that they've been so successful, compared to Microsoft's strategy at the time of having completely different code bases across product categories.

novaRom 4 minutes ago 0 replies      
It's a little ironic that the summer in which they achieve Windows everywhere coincides with my first summer of Windows nowhere.
sandworm101 10 hours ago 0 replies      
Windows FOR everywhere. No matter what Microsoft says, Windows is still a choice. We have all seen shops where Windows is just taken for granted, but every senior guy I know has some horror story about MS that makes them dream of a world without it.

A typical story: http://www.zdnet.com/article/microsoft-no-longer-allows-admi...

Animats 9 hours ago 0 replies      
All platforms now have enough hardware to run a bloated OS. Even the dinky machines. Thus, we now have Windows and Linux everywhere, endlessly being patched because some feature completely irrelevant to the application has a problem.
wornohaulus 12 hours ago 3 replies      
What are the substantial benefits of getting on board the UWP train? Will the platform code work faster... will deployment be faster? What about the knowledge devs have gathered for decades? Will the new way of doing things produce a more efficient and better-running application?
EdSharkey 6 hours ago 1 reply      
The article doesn't discuss it, but I thought the Kin was the forerunner to a lot of the UX of Windows Phone 7. Was the Kin also a forerunner to the Windows RT platform change, or was it just a re-skinned Windows CE?
mmastrac 11 hours ago 2 replies      
> Microsoft has still arguably achieved something that its competitors have not. Rumors that Google will somehow merge ChromeOS and Android have been circulating for some time, but it hasn't happened just yet

Uhhh.. ChromeOS and Android are both Linux. This is just as merged as running a stripped down Windows kernel.

bitmapbrother 6 hours ago 1 reply      
I must have missed something because last time I checked it was still on the desktop, a failed phone platform and a game console with an installed base of less than 30 million. Meanwhile Android is everywhere from phones, tablets, desktops, cars, TV's and numerous IoT devices.
sidawson 11 hours ago 3 replies      
First immediate thought: Can you get virus/malware scanners for Xbox?

Otherwise, it seems an immediate easy target for a new botnet.

ebbv 9 hours ago 3 replies      
> Microsoft promised developers that Windows would run anywhere. This summer, it finally will.

Bullshit. Versions of Windows that are all labeled Windows 10 are running in a lot of devices, and you can write applications that in theory can run on all of them (though practically writing one application that runs well on all of them may be somewhat harder if you want your application to be useful in any way.)

But that is NOT the same as "Windows runs everywhere" in a number of ways.

The first and most obvious way is that you can't just take your existing Windows application and put it everywhere, which is what that phrase seems to imply. Nor can you buy one of these devices and immediately have access to all the library of millions of Windows applications that exist. No, if you're a developer you have to write a new universal app and embrace the Windows 10 App Store whether you want to or not.

And if you're buying a device, you better check twice that the software you want is available for the platform you're buying lest you find out too late your device has special requirements.

Secondly, we've had versions of Windows running on lots of devices for a long time. The only thing that's different now is the branding. They're finally all branded "Windows 10" even though in many cases (like Phone and Xbox One) the user and developer experiences are totally different. Is that helpful? I don't really think so.

This article might as well have been written by Microsoft's PR department. Totally not grounded in reality at all.

Small Modular Nuclear Reactors Overcome Existing Barriers to Nuclear scientificamerican.com
40 points by Osiris30  9 hours ago   21 comments top 4
shaqbert 3 hours ago 0 replies      
This is not really about micro reactors, but about making the monolithic power plant more like a modular, microservice-style thing, so that you can potentially prevent decommissioning after 40 to 60 years and keep it operating much longer.

The downsides of nuclear power still remain though:

- still high upfront costs (maybe lower than monolithic plants, but still freaking high)
- poisonous and radioactive fission products that are hard to deal with
- perverse incentives between economic efficiency and safety (did I mention that a couple of those babies [1] would have prevented the Fukushima disaster?)

[1]: http://us.areva.com/EN/home-1495/new-challenges-proven-solut...

Animats 5 hours ago 1 reply      
This is strange. There is no "containment vessel". That's the reactor pressure vessel. Usually, there's an outer containment vessel as well. Chernobyl didn't have one at all, which is why that was such a disaster. Fukushima's reactors had ones that were too small and didn't have enough expansion volume to diffuse the pressure after loss of coolant. There's a design assumption here that there will never be a meltdown.
tremon 2 hours ago 3 replies      
The article doesn't seem to mention what type of reactor this is. Are these still gen-III boiling water reactors, or is this a gen-IV type reactor?
unique_parrot2 3 hours ago 2 replies      
That's great. You know how expensive it is to get rid of radioactive material. If this comes true, the big power plants can just dump the waste into the river and say "It wasn't me".

I would not want to live next to such a thing.

The Dark Art of Mastering Music pitchfork.com
33 points by jonbaer  2 hours ago   24 comments top 7
neilh23 2 minutes ago 0 replies      
It's hard getting something that sounds consistently good in all listening environments - I've seen cases where music is offered in digital format either as 'play anywhere' loud .mp3s or 24-bit high dynamic-range .flacs for DJs, but most commercial music is mastered for the poorest environments (in the car, over the radio).

Getting a good master is very important - I'm reminded of a story I heard about an album that was mastered by the artist, and pressed to CD with a high-pitched whine over the top which the artist hadn't been able to hear - like Aphex Twin's Ventolin, but unintentional.

bvm 20 minutes ago 1 reply      
> Traditionally, the marketplace has been radio, where a well-mastered song hits that sweet spot where you feel immersed in the music but not battered by it. If your song is poorly mastered, the logic goes, people won't want to buy your album.

Part of the problem here is that many radio stations will run incredibly aggressive limiters/maximisers across their output bus anyway, for the exact same reasons given in the following paragraph. Your average pop track is being compressed to hell and back at three stages: mixing (gotta make that synth subgroup 'pop'), mastering (gotta be louder than my rival pop-group) and transmission (gotta be louder than my rival station).
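The "aggressive limiter across the output bus" can be pictured as a gain stage that clamps instantly when a sample would exceed a threshold, then recovers slowly toward unity. A heavily simplified single-sample sketch (real broadcast limiters add lookahead, multiband processing, and carefully tuned attack/release curves; the numbers here are arbitrary):

```c
/* One processing step of a naive peak limiter. `gain` persists between
   calls; `release` controls how quickly gain recovers toward unity. */
float limit_sample(float in, float threshold, float *gain, float release) {
    float mag = in < 0.0f ? -in : in;
    if (mag * *gain > threshold)
        *gain = threshold / mag;            /* instant attack: clamp this sample */
    else
        *gain += (1.0f - *gain) * release;  /* slow release toward unity gain */
    return in * *gain;
}
```

Pushing the whole mix into a stage like this raises average loudness at the cost of dynamic range, which is exactly the trade the loudness war makes three times over.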

21 39 minutes ago 1 reply      
> Professional audio engineers operate out of carefully calibrated rooms designed to eradicate odd resonances and echoes, and their speakers cost more than your car.

That is not exactly true. I've seen many pictures from studios and most engineers use professional speakers (called studio monitors) in the $5K range.

However you can spend a lot more on acoustic treatments, they make a massive difference.

I like to think that color-grading, another mysterious dark art (to me at least), is the visual equivalent of mastering.

cr1895 1 hour ago 0 replies      
Relevant: the Dynamic Range database


radiowave 59 minutes ago 1 reply      
Mastering is a subject about which a great deal of mumbo jumbo is talked, but a very important point that this article does put across well is the necessity of making sure the music sounds good across a wide variety of playback environments. Unfortunately, this kind of "optimizing for all cases" comes at the expense of how good the music can sound in any one environment.

(For example, a lot of people will have heard recordings that produce incredibly detailed spatial perception, but only when listened to on headphones - and which sound pretty poor on loudspeakers.)

We got into this situation because of the economies of mass produced physical media, but we largely don't have those constraints any more. Since the cost of digital distribution is basically a rounding error, we could be producing different releases optimized for different settings and letting the listener choose which is the most appropriate for them.

dunk010 40 minutes ago 0 replies      
There's a good article from 2001 which explains this in depth: http://www.cdmasteringservices.com/dynamicrange.htm
amelius 1 hour ago 5 replies      
I personally think we should be able to buy music in its pre-mastered form. Yes, that will be a lot of data, but nowadays that is not really an issue anymore. The experience will be so much better than our current option, which is basically only the EQ settings on our amps.
How do you stop a randomized game from randomly being boring sometimes? arstechnica.com
63 points by yread  10 hours ago   47 comments top 13
fitzwatermellow 9 minutes ago 0 replies      
Fantastic question! The quick and dirty solution is simply to choose a distribution that isn't uniformly random!

Instead of starting with your random number generator, build up the discrete probability distribution function first. The histogram can have any shape you want.

So, for example, in Tetris, rather than choosing a uniformly random next piece, we give a 10% probability to a straight one, 20% to right-sided L, etc. Now we begin to see patterns that can make gameplay challenging or interesting. If a shape has not appeared in a while purely out of chance, alter the distribution. Keeping track of what cards have been played and what moves have been made forms the basis of a feedback loop that constantly evolves.
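That feedback loop can be sketched like this (the weight table and boost amount are invented for illustration): draw from an explicit weight table instead of a uniform `rand() % 7`, then nudge the weights after every draw so rarely-seen pieces become more likely.

```c
/* Pick index i with probability weights[i]/total, given a roll in [0, total). */
int pick_index(const int *weights, int n, int roll) {
    int acc = 0;
    for (int i = 0; i < n; i++) {
        acc += weights[i];
        if (roll < acc)
            return i;
    }
    return n - 1;                    /* out-of-range roll clamps to the last index */
}

/* Feedback: every piece that failed to appear gets more likely next time;
   the piece that just appeared drops back to its base weight. */
void update_weights(int *weights, int n, int picked, int base, int boost) {
    for (int i = 0; i < n; i++)
        weights[i] += boost;
    weights[picked] = base;
}
```

In play you would roll uniformly in [0, total) each turn; since the total drifts as weights change, recompute it after every `update_weights` call.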

Olscore 3 hours ago 0 replies      
I discovered a quote from Sid Meier the other day; he says, "A game is a series of interesting choices."

From that perspective, the definition of a game and what constitutes gaming changes dramatically. It undercuts the question here. Preventing a game from being randomly boring is an odd question. A better question might be, how can we keep the player's decision-making interesting, regardless of what is generated? The article says it directly, but misses the golden quote which I think is essential. That quote offers up some really unique ways to define a game too. Very cool.

pjc50 3 hours ago 3 replies      
XCOM2 has a slightly controversial "randomness compensator": repeatedly missing increases your chance to hit, and being hit repeatedly by the aliens lowers their hit chance.

Various games (L4D is the first prominent example; there may be others before it) include "AI directors" that try to keep the player involved by balancing exciting events and periods of tension. This makes it a lot harder to have a "boring" game, or one where you get "unfairly" hammered by the randomness.

In fiction I'd trace this to Asimov's Foundation series, which is premised on "psychohistory" being a sort of fully deterministic fusion of economics and psychology enabling the development of future society to be predicted. Until a few books into the series where the "mule" is introduced, who is an individual powerful and unpredictable enough to throw off the determinism of historical inevitability.
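The compensator described above can be sketched as a hit-chance bonus that grows with every consecutive miss and resets on a hit (XCOM 2's actual tuning is not public as far as I know; the 10-points-per-miss figure in the test below is invented):

```c
typedef struct {
    int consecutive_misses;
} StreakState;

/* Base chance plus a bonus per consecutive miss, capped at 100. */
int effective_chance(int base_chance, const StreakState *s, int bonus_per_miss) {
    int c = base_chance + s->consecutive_misses * bonus_per_miss;
    return c > 100 ? 100 : c;
}

/* roll is uniform in [0, 100); returns 1 on a hit and updates the streak. */
int resolve_shot(int base_chance, StreakState *s, int bonus_per_miss, int roll) {
    if (roll < effective_chance(base_chance, s, bonus_per_miss)) {
        s->consecutive_misses = 0;   /* hit: streak broken */
        return 1;
    }
    s->consecutive_misses++;         /* miss: next shot gets easier */
    return 0;
}
```

The same trick with the signs flipped lowers enemy accuracy after the player has been hit repeatedly, which is the other half of the compensator.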

Agentlien 4 hours ago 2 replies      
This really resonates with me, especially since he mentioned Master of Orion. I grew up with Master of Orion 2 and I can't even make an estimate of how many sessions or hours I played. Another game which I still play actively is Magic: The Gathering. I mention this, because I think the contrast between these two games highlights the subject at hand.

I have experienced more incredible stories unfolding in both MoO2 and MtG than I could possibly remember. Both are games where careful strategy and tactics combined with randomness can lead to some unbelievably epic moments. Here's the thing, though: in MoO2, despite a lot of random events, I can't remember a single time I played a session which wasn't exciting or didn't feel fair. In Magic, this happens routinely. Just Google "mana screw" or "mana flood" for more results than you can read, complaining about randomness leading to boring matches where you, as someone put it, "don't get to play Magic"[1].

I still play Magic on a daily basis, and I still love the game, but I've spent a lot of time damning the frequency of these boring matches and wondering if there was a simple strategy or rule change which would fix this, while still leaving the soul of the game intact.

[1]: https://www.reddit.com/r/magicTCG/comments/4c2f65/what_funda...

1123581321 8 hours ago 4 replies      
The context is that the author wrote a negative review of a space strategy game because he had poor experiences with it. This article seems to be a way of saying that he was wrong to say the game is objectively bad without denying his own bad experiences are an objective problem.

The solution is to tune the game and add additional mid-game content (the part of the game with the most room for dynamism), which the author points out is on its way.

One of the more interesting comments said that the game would be well served if its developers collected play through data and started prioritizing the game seeds that tended to cause the most interesting games, arguing that there would be enough seeds to hide the lack of randomness.

flashman 6 hours ago 1 reply      
From my experience so far, Stellaris will benefit greatly from performance and feature patches (I hear lots of complaints about grinding late-game framerates even in modestly-populated galaxies).

It does seem to suffer from a lack of gameplay options. I think a useful comparison is Civilization V at launch versus after its 'Gods and Kings' and 'Brave New World' expansions, both of which added many new gameplay avenues.

The game seems to be more organic than strategic, lacking crossroads where your decision is based on your priorities. The major exception is the research system, where the 'choose one of three' feature means selecting one option closes off the other two possibilities for an indeterminate period. But besides that, in my experience the game is about expanding like a slime mold and gradually wearing down your opposition (as conquest is the only viable victory type).

sanj 1 hour ago 0 replies      
Related: I wrote a mahjong game many years back (the tile matching version; not real mahjong).

It turns out you can have unwinnable boards if you build them purely randomly.

So I ended up having the app play the board in reverse to make sure it was winnable!

It still managed to hit stalemates while playing backwards, but they were very rare. I just restarted instead of trying to avoid that situation algorithmically.
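The reverse-play idea above can be sketched in a few lines: simulate un-playing the game from the empty (won) board, placing one matching pair at a time onto slots that are currently exposed. The `layout_free` callback is a hypothetical stand-in for the layout-specific exposure rules (stacking, open sides), not code from the original app:

```python
import random

def generate_board(layout_free, positions, tiles):
    """Fill `positions` with `tiles` so the board is winnable, by
    'playing in reverse': repeatedly place a matching pair onto two
    slots that are exposed given the tiles placed so far.

    `layout_free(pos, filled)` is an assumed, layout-specific predicate
    saying whether `pos` is exposed when `filled` positions are occupied.
    """
    assert len(tiles) % 2 == 0
    pairs = [tiles[i:i + 2] for i in range(0, len(tiles), 2)]
    random.shuffle(pairs)
    board, filled = {}, set()
    for a, b in pairs:
        free = [p for p in positions
                if p not in filled and layout_free(p, filled)]
        if len(free) < 2:
            return None  # stalemate while building: caller just retries
        p, q = random.sample(free, 2)
        board[p], board[q] = a, b
        filled.update((p, q))
    return board
```

With a trivial "everything is always exposed" layout this always succeeds; a real turtle layout would occasionally return `None`, matching the rare stalemates the comment mentions.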

jetengine 1 hour ago 0 replies      
There is a theory that multiplayer is more interesting than singleplayer because you are playing against unpredictable characters.

Someone toyed with this idea by letting the player of the game play against animals: https://www.youtube.com/watch?v=6DXiuzAQztU

navbaker 6 hours ago 2 replies      
Having had "play a 4x space strategy" on my gaming bucket list for quite some time, how does this game compare to others that I've had my eye on like Galactic Civ 3 or Sins of a Solar Empire?
Animats 5 hours ago 0 replies      
If the universe is a simulation, maybe this is why life sucks: User boredom if it doesn't.

"Why is God punishing us?"

raverbashing 6 hours ago 2 replies      
Well, sometimes you can't

Freecell is a good example, some games are trivially easy or too hard (not sure if there are impossible games)

agentultra 2 hours ago 0 replies      
Constraint propagation solvers seem to work well.
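As a rough illustration of what such a solver does (a generic sketch, not tied to any particular game or library): shrink candidate sets by propagating consequences of forced choices until a fixed point is reached.

```python
def propagate(domains, neighbors):
    """Minimal constraint propagation: whenever a variable's domain
    shrinks to a single value, delete that value from every neighbor's
    domain, and repeat until nothing changes.
    Returns False if some domain empties out (a contradiction)."""
    queue = [v for v, d in domains.items() if len(d) == 1]
    while queue:
        v = queue.pop()
        val = next(iter(domains[v]))
        for n in neighbors[v]:
            if val in domains[n]:
                domains[n] = domains[n] - {val}
                if not domains[n]:
                    return False  # unsolvable as posed
                if len(domains[n]) == 1:
                    queue.append(n)
    return True

# Three mutually-constrained variables ("all different"):
doms = {'a': {1}, 'b': {1, 2}, 'c': {1, 2, 3}}
nb = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
propagate(doms, nb)  # doms collapses to a={1}, b={2}, c={3}
```

A board generator would run this (plus search when propagation stalls) to confirm a candidate puzzle has a solution before shipping it.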
babuskov 9 hours ago 1 reply      
I was hoping for more, but it only talks about strategic video games.
99-problems in Java 8, Scala, and Haskell github.com
26 points by shekhargulati  2 hours ago   2 comments top 2
sean_the_geek 13 minutes ago 0 replies      
Warning: shameless promotion. Here's my 'R' take on 99 problems; needs working on though! https://github.com/saysmymind/99-Problems-R
Electric cars are no longer held back by crappy, expensive batteries slate.com
89 points by jseliger  14 hours ago   134 comments top 11
contingencies 12 minutes ago 0 replies      
The reality here in second-tier Chinese cities is that most people already get around, if not always then at least often, by electric vehicle.

They've either never had cars, have bought cars but have problems parking them or getting anywhere due to traffic (many cities were not designed for cars, and now have millions of them clogging every available space on and off the road network), or just jump on the back of black-market 'taxi' e-bikes to zip about. E-bikes already carry a large proportion of people in Chinese urban environments. There is no way that e-bikes, subways and buses combined are not the dominant people-movers in the country, today. While I did see some e-bikes in Japan, they were nowhere near as numerous. China is leading the way.

Typical Chinese e-bike cost new is USD$500 or less. Battery replacement (good for 1 year or so) is currently about USD$150 or less. They do get stolen a lot, unfortunately.

sunstone 6 hours ago 4 replies      
Interesting article, but the timelines seem way too conservative to me.

For example, if batteries continue to improve over the next four years as they have over the past four, then according to the article, in 2020 batteries will weigh 1,500 lbs, travel 1,142 miles on a charge, and cost about $10k. (Or a 500-mile battery for $5k.) And the cost of electricity will be, at today's rates, $1 per "e-gallon".

Who wouldn't want that? Project this to 2040 and the numbers become just ridiculous but the article suggests just 35% market share by then. I don't think so. By 2025 the market will almost certainly be strongly in the favor of electrics.

pcr0 11 hours ago 6 replies      
I'm all for electric vehicles, but there are a few things that have bothered me about their mass adoption.

Right now, electrics comprise a tiny part of the automobile market share. Yet Tesla is already running into supply issues, particularly with lithium as there are only 3 companies in the world that do industrial-scale lithium mining, that too in a handful of mines around the world.

Secondly, I suppose we're currently in the honeymoon period for electrics. But what will happen 5-8 years down the line, when all the early mainstream adopters of electrics will have to replace their batteries? Is it feasible to recycle all those batteries?

calgoo 3 hours ago 5 replies      
What about making the battery packs "hot swappable"? You pull into the station, the current pack is unloaded and a new one is loaded. It could also have interesting effects on the battery upgrade you need to do every X years. The packs could be owned by the gas station chain, so you can stop at any of their establishments and get a refilled one.

These battery pack charging stations could then also be used as power offloading for local power generation.

KKKKkkkk1 12 hours ago 2 replies      
I've been shopping for a used car in the Bay Area, and it looks like used electrics are even easier to buy (and harder to sell) than former rentals. That's not a good signal.
dghughes 10 hours ago 3 replies      
Forget the batteries; look at the cost of replacing an aluminum quarter panel, which is welded on. For the Model S it's supposedly $15,000+ just for the part, and then it has to be fixed at a Tesla-approved body shop. That's nuts!

I did the math, and for comparison, on a $3 million Enzo Ferrari a hood (couldn't find a quarter panel), unpainted and used, is about $10,000. http://www.ferrparts.com/en/usd/diagrams/front-hood-and-open...

I like electric, but I can't own a car that will cost so much to repair. A $15,000 bill, not including labor, on a $70,000 vehicle is over 20% of the total vehicle cost; that's not worth it.

ktRolster 12 hours ago 0 replies      
According to the article, we are now held back by expensive, non-crappy batteries.

But the price is dropping, and eventually will be affordable.

omellet 12 hours ago 4 replies      
Now they'll just be held back by lack of a widespread charging infrastructure.
intrasight 9 hours ago 0 replies      
And if we go the Hydrogen route, then batteries definitely won't be what holds it back. But certainly other factors will.
vonklaus 12 hours ago 8 replies      
obviously Tesla is the front runner, spearheading the industry by virtually every metric & subsector (except maybe price). Look, I love Tesla and have been bullish on the company for a pretty long time, so I am biased, but why can't other car companies have quality cars and aesthetics?

* Fisker Karma: The car looked super sleek, was super sexy and from an engineering standpoint it was a heap of shit.

* Chevy Volt: Was an ICE car that had the worst of both worlds. Not particularly attractive.

* Rimac Concept One: Ok this car was fucking brilliant but it was like $1M USD.

* Prius: Pretty decent as far as hybrids go. Aesthetics are lacking what many might consider: the ability to not look like a cross between a golf cart and a mobility scooter.

* Chevy Bolt: Seems to have pretty amazing specs. Aesthetics are sub-par for sure.

Yes, Tesla's thing was making a compelling electric car, however other manufacturers seem to be doing alright in actually making the car. They just need to bend the metal in a way that doesn't look fucking horrible and Tesla will have a bit harder of a time of it.


Here is a quick album I made. Nearly every car looks the same:


edit 2:

Here[t] you can see Tesla's offerings. The roadster is omitted. However, of the cars they made (the Roadster and the Models S, 3, X, Y*), you would be hard pressed to confuse them with any of the cars in the above album, while those in that album are very close to one another aesthetically.


* they haven't made y yet....

mtgx 11 hours ago 2 replies      
> There is no Moore's law for battery storage: the power of batteries doesn't magically double every two years.

> Put another way, the battery pack in the 2017 Volt will cost less than 10 percent more than the one in the 2012 Volt. But it will be more than four times more powerful.

Intel wishes its chips still quadrupled in performance every 5 years.

> We're moving toward a world where more and more cars will either run primarily on gasoline but with an assist from powerful batteries or primarily on powerful batteries but with an assist from gasoline.

He's saying that in an article talking about how cheap batteries have become and how in a few years they'll be $100/kWh? Within 10 years all EVs will have 100-150 kWh batteries. That's 400-600 mile ranges.

The only place where you'll see "hybrids" is some niches where batteries alone couldn't possibly make sense, but I have a hard time coming up with ideas for vehicles smaller than, say, a train or an airplane where batteries alone wouldn't be enough in 15 years. Even buses should do just fine with a 200 kWh battery for a 16-18h work day.

> Earlier this week, Bloomberg New Energy Finance released a report arguing that by 2040, 35 percent of annual vehicles sales will be electric.

It will be at least twice as much by 2030. By 2025 few will want a car that isn't an EV, and the car makers will have no choice but to start competing in EVs as their primary cars.

Exploring Ultramarine: Notes from a two-day workshop on ultramarine ox.ac.uk
43 points by prismatic  11 hours ago   3 comments top 2
BetaCygni 4 hours ago 1 reply      
noir-york 43 minutes ago 0 replies      
Joy to read - thanks for posting!
Double-ended vector is it useful? larshagencpp.github.io
49 points by ingve  9 hours ago   38 comments top 11
twotwotwo 7 hours ago 0 replies      
Qt's QList is like what this describes: it tracks where the first and last used items are in the allocated region, so both prepends and appends are amortized constant time.

Peeking at the code, Qt's even using the same general approach described in the post ("When there is still a lot of room left in the buffer, we should move elements toward the middle, and not reallocate straight away."): it moves items into the middle if the backing array is under 1/3 full.

When QList first came out, I think the team said they got there by trying different implementations and measuring what worked best on average in the real app code they had. That at least suggests that to the author's question "is it useful?", somebody thought yes.

(The code I peeked at is at https://github.com/radekp/qt/blob/master/src/corelib/tools/q..., header at https://github.com/radekp/qt/blob/master/src/corelib/tools/q..., and docs (possibly for a newer version) at http://doc.qt.io/qt-5/qlist.html#prepend.)
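The begin/end-offset scheme described above can be sketched in a few lines of Python. This is a toy model of the idea, not Qt's actual implementation; the "recenter when under 1/3 full" rule comes from the comment:

```python
class Devector:
    """Toy sketch of the QList/devector idea: one flat buffer plus
    begin/end offsets, so both prepend and append are amortized O(1),
    and random access stays O(1)."""

    def __init__(self, capacity=8):
        self.buf = [None] * capacity
        self.begin = self.end = capacity // 2  # empty: begin == end

    def __len__(self):
        return self.end - self.begin

    def _recenter(self, capacity):
        # Move the live items into the middle of a (possibly new) buffer.
        items = self.buf[self.begin:self.end]
        self.buf = [None] * capacity
        self.begin = (capacity - len(items)) // 2
        self.end = self.begin + len(items)
        self.buf[self.begin:self.end] = items

    def _make_room(self):
        # Under 1/3 full: slide items to the middle; otherwise grow 2x.
        n, cap = len(self), len(self.buf)
        self._recenter(cap if n * 3 < cap else cap * 2)

    def push_front(self, x):
        if self.begin == 0:
            self._make_room()
        self.begin -= 1
        self.buf[self.begin] = x

    def push_back(self, x):
        if self.end == len(self.buf):
            self._make_room()
        self.buf[self.end] = x
        self.end += 1

    def __getitem__(self, i):  # O(1) random access, unlike a list of blocks
        return self.buf[self.begin + i]
```

The interesting design point is `_make_room`: hitting a wall doesn't always mean reallocating, because a lopsided but mostly-empty buffer can simply be recentered in place.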

agf 7 hours ago 2 replies      
This is an interesting article, and a "devector" is something I've never heard of or considered before, so I'll leave critique of the data structure itself to others. But I have some constructive criticism of how the data is presented. Since the graphs / performance comparisons are such a large part of the post, I think it's relevant.

First of all, and most simply, when presenting multiple charts please use the same colors for the same lines on each chart. Here, the charts without the "vector" line use a different color scheme, making it harder to compare them at a glance.

Second, absolute timings don't matter here, only relative timings do, so showing absolute time on the Y-axis doesn't really make sense. Instead, I would use percentage from a baseline of one of the data structures. I think choosing the "deque" line for that purpose would make the most sense, given that it's the standard data structure for double-ended access.

Here's a rough version of what that looks like for the first chart -- https://goo.gl/bg5mrN -- Hopefully it makes the relative speeds of the different solutions more clear (I only put in the data for "deque" and "devector").
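The normalization being suggested is a one-liner to compute; here is a small sketch with hypothetical timing numbers (the names and values are made up for illustration):

```python
def relative(times, baseline="deque"):
    """Rescale absolute timing series so the baseline reads as 100%."""
    base = times[baseline]
    return {name: [100.0 * t / b for t, b in zip(series, base)]
            for name, series in times.items()}

# Hypothetical milliseconds per benchmark size:
times = {"deque":    [10.0, 20.0, 40.0],
         "devector": [5.0, 12.0, 30.0]}
rel = relative(times)
# The baseline becomes a flat 100% line; every other series reads
# directly as "percent of deque's time".
```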

nkurz 7 hours ago 4 replies      
I feel like it would be possible to leave the heavy lifting to the underlying page tables. Presuming you are running a 64-bit system, you have an enormous, practically unused address space. Until you access the allocated memory, it doesn't actually occupy any physical RAM.

The virtual address gets associated with physical RAM in 4KB pages. So if you just allocate a region twice as large as you will ever need, and start writing in the middle, you will never be wasting more than 8KB of real RAM. No copying, no reallocation; just keep track somewhere of the head and tail.

Other than the momentary panic of those who notice how much memory is being "used", what are the downsides of this approach? The 8KB is unlikely to ever be a problem. I haven't seen many people taking this approach to memory management, and other than stigma, I'm not sure why. Is it that everyone still wants to support 32-bit systems?
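A minimal sketch of this trick, assuming a 64-bit OS that backs anonymous mappings lazily (Linux-style overcommit; behavior differs elsewhere):

```python
import mmap

SIZE = 1 << 30            # 1 GiB of address space; ~no RAM until touched
buf = mmap.mmap(-1, SIZE)  # anonymous mapping, zero-filled, lazily backed

head = tail = SIZE // 2    # start in the middle, grow in either direction

def push_back(byte):
    global tail
    buf[tail] = byte
    tail += 1

def push_front(byte):
    global head
    head -= 1
    buf[head] = byte

push_back(ord('b')); push_back(ord('c')); push_front(ord('a'))
assert buf[head:tail] == b'abc'  # contiguous view; nothing was ever copied
```

The "scary" 1 GiB number is only reserved address space; physical pages get faulted in one 4 KB page at a time as `head` and `tail` move.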

chmike 9 minutes ago 0 replies      
This is definitely useful because it gives us a double ended queue with efficient random access. I can't think of a use case right now, but I'm sure there are some.
chillacy 7 hours ago 1 reply      
It might be of interest to mention that std::deque is implemented as a linked list of arrays: http://cpp-tip-of-the-day.blogspot.com/2013/11/how-is-stddeq...
nightcracker 6 hours ago 0 replies      
I already built this concept a year ago: https://github.com/orlp/devector.

I never finished the implementation though. Exception safe programming of standard containers is incredibly annoying (feel free to inspect the source code to see what I mean).

ajuc 6 hours ago 0 replies      
Interesting. I needed a similar data structure once and ended up using a linked list (but it had exactly $VERTICAL_RESOLUTION big elements, so the memory cost for pointers was negligible, and the most common operation was moving all elements left (up) or right (down), which a linked list is hard to beat at).
mrpopo 4 hours ago 0 replies      
Interesting implementation. Given that it seems to perform more moves than a regular vector, I'm wondering how much more efficient this would be if it could just memmove the whole block around (AFAIK, the spec doesn't allow it because of constructors/destructors for complex structures).
Someone 5 hours ago 1 reply      
foota 7 hours ago 0 replies      
Wow, that's a really neat data structure!
thomasahle 5 hours ago 2 replies      
Relationship between Education and Mental Health: New Evidence from Twin Study oxfordjournals.org
37 points by randomname2  9 hours ago   11 comments top 5
bolster 47 minutes ago 1 reply      
TL;DR: "If there are social factors that systematically drive differences in mental health within pairs of MZ twins, our findings suggest that education is not among them."
atdt 3 hours ago 0 replies      
Full text on SciHub: http://sci-hub.cc/10.1093/sf/sow035 PDF
redxblood 8 hours ago 1 reply      
I don't see the surprise however.
jlg23 9 hours ago 4 replies      
The title is misleading, though the original is not descriptive. Suggestion: "Higher education does not improve mental health, twin-study proves"
kome 6 hours ago 0 replies      
sociology was right.
IBM is not doing "cognitive computing" with Watson rogerschank.com
368 points by Jerry2  11 hours ago   120 comments top 37
ACow_Adonis 10 hours ago 7 replies      
I almost want to up vote the article on principle.

I try to go to various "machine learning" and "AI" meetups around my city.

The most frustrating, but relevant, lesson I've learnt is to just stay away from everything IBM/Watson.

I can summarise every single bloody presentation they give because it's the following:

"Now, when we say cognitive computing, we don't mean any of that sci-fi and AI stuff, now here's a marketer and marketing materials that will explicitly imply that we're talking about that sci-fi and AI stuff for the next 59 minutes. There will be no technical information."

pesenti 9 hours ago 4 replies      
If you want to figure out what Watson can do and bypass all the marketing hype, you can just try all the services available at http://ibm.com/watsondevelopercloud.

I won't argue that the PR goes often too far, and that's a big debate we have internally (I work for Watson). But it's a pity that most of the negative opinions expressed here come from people who haven't even bothered to try any of the services we put out there or read any of the scientific papers that have been published by our team.

nl 9 hours ago 2 replies      

I'm no fan of Watson-the-marketing-term, but this sounds like the bitter remarks of a symbolic AI defender who is so sure that their way of doing AI is the only way that anything else is fraud.

Watson-the-Jeopardy-winner did "cognition" (which he implies means following chains of logical reasoning) as well as any other system that has been built.

See for example "Structured data and inference in DeepQA"[1] or "Fact-based question decomposition in DeepQA"[2].

It's true that the Watson image analysis services don't use this. I'm guessing that is because they don't actually work very well in that domain.

[1] http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F...

[2] http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6177...

rm999 10 hours ago 3 replies      
This article mirrors some huge frustrations I've had in recent years as a long-time lover and pusher of machine learning. I spoke to a Watson booth employee for a few minutes at a machine learning conference a couple years ago, and almost right away had similar feelings. I don't like the term 'fraud' here though, 'insanely oversold' seems more appropriate. I looked more into Watson, and realized it's really just a large number of traditional (and not very innovative) machine learning algorithms wrapped into a platform with a huge marketing budget.

>AI winter is coming soon.

Perhaps, if so Watson is certainly evidence of this. It frustrates me to no end that machine learning has so much potential but is often lost in a sea of noise and buzz words (as much as I love deep learning, I'm almost tempted to lump that in here too given its outsized media coverage). Machine learning is in its infancy of impact, but the overselling by mediocre enterprise companies and ignorant press may shoot its credibility for years to come.

mark_l_watson 10 hours ago 3 replies      
I spent a lot of time in the 1980s experimenting with Roger Schank's and Chris Riesbeck's Conceptual Dependency Theory, a theory that is not much thought of anymore, but at the time I thought it was a good notation for encoding knowledge, like case-based reasoning.

Having helped a friend use Watson a year ago, I sort-of agree with Schank's opinion in this article. IBM Watson is sound technology, but I think that it is hyped in the wrong direction. And over hyped. This seems like a case of business driven rather than science driven descriptions of IBM Watson. Kudos to the development team, but perhaps not to the marketers.

Really off topic, but as much as I love the advances in many layer neural networks, I am sorry to not also see tons of resources aimed at what we used to call 'symbolic AI.'

headShrinker 21 minutes ago 0 replies      
I get his cynicism from an idealist engineer's perspective. It's a problem for anyone whose applicable knowledge meets a marketing/branding agency. Watson was a new tool that could play Jeopardy, and IBM needed a way to sell the heck out of it. Branding Watson as AI is the act of an increasingly desperate corporation.

While true AI is a decade or two off, with each AI winter, an increasing number of human jobs are displaced. This next wave promises to be devastating to human productivity and a boon for machine productivity. The effects are real even if the intelligence isn't. When true AI is birthed, it won't need to be marketed. All that will be left are a few trillionaires, and food lines for the rest of us.

About the future: "Wealth will be based on how many robots you own and control."

kastnerkyle 9 hours ago 0 replies      
Thinking of "Watson" more as a catchall term for machine learning research at IBM is more useful than thinking of it as a unified platform (as the marketers try to sell it). This includes research efforts in speech recognition, NLP, and reinforcement learning as well as fun stuff like the "Watson chef". The underlying technology is almost completely different, but it still falls under the Watson umbrella.

In general, every company (startups and big co. alike) seems to be hyping their "AI" capabilities out the wazoo, but three years ago saying those two letters together was a death sentence. I don't know if this is a good or bad thing (hype is bad, but general interest is useful for the visibility of the field) but it is definitely a sea change compared to the last 5-10 years.

I am extremely skeptical of most claims these days, and am a bit worried about AI Winter 2.0 due to hype around largely mundane technologies. There are exciting things happening in the space, but these things are rarely hyped to the extent the more mundane results with corporate backing are.

todd8 10 hours ago 0 replies      
Patrick Winston taught my first AI course around 1974. Things have come a long way, but back then I was flabbergasted to see a program perform calculus integration. It seemed to be a task that took a certain amount of insight and problem solving. Professor Winston then proceeded to break down the program for us, and to my surprise it was easy to understand and wasn't very complex.

I'll always remember his comments at that point that AI is mostly simple algorithms working against some database of knowledge.

I'm not sure I would still make that claim today, kernel based support vector machines aren't all that straight forward and many of the cutting edge machine learning and AI programs are far from easy to understand. Still, there is a feeling of disappointment when the curtain is pulled aside and the great Oz is revealed to be nothing that magical.

greenyoda 11 hours ago 1 reply      
For those who are unfamiliar with the author of this article: Roger Schank is one of the early pioneers of AI research:


iamleppert 9 hours ago 5 replies      
I'm wondering why, as a company, IBM doesn't seem to be doing well in their core business, yet somehow they want us to believe they are at the forefront of the latest darling new technology in ML research and cognitive computing.

If they are unable to attract talent and innovate in their core business, how are they supposedly pursuing sophisticated AI? And the biggest question is: why?

What other products or innovations have come out of IBM Research? What is their overall reputation, and why should we believe them? Why don't they release Watson to the world, like Microsoft did with their twitter bot?

If I were a recent grad or even mid level in my career and wanted to work on as interesting projects as I could, I wouldn't be going to IBM. My first priority would be access to interesting and varied datasets, such as what can be obtained at Facebook, Google, Amazon, or another such company. A close second would be any of the players in the hardware ML industry such as Nvidia.

I don't understand what's so special about Watson; it all seems like marketing BS to me for a company in its death throes.

xrd 10 hours ago 1 reply      
Funny that he was Chief Learning Officer at Trump University. Doesn't diminish my feelings for him, but interesting.
hammock 10 hours ago 2 replies      
What if Watson analyzed 800 million pages of Dylan critiques and analysis, instead of 800 million pages of lyrics? I bet you could get to the anti-establishment theme. Maybe Watson was just given the wrong set of input data (garbage in, garbage out).
digital_ins 3 hours ago 0 replies      
I loved this article. Not just the term "AI", I've seen startups abuse the terms "machine learning" and "big data" to such an extent that it literally makes me cringe when I hear them.

How many times have you seen a TechCrunch article where the writer parrots the buzzwords the founder has thrown at them such as "x uses machine learning to sync your contacts with the cloud".

drcode 10 hours ago 1 reply      
I sure wish some mainstream journalists would look into the whole "Watson" marketing campaign and apply some fact checking to it.

I like AI projects as much as the next HN reader, but compared to efforts by the other players in this space (Google, Apple, Tesla, Amazon, etc) whenever I hear a new marketing push about IBM Watson project my "BS detector" goes into the red zone.

(That said, it would be awesome if I'm wrong and IBM really is making some genuine advances...)

tekni5 9 hours ago 1 reply      
What is with the poor grammar and spelling errors?

"Recently they ran an ad featuring Bob Dylan which made laugh, or would have, if had made not me so angry."

"Ask anyone from that era about who Bob Dylan was and no one will tell you his main them was love fades."

"Dogs dons but Watson isn't as smart as a dog either."


ramanan 10 hours ago 0 replies      
Reminds me of a similar article covering the work of Douglas Hofstadter, the author of GEB :

The Man Who Would Teach Machines to Think


matchagaucho 7 hours ago 0 replies      
AI winter is coming soon

It would appear so, IBM hype aside. From chat bots to image recognition and playing Go, the media is having a field day around the AI theme.

If this hype feeds Investor and Consumer expectations, the next round of AI startups are doomed to underperform.

ktRolster 9 hours ago 1 reply      
AI winter is coming soon.

AI winter will come if it isn't able to latch onto strong business cases. For the past few years, we've seen a slow uptake of low-grade AI, for example, Siri. As long as those sorts of things continue, the winter won't have a chance to set in.

It's true there is a lot of baseless hype around AI, but there's baseless hype around every new technology (probably around everything that catches people's attention). That said, if someone predicted that Watson is going to die, I would believe them, because they haven't seemed to get much business traction at all.

tps5 8 hours ago 0 replies      
I agree that there's a lot of AI hype and I suspect that we won't see all that much come out of it.

At the same time, there's a bit of a cop-out that goes on when we privilege our own cognitive processes over those of an AI just because our own minds are, more or less, a black-box.

I think he does a lot of that in this article. At the end of the day "human intuition" is just a filler until we figure out what's really going on.

dingo_bat 8 hours ago 0 replies      
>Suppose I told you that I heard a friend was buying a lot of sleeping pills and I was worried. Would Watson say I hear you are thinking about suicide? Would Watson suggest we hurry over and talk to our friend about their problems? Of course not.

Although the author may be right overall, this paragraph certainly assumes a lot, and is probably wrong. Computer systems have been able to make such correlations for some time now.

happycube 9 hours ago 1 reply      
The ~1990 AI Winter came about because DARPA money dried up after the (Pyrrhic?) failure of the Japanese 5th Generation Project and the end of the Cold War. After those things not enough people could afford $xx+K LISP Machines anymore ;)

As long as there's a continual source of money there won't be another one - so basically as long as companies like Google and Facebook are profitable, and preferably money to be made elsewhere, things should be good.

DSingularity 10 hours ago 3 replies      
Most interesting statement is the last one. Author seems to think we are about to enter into another AI winter.

Seems odd given AlphaGo and the recent successes of deep learning.

leoh 5 hours ago 0 replies      
I've been impressed by Watson/Bluemix and I think IBM is on an interesting track. Marketing is not always effective at conveying the particular and stunning work that engineers, such as those at IBM, accomplish.
koytch 3 hours ago 0 replies      
Apparently Watson believes Mr Schank has 'mixed' feelings towards it and IBM. Go to http://www.alchemyapi.com/products/demo/alchemylanguage, feed it the article link and see what happens :)
mathattack 7 hours ago 1 reply      
*People learn from conversation and Google can't have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.*

Am I the only shmuck who thought Siri came from Apple, not Google?

facepalm 49 minutes ago 0 replies      
His argument is that Watson supposedly doesn't have an opinion on ISIS. While I don't know if that is even true, it seems like a very weak argument. Even if it could only "think" in a very limited domain, it could still be useful.

The author mentions himself that today's 20somethings have never heard of Bob Dylan, yet uses Watson's alleged ignorance of Dylan to dismiss it. Yet 20somethings are thinking entities.

Mostly it sounds like sour grapes, because his 1984 book didn't receive the recognition he thinks it deserves.

bitmapbrother 7 hours ago 0 replies      
Usually when you call someone a liar you present plausible proof. No proof was presented. Instead all we got was an opinion of Watson's cognitive abilities from a former Trump University employee.
nxzero 1 hour ago 0 replies      
Really dislike the IBM meetups; they're full of marketing speak and people who have no idea what a load of bs the "sponsor"-heavy presentations are. Meetup should ban groups like this.
ams6110 10 hours ago 0 replies      
They are not doing "cognitive computing" no matter how many times they say they are

Maybe not, but "cloud computing" is getting a little pedestrian. New meaningless names used to market things we've been doing all along are a good way to gin up some interest.

pavedwalden 9 hours ago 0 replies      
Well, this makes me feel a little better that I could never make sense of IBM's Watson advertising. Even after checking out their site, I couldn't figure out what it was for, much less what it was under the hood.
phtevus 9 hours ago 0 replies      
I couldn't concentrate on what the article was saying because there were so many grammatical errors; I kept catching myself re-reading sentences, replacing phrasing or inserting missing words.
balx 2 hours ago 0 replies      
Business Area Limited UK is seeking to expand its investments into innovative computer software projects to turn over about 78 Million USD in medical device , computer development and biotechnologies. If IBM is accepting external investment portfolios.
tacos 9 hours ago 0 replies      
Microsoft SQL Server added various machine learning primitives to their SQL dialect. So not only can you query and summarize past data; now you can select from the future as well. Bayesian, NN, clustering, the same old flower matching demo, it's all in there. If you can jam enough numbers in you can certainly handwave that you're getting insight out. https://msdn.microsoft.com/en-us/library/ms175595.aspx

Of course basic data mining is certainly not where the latest research is, but it covers many cases I see on HN or talked about in big data pitch decks. Regardless, it all seems a lot less fancy when you can get the job done issuing SQL commands that wouldn't confuse anyone who learned SQL in 1978. The whole thing is oversold and now largely commodified, to boot.

If and when this stuff starts to show real results you'll certainly feel it. The first wave of successful connect-the-dot bots will open up so many discoveries that opportunities for human labor will swell. But it's not chess, Jeopardy and a way to mine medical records. That's all obvious corporate bullshit.

thesrcmustflow 9 hours ago 0 replies      
I was wondering how long it would take before someone actually called IBM out on this.
ACow_Adonis 9 hours ago 1 reply      
A pity? IBM is DIRECTLY responsible for me not doing so any more. I've just given up on it because you (IBM) waste my time and insult my intelligence.

And I sought you guys out and your company pissed in my face!

If every time you invite people to your party you serve nothing but turd sandwiches, don't bitch about how we didn't give you credit for the croquembouche you've got in the fridge out the back...

hackney 10 hours ago 2 replies      
But I saw it working in a Bruce Willis movie. I think he was a hitman, lol. Which is exactly my point. Computers will never be any smarter than the EXACT commands we as their 'creators' give them. In essence they will never be but an extension, a tool, of ourselves. But to say they can think? Not a chance in hell.
Fire-damaged Brazilian tortoise receives new 3D shell 9news.com.au
49 points by vladmiller  12 hours ago   19 comments top 5
JetSpiegel 1 minute ago 0 replies      
I was expecting a z shell...
jayeshsalvi 2 hours ago 0 replies      
Here's a cooler one: a 3D-printed prosthetic leg for a turtle https://www.youtube.com/watch?v=kkHHuVAaORE
aaronsnoswell 9 hours ago 1 reply      
Would have been cooler if he received a new 2D shell in my opinion.
Puts 8 hours ago 4 replies      
I actually think the right thing would have been to shoot the tortoise right where they found it. These kinds of things are done more for the selfish joy of experimentation than for the actual benefit of the animal.
oolongCat 9 hours ago 3 replies      
Aww, they missed an opportunity to print a red shell, or a blue one. I wonder how an animal like a tortoise living with others would react if it had a different colour shell.
My time with Rails is up solnic.eu
611 points by lreeves  20 hours ago   362 comments top 58
vinceguidry 17 hours ago 11 replies      
> Now, this line of code is not simple, it's easy to write it, but the code is extremely complicated under the hood because:

Every time I hear someone complain about Rails, I ask myself if this person is really just complaining about how complex web dev is. I've never seen a complaint that passed this smell test.

Yes, Rails abstracts over a veritable shit-ton of complicated domains. The problem is, that complexity isn't going to go away if you decide not to use those abstractions. Every single one of that laundry-list of things that ActiveRecord::Base.create does will have to be implemented again, by you, badly, if you choose to roll your own.

It boggles my mind why developers don't take the web seriously as a domain, like it's not real coding or something. It's the worst instance of bike-shedding I've ever seen.

Rails isn't perfect, by any means. But Rails is not a black box, it's all free software. You are more than free to re-implement anything Rails does yourself and release it as a gem. Modularity and flexibility are built right in. It seems incredibly odd to leave Rails over this. For what, Java? Unbelievable.

matt4077 16 hours ago 7 replies      
If any of the frameworks glorified on HN were subject to the same scrutiny as Rails, Github would need a suicide hotline.

I've recently had the pleasure of working on a new, medium-sized node/react project. Now I was new to a lot of the technology, but I've learned a few and would think I have some experience.

It probably took me a week to get webpack running the way I wanted. The documentation is an atrocious mix of stuff that is outdated (older than 6 months) and stuff that's too new (webpack 2.0) without any indication as to the validity. There're six different ways to get hot reloading/replacement to work, with similar problems. The configuration's syntax is heavily influenced by brainfuck with a /\w/[{\[]/s thrown in for good measure. As a soothing measure, webpack doesn't complain about anything but parse errors with the configuration. The version I was using came with 12 different options to render source maps, none of which worked in chrome (since fixed).

And that's just the linker. I actually had presets to search for, i.e. ["express.js" from:<date-of-last-stable-release> to:<date-of-following-alpha/beta-release>]. node_modules once managed to consume 4TB of hd space (someone managed to create a circular dependency) and even in regular use gets to the point where I'm looking for the elasticsearch plugin for ls *<x>. If you have checkouts of several versions or projects, each one will have its own version of left_pad & friends at around 230MB (just checked with a clean install of mxstbr/react-boilerplate) with no way of deduplication.

I enjoy playing with new technology (currently elm & swift) but I know that it's time "joyfully wasted" right now and don't try to justify it. There's no amount of type-safety bugs that could cost as much time as a new framework's kinks.

pmontra 18 hours ago 4 replies      
I used Rails mostly for small or medium-sized projects, with one- or two-developer teams, all of us freelancers. It's the perfect tool for this kind of work. We can deliver quickly, send invoices, move on to something else or come back later to add features.

Do projects get complicated? Yes, some of them. Mainly because most customers pay for features and not for refactoring or design, which they don't understand (remember, no in-house technical staff). Try to sell them a new page with the price of a major refactoring attached... good luck with that. So sometimes features get kludged together because of budget constraints. I have projects with suboptimal UX (understatement) because they rationally decide to make tradeoffs between usability and costs. It's their privilege. I bet this would happen with any language or framework. Your customers could be more long-sighted than mine, I hope they are.

So, would I move away from Ruby and from Rails? Yes, when customers won't like starting a project with Rails anymore. This means that other tools don't have to cost them more. Do I care about Rails architectural shortcomings? A little, but I appreciate its shortcuts more. They work well in my projects.

orthoganol 15 hours ago 7 replies      
I can't take his opinions too seriously because his 3 central gripes, which he wrote extensively about, boil down to:

1) Too much complexity is hidden by simple interfaces

Those simple interfaces make programming enjoyable and don't break, so...

2) Your app logic is going to be too tightly coupled to core Rails features like AR, helpers, controllers ("your app is Rails")

Yes, you are in fact using the features of Rails when you use Rails. Is this actually a problem?

3) ActiveSupport. "No actual solutions, no solid abstractions, just nasty workarounds"

ActiveSupport has saved me bajillions of hours and never broke anything for clients. I wasn't aware it was such a flawed library, if it is?

Rails is maximized for developer happiness by, among other things, making things simple (yes, which to a large extent means hiding complexity), giving you a bunch of omakase core features that you can in fact use, and giving you a lot of sugary methods for handling things that are tedious. I understand people wanting to try other things and leave Rails, no problem with that, but his stated reasons reveal a lot more about him than about Rails.

matt4077 16 hours ago 3 replies      
I can't grasp the hostile attitude towards Rails on HN. Even agreeing with (some of) the criticism, I'm deeply thankful to Rails, DHH and everyone else involved.

Because before rails 1.0, I worked in php.

Many people may actually be too young to remember, but the jump from php and anything else that was state-of-the-art at the time to rails was without doubt the biggest leap web development ever took.

The most meaningful advances actually weren't just technical but social. It would've been possible to write an excellent webapp before Rails, but just about nobody ever did. The opinionated nature of Rails many are complaining about today was a revolution because it taught everyone using it good architecture. You can nitpick about any one of those opinions, sure. But back then it was common to join a project with >100kloc with directories containing password.php, passwprds.php, pass.v3-old.inc, large_image.head.2002-v1.jpg & employees.xls. The quality of almost any Rails project I have seen is, in comparison, excellent. I'd say a new team member on a Rails project was productive in less than a third of the time it took in non-Rails projects at the time.

So, to anyone complaining: I'm pretty sure that looking into any of your strongest-held beliefs on web dev, you'll discover that Rails had significant influence in creating them in the first place. To do something like that, pretty much as just one guy at the beginning, should afford someone a certain amount of respect.

tptacek 19 hours ago 9 replies      
This is how Rails works, classic example:

You see a simple line of code, and you can immediately say (assuming you know User is an AR model) what it's doing. The problem here is that people confuse simplicity with convenience. It's convenient (aka easy) to write this in your controller and get the job done, right?

Now, this line of code is not simple, it's easy to write it, but the code is extremely complicated under the hood because:

- params must often go through db-specific coercions

- params must be validated

- params might be changed through callbacks, including external systems causing side-effects

- invalid state results in setting error messages, which depends on external system (i.e. I18n)

- valid params must be set as objects state, potentially setting up associated objects too

- a single object or an entire object graph must be stored in the database

This lacks basic separation of concerns, which is always damaging for any complex project. It increases coupling and makes it harder to change and extend code.

This feels like the core of the argument of the whole post. It does not seem correct. In isolation, the call to User#create seems magical. But there's not enough context here to criticize it. We don't know enough to say whether there's inadequate separation of concerns.

No matter how we handle "users", we are going to need to do all 6 things in the list the author presented. No matter how we structure those 6 things, there is going to be a single entrypoint function that kicks them off --- end users won't be pushing 6 different buttons to drive them. So: what's the better design here?
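For comparison, here is a framework-free plain-Ruby sketch of those six responsibilities pulled apart behind a single entry point. All class names (`Coercer`, `Validator`, `Repository`) are hypothetical, not Rails APIs, and persistence is simulated in memory:

```ruby
# Hypothetical, framework-free sketch of the six steps as separate objects.
class Coercer
  # Step 1: db-specific coercions (here: a string age becomes an Integer)
  def call(params)
    params.merge(age: Integer(params[:age]))
  end
end

class Validator
  # Step 2: validation; returns a list of error messages (step 4)
  def call(params)
    errors = []
    errors << "name can't be blank" if params[:name].to_s.empty?
    errors << "age must be positive" if params[:age] <= 0
    errors
  end
end

class Repository
  # Steps 5-6: take the validated state and persist it (in-memory here)
  attr_reader :store

  def initialize
    @store = []
  end

  def persist(params)
    @store << params
    params
  end
end

# The single entry point that kicks everything off:
def create_user(params, repo)
  coerced = Coercer.new.call(params)
  errors  = Validator.new.call(coerced)
  return { errors: errors } unless errors.empty?
  { user: repo.persist(coerced) }
end

repo = Repository.new
puts create_user({ name: "Ada", age: "36" }, repo)  # persisted
puts create_user({ name: "", age: "0" }, repo)      # rejected with errors
```

Whether this buys you anything over `User.create(params)` is exactly the trade-off under debate: the pieces are now testable in isolation, at the cost of the one-liner.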

cynusx 18 hours ago 1 reply      
I think Rails has plateaued in terms of improvements. The article is pointing out the tight coupling of ActiveRecord with business logic, which is the consequence of a belief in the Rails community that I strongly disagree with: 'fat models, skinny controllers' has caused a lot of mayhem, as minor database updates (or update intentions) trigger a cascade of actions.

This behaviour runs counter to the Model 2 MVC implementation that Rails is using, as controllers should really be in charge of coordinating (helped by specialized classes if need be).

The author correctly points out that the way Rails deals with views is painful; however, Rails views are not tied to the model at all, they are tied to controllers, and this can be modified if need be.

I personally maintain an approach of tying dedicated Ruby classes to view files (e.g. SidebarComponent) and composing the view in the controller using a chain of these classes. This approach is much more object-oriented and avoids the weird pass-along view hierarchies that many Rails projects have.
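A rough, framework-free sketch of that pattern (class and method names here are made up for illustration, and rendering is plain string concatenation rather than ERB):

```ruby
# Hypothetical view-object pattern: each view gets a dedicated class,
# and the controller composes the page from a chain of them.
class SidebarComponent
  def initialize(user_name)
    @user_name = user_name
  end

  def render
    "<aside>Signed in as #{@user_name}</aside>"
  end
end

class ArticleComponent
  def initialize(title, body)
    @title = title
    @body = body
  end

  def render
    "<article><h1>#{@title}</h1><p>#{@body}</p></article>"
  end
end

class PageController
  # The controller coordinates: it builds components and joins their
  # output instead of passing data down through a template hierarchy.
  def show
    [SidebarComponent.new("ada"),
     ArticleComponent.new("Hello", "World")].map(&:render).join("\n")
  end
end

puts PageController.new.show
```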

There is much to improve about Rails, but the project doesn't seem to absorb more object-oriented approaches over time; it is tilted heavily towards DSLs for everything and doesn't value creating a more specialized class hierarchy to encapsulate more complexity.

I don't see a need to move away from Rails yet though, as you can easily work inside the framework in a more object-oriented way. I guess you can characterize my approach as skinny models, skinny controllers, fat business logic objects, and every partial is powered by a view object.

clessg 19 hours ago 4 replies      
I too have stopped using Rails. To other Rails refugees, I highly recommend the up-and-coming Phoenix Framework for the Elixir language: http://www.phoenixframework.org/.

Simple as in simple, fast, productive, beautiful Ruby-like syntax. Favors a functional and immutable workflow over object-oriented and mutable spaghetti code.

Locke 18 hours ago 5 replies      
Nostalgia warning: I also grew frustrated with Rails / ActiveRecord and jumped to Merb / DataMapper with excitement. Perhaps it will sound like a small thing, but I remember feeling especially frustrated that every ActiveRecord model became such via inheritance. This drove me crazy:

 class User < ActiveRecord::Base
Why should my `User` extend from `ActiveRecord::Base`? Is a User of my app also an ActiveRecord::Base? What is an ActiveRecord::Base anyway?

So, when DataMapper came along and favored composition over inheritance I was sold.
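The contrast can be sketched in plain Ruby. The `Persistable` module below is a hypothetical stand-in for DataMapper's mixin style, not a real API, and `RecordBase` stands in for `ActiveRecord::Base`:

```ruby
# Inheritance style (ActiveRecord-like): User *is-a* base-class thing.
class RecordBase
  def save
    "saved via inherited machinery"
  end
end

class UserViaInheritance < RecordBase
end

# Composition style (DataMapper-like): persistence is mixed in, and the
# class keeps its own identity. `Persistable` is hypothetical.
module Persistable
  def save
    "saved via mixed-in machinery"
  end
end

class UserViaComposition
  include Persistable
  attr_accessor :name
end

puts UserViaInheritance.ancestors.include?(RecordBase)   # true
puts UserViaComposition.ancestors.include?(Persistable)  # true
puts UserViaComposition.superclass                       # Object, not a framework class
```

Both get a `save` method, but only the composed class answers "what is a User?" with "a plain object", which is the appeal Locke describes.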

However, my experience with DataMapper was that it didn't have the stability of ActiveRecord, nor did it reach feature parity.

And, then Rails killed Merb. At the time I thought it was a tragedy. The competition Merb provided did improve Rails, it would have been nice if that competition had been longer term, though. On the flipside, I wonder if there would have been enough community to support two large Ruby web frameworks.

In the end I made peace with Rails and ActiveRecord (and, believe it or not, even ActiveSupport!). I don't have great love for Rails, but it is a powerful tool, so I accept it for what it is.

That said, I think the author's criticisms are well stated and hopefully there will be some influence on the future direction of Rails, etc, and the way devs approach Rails.

jon-wood 19 hours ago 1 reply      
I'm somewhat split on the author's opinion here. I also picked up Rails back in the day (I think the first version I installed was 0.8), and I have a bit of a love/hate relationship with it.

As a framework for building conventional HTML-templated, database-backed websites I still think it's near unbeatable, especially with the addition of a few common gems. However, it so often gets shoehorned into other places - API backends, rich client apps, and massive systems. Places where the Rails conventions rapidly go from everything falling into place to getting in the way and needing to be worked around, or heavily tuned.

I completely agree that ActiveRecord is the core of why big Rails applications are complex to work with and often exhibit disproportionately poor performance. It doesn't lend itself well to composition, and results in data persistence being intertwined with business logic. That's fine for early prototypes, and in all honesty will get you to market quickly, but it makes extracting the concepts you discover during development into a coherent API much more complex than I'd like.

These days I still use Rails for the aforementioned HTML-generating web applications, but it's often acting as an API client to systems built on top of a Grape-based backend, with an API built around service objects and a simple data access layer.

Doctor_Fegg 16 hours ago 2 replies      
I read this with a lot of head-nodding but also a great sense of sadness.

I agree with pretty much every brickbat hurled at Rails in this article, except for one thing:

> Both projects were ultimately killed by Rails as Merb was merged into Rails, what turned out to be a major Rails refactor for its 3.0 version. DataMapper lost its community attention and without much support, it never evolved as it could if Merb was not merged into Rails.

DataMapper wasn't killed by Rails absorbing Merb. DataMapper was killed by "Things You Should Never Do, Part I":

> "They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch."


DataMapper 2 was insanely ambitious, and true enough, it was never finished. A core subset was salvaged as ROM (Ruby Object Mapper), which has a certain artisan purity but is a long way from hitting the readable/understandable sweet spot that DataMapper originally had. But DataMapper 1 was abandoned by its developers. Pull requests languished and there weren't even any maintenance releases. I idled in the DataMapper IRC channel for a while and pretty much every query was answered with "don't bother, wait for DM 2". Unsurprisingly, people switched to ActiveRecord instead.

I still use DataMapper. I use it in my own framework, which is plainly simple compared to Rails and no doubt makes a load of mistakes, but they're the mistakes I've chosen to make (which, as per DHH, is "opinionated" software). I would love DataMapper to be, if not revived, at least slightly maintained. There is life outside Rails, but you have to want it.

agentgt 1 hour ago 0 replies      
I have seen posts like this time after time: an experienced developer eventually hating an opinionated and/or tightly coupled framework.

I too eventually came to a similar conclusion albeit on different platforms. Frameworks get you started quickly but you will eventually hate them. And I do mean framework and not library (there is a subtle but massive difference... yes the oxymoron was intentional).

It's even worse in JavaScript land. You could rewrite this post and just insert one of the many JavaScript frameworks, like Angular.

captain_crabs 13 hours ago 0 replies      
I'm a full-time Rails developer, and I've run into the exact pains that he's talking about. By and large, I've fought with ridiculous things as projects grew, and run into extremely 'magical' bugs. It's frustrating as hell. But I've found the way through them is actually to lean into ActiveRecord, not away from it.

When it really boils down to it, ActiveRecord is a perfect pattern to deal with the "submit a form, validate it, run some logic, return the results" round-trip that's the core experience of web applications. Do you have validated data you need to do really complex logic on? Sweet. Send it on over to some POROs. Have a complex query that isn't easy to cache? Nice! ActiveRecord will get out of your way. What's really hard is figuring out how to frame the problem in a way that makes Rails a great solution for it. This part is really, really hard, I think in a lot of ways akin to art (and by 'art' I don't mean some deeper honor or aesthetic - just that there's no guidance), since the principles and patterns actually come from your domain and not somewhere else. And the developers I know who really hate the way Rails does something, especially ActiveRecord, are huge on applying patterns.
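The "send it on over to some POROs" idea, sketched without any framework (the class name and attributes are hypothetical): the model layer stays thin and hands already-validated attributes to a plain object that owns the complex logic and is testable on its own.

```ruby
# Hypothetical PORO receiving already-validated data from a model layer.
class DiscountCalculator
  TIERS = { "basic" => 0.0, "gold" => 0.1, "platinum" => 0.2 }.freeze

  def initialize(plan:, order_total:)
    @plan = plan
    @order_total = order_total
  end

  # Complex business logic lives here, with no framework dependency.
  def amount
    (@order_total * TIERS.fetch(@plan, 0.0)).round(2)
  end
end

# In a real app this hash would come from a validated ActiveRecord model:
attrs = { plan: "gold", order_total: 250.0 }
puts DiscountCalculator.new(**attrs).amount  # 25.0
```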

When I've taken the time to think through how to structure things, really deeply, the applications are a pleasure to change and modify even when I come back much later. And for me, the proof is in the pudding: those aforementioned developers who hate ActiveRecord don't complain! They'll see it, give a small 'huh,' then go do what they need to do just fine. But when I don't, the apps turn into twisted spaghetti nightmares. I see a massive sub-current of dissatisfaction with Rails, and I think the reason for this is just a subtle but markedly different priority of thought. I can't articulate it yet, I just know what it looks like when I see it.

Anyway, I love Rails as it is now. Soon I hope I can deploy a single application across web and mobile platforms natively, along with supporting them all, as one guy. That's insane. Really like this blog post because I've got some things I'd like to be able to do with the ActiveRecord API, but I'm not really sure how. I don't know why the thought never occurred to me to just ask!

fishnchips 16 hours ago 1 reply      
I'm not surprised by this development. I'm familiar with Piotr's code and it's hard to miss the fact that he's writing Ruby as if it were a functional language. Sharing inspiration between languages and paradigms is an awesome thing, but if you're going to try hard to write Ruby as if you were writing Haskell, you'd probably be much better off writing Haskell in the first place. Back to the original article: I think it says more about Piotr's personal path as a developer than about Rails and the Ruby community at large.
brightball 19 hours ago 1 reply      
The thing that has blown my mind about Elixir and Phoenix is how it gets everything right. It's the clear migration path from Rails.
adamors 18 hours ago 1 reply      
I have 2 main concerns going forward with any Rails alternative (ie. Trailblazer and the like): community support and jobs.

I have no idea how big the community behind each library/framework is and honestly I don't have the time to properly test-drive multiple and research the health of their respective ecosystems. With Rails I can rely on 10-15 gems that can help me with 80% of my projects. Perhaps I overestimate my reliance on 3rd party code, but that leaves me with my second issue.

Ruby jobs are de facto Rails jobs. I have weekly email newsletters set up for Ruby jobs and in the last 3+ years only a handful were Sinatra/Rack based. It would be interesting to see a reliable source of non-Rails Ruby jobs out there.

cloudjacker 18 hours ago 6 replies      
What I'm trying to understand is why all these coding academy students are learning Ruby on Rails:

To make them have to go back to coding academy school in two years?

So that they maintain legacy codebases and don't touch the new shiny features the experienced engineers are working on until they get their training wheels off?

Because there is actual demand for Ruby on Rails despite all the hate and sunsetting of Rails I see on HN?

vonklaus 19 hours ago 1 reply      
Reminds me of Zed Shaw's critique of Rails[0], which explains many of the shortcomings of Rails. Zed wrote the Mongrel server for Rails. Assume this article is massively politically incorrect & vulgar as hell, just as a heads up. Really good read though, and it does explain some of the interior politics of Rails and how it evolved.


chuhnk 18 hours ago 0 replies      
I was quite happy to walk away from the Ruby on Rails world 4 or 5 years ago. I basically escaped before the onslaught of Rails 3.0 and what came next. I think somewhere around the adoption of CoffeeScript I got pretty concerned.

While at Google I became aware of Go and what that looked like at scale. I would highly recommend trying out Go. I think the simplicity will be a refreshing change but there will also be some familiarity coming from the Ruby/C world, can't quite pinpoint why but just feels that way.

And if you're looking for frameworks, shameless plug, I'm working on this https://github.com/micro/micro.

Bombthecat 7 hours ago 0 replies      
I like Rails. A lot. The syntax is amazing. Period. If you can get a gem or get what you need from Rails, it is simply close to amazing.

But dare to go just one level down (the first level being what you get with Rails and, for example, extending ActiveRecord) and it becomes a horrible mess of dependencies and magic structures.

I never found a framework so hard to modify. ( for example writing your own multi category / multi tag gem)

That's when I stopped web dev. It was just mind-boggling. This was several years ago, I think around version 3 or even 2? And every version broke the ActiveRecord stuff...

I really thought Rails would catch on in Germany. But if you look on Indeed there are literally like five jobs in Germany... Maybe I'm not alone with that problem...

Ericson2314 16 hours ago 1 reply      
Now, I never use dynamic languages and never thought Rails was a good idea. But I'm glad I read as far as the "Convenience-Oriented Design" section and the simple-vs-easy quote. I've seen many designs which sacrificed consistency or orthogonality to make the common case as tiny as possible, and struggled to articulate what was wrong when countered with "how simple the tool is to use". Hopefully this will give me some better language.
mark_l_watson 15 hours ago 0 replies      
I usually find "I stopped using XXX" type articles to be a bit negative. That said, I stopped using Rails years ago myself, so the linked article resonates with me. My go-to web frameworks are Ruby+Sinatra and Clojure+Ring+Compojure. Sometimes I also use Meteor.js, but it is more like Rails in that it is a complete package of everything you are likely to need, pre-configured.
chaostheory 6 hours ago 0 replies      
I'm not going to argue.

However, if you've run into the same problem where you feel like you need to make either a fat model or a fat controller in Rails, here's a simple solution:


tunesmith 18 hours ago 0 replies      
Having little experience with Rails, it's the tone of that "Crazy, Heretical" article [1] that really strikes me. That tone where you're laboring to describe something that really truly does make sense when you're in the company of a bunch of people that just don't want to hear it.

There's too much purism in those kinds of arguments, arguments that really are more about spectrums. It sounds like Rails is really great for learning the shallowest 40% of a wide variety of programming concerns, and a lot of web solutions for small clients aren't ever going to dip below that 40%. And as soon as you get beyond that 40%, asking questions like "but what if it doesn't make sense to do it that way?" and "what is that weird runtime bug that doesn't happen in my development environment?" and "why can't I refactor without being scared I'll introduce a weird regression bug?" then it's totally fine to be attracted to the idea of static typing, compilation, and libraries/frameworks that are more explicit.

[1] http://jamesgolick.com/2010/3/14/crazy-heretical-and-awesome...

__mp 19 hours ago 6 replies      
Scrolling in Firefox (OSX) is very slow.
im_down_w_otp 14 hours ago 0 replies      

 Secondly, avoiding mutable state is a general, good advice, which you can apply in your Ruby code. Ditching global state and isolating it when you can't avoid it is also a really good general advice.
As an Erlang programmer I agree with the general sentiment here, but wouldn't doing this in Ruby really lean hard on one of the pathologically nasty Ruby runtime issues? Which is that it doesn't release memory back to the system?

When every new operation on a data-structure starts by producing a copy of the previous version it seems like you'd really want a runtime which is efficient and intelligent about how it manages memory?

Unless that idiosyncrasy of Ruby has changed since I last used it several years ago. In which case. Nevermind. :-)
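The style being discussed - every "update" producing a fresh copy rather than mutating in place - can be sketched in plain Ruby (the `with` helper is hypothetical, roughly what immutable-struct libraries provide), and it is exactly this per-update allocation that makes the memory concern relevant:

```ruby
# Immutable-update style in plain Ruby: every "change" builds a new
# frozen hash; the previous version is never mutated.
def with(state, **changes)
  state.merge(changes).freeze
end

v1 = { count: 0, name: "job" }.freeze
v2 = with(v1, count: 1)
v3 = with(v2, count: 2)

puts v1[:count]     # 0 -- the old version is untouched
puts v3[:count]     # 2
puts v1.equal?(v2)  # false: a fresh object per update
```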

douchescript 16 hours ago 0 replies      
The things that suck in rails are the views, the asset pipeline and the stupid helper methods for routes that are so complicated for nested routes it's a joke. The database part is actually the good part if you do it right.
elliotlarson 9 hours ago 0 replies      
I've been developing apps with Rails since it was announced. It's been an amazing tool/toolchain, providing a very useful level of abstraction that allows rapid development.

Recently, I've been doing a lot of Go (Golang) development. For me, it's sort of the anti-Ruby-Rails. I really like the fact that it requires me to "roll my own" in so many situations. I need to understand what's happening under the hood to make things work.

But, the abstraction/magic of Rails is useful. Speed to market with trustable, battle tested magic is a profitable proposition.

I want to introduce Golang into my current project, but I haven't quite found the right place for it yet. Rails makes it so much easier and faster to add functionality at our scale.

dk8996 19 hours ago 5 replies      
I personally can't wait for Rails to be a distant past. So many start-ups got burnt badly by using Rails -- easy to start, hard to scale, maintain, and extend. Rails is too opinionated, and if your use-cases don't fit into the "Rails way" you're screwed.
kyledrake 15 hours ago 2 replies      
I'm not going to pretend it's better than Rails, but I implement most of my projects as Sinatra pseudo-frameworks, with Sequel for the ORM. Sequel is a solid ORM that has a reputation for being stable and reliable. Sinatra is just simple (vs easy, sometimes) and I like the way it integrates routing directly. It's good enough to power Neocities anyway, and I've baked in some of the niceties you'd expect from a larger framework (such as testing).
jakelarkin 13 hours ago 1 reply      
my two cents on Rails (as someone who's worked with Rails/Ruby, Node, Scala, Golang in some depth)

- ORM is bad at any sort of scale/complexity.

- Net IO that looks like property access makes it too easy for devs to do bad things, N+1 queries etc.

- ORM abstraction is not helpful when you know the exact efficient query you want.

- ActiveRecord Models + the Ruby module system encourage OO bloat.

- ActiveRecord database drivers are too numerous and not well implemented. Default schema generation includes many SQL anti-patterns.

- ActiveRecord has too many public interfaces, too much meta-programming. Hard to understand scope, Hard to analyze and improve performance.

- ActiveSupport monkey-patches too much of Ruby. Need a very deep understanding of the language to understand what's going on at a non-superficial level.

- Rails/Ruby community places too much emphasis on "clever" DSLs / APIs

- If your stack/framework choices limited to dynamic languages, Rails is a lot less efficient than Node and for web apps you need JS developers anyway.
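The "Net IO that looks like property access" point above can be simulated without a database. A toy in-memory store (not ActiveRecord - all names here are made up) shows the shape of the problem: lazy per-object access issues one "query" per row, while an eager batch load issues one for the whole set.

```ruby
# Toy store that counts simulated queries to show the N+1 shape.
class Store
  attr_reader :queries

  def initialize
    @authors_by_post = { 1 => "ada", 2 => "ada", 3 => "bob" }
    @queries = 0
  end

  # Lazy, property-access style: one query per call.
  def author_for(post_id)
    @queries += 1
    @authors_by_post[post_id]
  end

  # Eager, batch style (ActiveRecord's `includes` plays this role):
  # one query for the whole set.
  def authors_for(post_ids)
    @queries += 1
    post_ids.map { |id| @authors_by_post[id] }
  end
end

lazy = Store.new
[1, 2, 3].each { |id| lazy.author_for(id) }
puts lazy.queries   # 3 queries for 3 posts (the N in N+1)

eager = Store.new
eager.authors_for([1, 2, 3])
puts eager.queries  # 1 query total
```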

a2800276 15 hours ago 3 replies      
What I don't understand is why people feel the need to proclaim that they're leaving xyz. Why not just focus on the positive and put your energy into the new stuff you'll be doing? It sounds like someone who just broke up and keeps claiming they're totally over it but just can't stop talking about what a bitch their ex is. If you're planning to do Haskell, Elixir, Clojure and Scala, write about that.
artellectual 19 hours ago 0 replies      
Although I may not agree with your perspective entirely, I respect it.

I think everyone has different needs, and you have your personal needs which Rails no longer satisfies, so you are moving on.

Rails still solves a lot of problems for a lot of people I believe. I work as a consultant for a few companies and in some cases I see people losing time and money on what would be resolved immediately if they had just made the right call by using Rails.

Good luck on your journey, and remember the Rails community will always have its door open for you :). Your projects on GitHub look exciting, I will take a look at them. Very cool stuff!

themgt 17 hours ago 0 replies      
I've also seen the writing on the wall for leaving Rails/Ruby. I think a lot is down to Ruby the language simply not being well-suited for an increasingly multi-core/concurrent future. That said, a more reasonable approach is to begin transitioning a few projects at a time, and make sure you've solidified on your new tools of choice before chucking the old ones.

I remember a few years ago I toyed with ditching Ruby for Node.js. I'm glad I dipped my toes in the water and waited for Elixir.

zinssmeister 10 hours ago 0 replies      
Rails is great. And of course it isn't and will never be perfect. Obviously the author would love some alternative frameworks, mainly so he can use some more and some less exotic code patterns. Maybe after almost a decade of writing CRUD apps in Rails he is a bit bored? I can't blame him and seeing the Ruby community so focused on basically one framework is a tiny bit sad.

So I definitely see where he is coming from with all this, but truth be told it was never Rails' intent to be a precision instrument. It has been a reliable framework that makes a lot of developers very happy. A great reliable car for the masses, so to speak. But not a Ferrari.

kazinator 17 hours ago 0 replies      
> First of all, it worries me that many people react like that. We're discussing a tool, not a person, no need to get emotional.

Laughably, no.

Code is written by people; it's essentially a set of ideas that execute and do things in accordance with the vision of the creators. The creators poured themselves into the tool. Code is an extension of the self.

(If it isn't, it should be; people take the best responsibility for code when it's like that, and it's probably the main essence that separates OSS from most proprietary work. People don't fully invest themselves into creations that can completely disappear out of their lives forever upon the next organization restructuring.)

fiatjaf 19 hours ago 2 replies      
This should come as no surprise, although I don't see anyone criticizing Rails anywhere.

I've never written a line of Ruby, but trying to read Rails code evokes all of this for me: it seems like a framework created with the sole purpose of hiding everything possible from the programmer.

hhw3h 15 hours ago 0 replies      
Is it uncommon to use other patterns within rails like service objects? I like using Draper, Pundit, and other gems which provide decorator and policy objects.

I prefer not to use Helpers and AR callbacks (generally Concerns too), as I don't think they encourage good OO even though they are officially part of Rails.

Ultimately it comes down to what your goal is. Is it to deliver some business value fast to end users? Build a new product with a small team? Then I think rails is a great tool to achieve that.

dorianm 14 hours ago 1 reply      
TLDR: People don't do what I want them to do. They don't think the way I want them to think. And my way of thinking is way better than theirs.
kfir 13 hours ago 0 replies      
Without being negative anyone else found this statement a bit strange - "as my bachelor thesis was about Rails"

* I am not aware of any CS program that does a thesis at the bachelor level.

* It is hard for me to imagine getting approval for doing a thesis on "Rails", since it is mostly a CRUD web framework.

Nemant 17 hours ago 5 replies      
What is a good alternative?
dedsm 13 hours ago 0 replies      
I've always said Rails is good for short-lived build and forget projects, it's too "magic" for my taste.

I can't imagine what it must feel like to receive a mid-to-large sized Rails project to maintain.

nine_k 16 hours ago 0 replies      
This article formulates well why I strongly prefer libraries to frameworks.

A framework dictates the structure of what you are doing. Libraries can be combined, replaced, and used in various ways, fitting the task at hand rather than the most common case.

yodasays 16 hours ago 0 replies      
Explicit is better than implicit.
gonyea 19 hours ago 1 reply      
This isn't a Rails gripe. He only seems to know Ruby/Rails and lashes out at it. Frameworks = Lots of abstraction. At least the abstractions in Rails aren't leaky.
samfisher83 19 hours ago 0 replies      
Trying to visit this page makes my browser lock up.
tangue 19 hours ago 1 reply      
Rails developers are spending more time fighting Rails these days than improving it. I have the feeling that good coders have left the ROR ship and that Rails is now the framework for code-schools and learn-to-code bootcamps. It's a bit sad, but as long as Rails is tied to Basecamp I don't see any solutions.

The author is right, competition is important; the Rails/Merb scenario is the open-source version of Microsoft's E.E.E. (embrace, extend, extinguish).

funkaster 17 hours ago 0 replies      
I have the same sentiment about Rails. I built a big warehouse/delivery system (routing, etc.) in 2006 with Rails 1.2.3, and it's still being used to this day. After that, I never looked at Rails again. All of my small and medium web apps were made with Sinatra. After reading this article, I'll take a look at Hanami.
vintageseltzer 18 hours ago 1 reply      
Is Django any better?
VLM 17 hours ago 0 replies      
Sounds like he wants Arachne. Which isn't written yet. He even mentions learning Clojure. Linked article stops just short of mentioning it, but there are some themes that are common between the linked article and the story of what Arachne should fix.


The concept of the zeitgeist is interesting. A lot of general dislike of Rails is in the air and the only question seems to be what the replacement will be, not if there will be a replacement. Naturally the ruby guy thinks it'll be a new OO ruby framework which has always done the impossible before and the Clojure people think it'll be the usual immutable functional magic which has always done the impossible before. Best plan for the future is wish them all luck and wait and see. I don't see any other useful strategy at this instant?

My money is quite literally on the Clojure guys, I was one of the backers of the Arachne kickstarter, but best of luck to Hanami and rom.rb and all the new wave ruby stuff. One of you revolutionaries had better succeed, or else...

Some of it may be the natural result of success and age. A near decade of continually having to revamp my Rails code "because the guys at Hulu really need it" or "this is very helpful for GitHub" is inevitably going to eventually make me say (if you'll excuse the profanity) F them, I'm sick of Rails and authoritarian monoculture in general, and Hulu is very nice but it can just go help itself; I have work to do that isn't "chase the latest fad that someone else needs". It may be natural for the lifespan of a framework to enter a stage where staying on the bus is more expensive than hopping off, maybe even hopping off anywhere. I tried some Play Framework projects and felt the siren's pull of static typing, but after a few years, unless you keep up with changes, the technical debt just accumulates with each framework revision until I'm sick of seeing releases instead of pumped.

I'm talking about projects that last many years, not a semester or so. I'm not sure anyone has cracked the chronological nut of "here's a magic solution for long-lived projects". Assuming there is a magic solution. Perl CGI, I suppose (only slightly tongue-in-cheek).

programminggeek 18 hours ago 1 reply      
The elves are leaving Middle Earth.
pm24601 13 hours ago 0 replies      
Another point that was subtly made, but needs highlighting.

If you are programming in Ruby, you are ONLY doing a front-facing webapp. There are lots of servers out there that run headless. The Ruby ecosystem is completely dominated and controlled by Rails. As a result, ORM mappings and gems are completely focused on a web-response-centric world.

As monolithic ruby apps are broken into microservices, ruby as a language will also be abandoned for those microservice back-ends - simply because the ruby ecosystem has only one way of viewing the world.

angersock 18 hours ago 0 replies      
Thank you for that well-reasoned rebuttal! I especially like your deep insights into the different issues the author is having.
blairanderson 19 hours ago 0 replies      
unclebucknasty 18 hours ago 1 reply      
A few years ago if you said anything negative about Rails, you were either pure evil or unbearably stupid; or both.

If you didn't fully embrace tableless design with CSS, despite the latter's glaring deficiencies for that task, it was the same treatment.

I also recall great pressure to adopt EJB with its deeply flawed architecture, etc.

I could go on, but the point is that there seems to come a time when all of these things face a moment of truth, wherein sober minds feel a little freer to speak aloud on their shortcomings. This frequently snowballs into all out backlash against the tech in favor of The Latest Shiny Thing.

The enforcers of what's hip and smart for everyone now move on to browbeating Developer Land with the new tech, and we all allow it to happen again. Why do we keep repeating this process when we've all seen the movie so many times?

btcboss 19 hours ago 0 replies      
Another Rails is good/bad post. Just like the rest....
douchescript 16 hours ago 0 replies      
Oh no, the Merb heresy comes back to haunt us. No, Merb was not better; it was just different, and it sucked in some different ways.
soheil 16 hours ago 1 reply      
> rather than hitting F5. Just saying.

QED. My hypothesis was confirmed. Up to this point I was pretty sure he'd been living under a rock, possibly for decades, complaining about how hard programming can sometimes be, until I realized he is an IE user.

Show us a better solution without sacrificing everything that rails has given us and maybe we will listen.

Show HN: Doctest C++ single-header testing framework github.com
39 points by onqtam  11 hours ago   7 comments top 4
optforfon 3 hours ago 0 replies      
"You can just write the tests for a class or a piece of functionality at the bottom of it's source file - or even header file!"

Woah.. this makes it so easy I might even use it. I'm really not a fan of TDD b/c it explodes the number of things you have to maintain - but the way this is rolled in with the documentation makes it quite appealing. Thank you so much for your work. If I start to use this regularly, I'll make sure to donate.
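The "tests at the bottom of the source file" idea has a rough analogue in Python's standard-library doctest module, where examples in a docstring double as executable tests. A minimal sketch (the `add` function is just an illustration, not from the doctest project):

```python
def add(a, b):
    """Return the sum of a and b.

    The examples below are documentation *and* tests: doctest
    executes them and compares the output with what is written.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

if __name__ == "__main__":
    # Running the module directly checks every docstring example.
    import doctest
    doctest.testmod()
```

Same appeal as described above: the test lives next to the code it documents, so there is nothing extra to maintain.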

gravypod 10 hours ago 0 replies      
I think this is great. The best part of this is that there has been an obvious large amount of thought put into the programmer-interface to this library.

It just plain looks great. I'll have to give it a try some time soon.

RossBencina 10 hours ago 1 reply      
I would like to see a benchmark that compares run-time performance with Catch.
je42 10 hours ago 2 replies      
weird they didn't compare with clang...
India is set to launch a scale model of a reusable spacecraft on Monday bloomberg.com
126 points by adventured  8 hours ago   48 comments top 12
balakk 8 hours ago 3 replies      
runewell 16 minutes ago 0 replies      
Cool! The more common reusable rockets become, the more we can assume their cost benefits and apply them to even more ambitious space projects.
dmode 7 hours ago 2 replies      
"The final version will take at least 10-15 years to get ready."

This was at the bottom of the Hindu article. How does this compare with SpaceX and Blue Origin's timelines ?

dingo_bat 8 hours ago 5 replies      
Some news outlets have been calling it a "space shuttle", which is highly inaccurate considering that this has no plans of being for human use. I'm happy with the progress ISRO has made but lying/deception will not help anybody.
linux_devil 6 hours ago 0 replies      
The Indian Space Research Organization, or ISRO, plans to test two more such prototypes before the final version, which will be about six times larger at around 40 metres and will take off around 2030. Source: http://www.ndtv.com/india-news/india-all-set-to-launch-its-o...
arjie 6 hours ago 0 replies      
Video would be cool. Something I really like is being able to watch all these launches, famous or otherwise. On Youtube, you can go all the way to the Saturn SA-1 launch: https://www.youtube.com/watch?v=-0-8Pd7fK9w. Fantastic stuff.
deanstag 8 hours ago 3 replies      
'Everything went according to the projectory,' he said, adding that the winged space plane will not be recovered from the sea. - Is that okay? Do a lot of missions around the world consider the sea a dumping ground?
yoda2 7 hours ago 0 replies      
Rather than focusing on competition between different organizations we should focus on development happening in space science in recent years.
Allamaprabhu 8 hours ago 0 replies      
dang 6 hours ago 2 replies      
We detached this subthread from https://news.ycombinator.com/item?id=11751948 and marked it off-topic.
hackaflocka 8 hours ago 3 replies      
My suggestion to India: focus on encouraging the would-be Elon Musks in your population. Give them permission to do it. Remove the barriers to them doing it.

You'll get there much faster if the private sector were doing it.

Show HN: StrelkiJS - small library to index and SQL-join in-memory collections github.com
27 points by kidstrack  9 hours ago   4 comments top
polskibus 6 hours ago 2 replies      
Does anyone know of data structures better suited to this goal (i.e. fast in-memory joins with multiple dimensions, star-schema OLAP queries, etc.)?
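Not an answer about StrelkiJS internals, but the classic structure behind fast in-memory equi-joins is a hash index on the join key: build it once over one side, then probe it per row, for O(n + m) instead of O(n * m). A generic Python sketch of the idea (the sample data is made up):

```python
def hash_join(left, right, key):
    """Equi-join two lists of dicts by building a hash index on one
    side and probing it with the other."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    out = []
    for row in left:
        for match in index.get(row[key], []):
            out.append({**row, **match})
    return out

users  = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
orders = [{"id": 1, "total": 10}, {"id": 1, "total": 5}]
print(hash_join(users, orders, "id"))
# -> [{'id': 1, 'name': 'Ada', 'total': 10}, {'id': 1, 'name': 'Ada', 'total': 5}]
```

Multi-dimension queries are usually layered on top of this: one index per frequently-joined key, at the cost of memory and index maintenance.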
Generation Nintendo filfre.net
144 points by danso  15 hours ago   38 comments top 6
nzonbi 12 hours ago 4 replies      
I remember the first time I played Super Mario Bros. on the NES. I was so absolutely blown away that words don't do it justice. I had previous experience with many consoles and computer games. But this thing - Mario Bros. - was so incredibly well designed that it was beyond unbelievable. This little box, packing this strange, exotic world, full of inexplicable, surprising creatures and devices. With many different areas full of mysteries. Reinforced by the perplexing, hypnotizing music; I can still clearly hear it. The amazing sound effects; I can still clearly hear the sound of entering a pipe and descending into the underground. The amount of engaging and continually fresh motion required to overcome the always-changing obstacles was absolutely unequaled at the time.

There were moments when I would be idle, pondering how it was possible to pack so much brilliant design into one thing. I would then be so tremendously impressed by other games of the time, notably Zelda and Super Mario Bros. 3, that I would become a big fan of Japanese game design for life, investigating and following the individual game creators. Since then, I slowly noticed that the Japanese, and some other Asians, have a unique sensibility that gives their games a peculiar flavor. Many people - mainly Westerners - don't resonate with this peculiar uniqueness, which rarely exists in Western games. And that is ok, just interesting. It is like they can achieve a laser focus on a particular set of simple, essential human emotions. It can be found in how they draw things, the stories they write, etc. That particularly inspires some people.

For many people, Nintendo games of the time were so dramatically better designed than the standards of the day that they were like gold next to the others' dirt. That was an important factor, possibly the biggest, in the creation of Generation Nintendo. It is very interesting to learn what conditions and events allowed Nintendo, the company, to come up with these absolutely brilliant products at the time, and to achieve legendary status among so many people.

tboyd47 17 minutes ago 0 replies      
One thing I've appreciated more about Nintendo as I've grown older is the culture war that was going on over Mario.

For example, no Mario game ever features Brooklyn, even once. Yet the American movie and cartoon show are all about Mario being from Brooklyn. Then when Mario 3 came out, not only is he not from Brooklyn, but he wears a cat suit and turns into a Buddhist statue. That's a big f*-you message to America from Japan if I ever saw one. Pretty hilarious.

0x0 12 hours ago 0 replies      
The lockout chip mentioned is detailed in wikipedia: https://en.m.wikipedia.org/wiki/CIC_(Nintendo)#10NES

Some highlights include unlicensed cartridges using a "voltage spike" to knock out the chip, and the patent expiring in 2006.

studentrob 4 hours ago 1 reply      
Wow cool, so Nintendo survived because the US didn't bomb Kyoto.

I hope we never see that kind of destruction again.

BTW, I bicycled around Kyoto once and could not find any trash, even a straw wrapper. The Kyoto protocol is aptly named. I highly recommend visiting there if you have a chance.

nickpsecurity 12 hours ago 1 reply      
This is amazing. It's one of those moments where the actions of a single person change the course of history for the better. I mean, I'm sure American companies might disagree. ;) Yet I'm glad they showed up, knocking down our industries' garbage with higher quality or effectiveness. It was a nice reality check whose individual cases could've been endured and/or countered earlier if American executives weren't so arrogant. For autos and such, that is, as Nintendo's offering was pretty forward-thinking.

More amazing is that the founder also invented the ramen concept and motels dedicated to pimping in a small span of time. Plus, really cool toys. An innovative guy and company. Very unique.

EDIT: Forgot to add a piece about absolutely hating Nintendo for their contribution to DRM and lock-in. Sometimes I want innovators to think less. Bunch of A-holes. ;)

chipsy 10 hours ago 0 replies      
W/R to "What If's", I think the big one is "What If Atari Were Stronger/More Focused". They tried to grow into a "manufacturing conglomerate" rather than a "consumer design" company. It misused their original branding and culture. For this the Warner acquisition deserves a large share of the blame, even as it seemed necessary for financing the VCS launch. Either way, by the time the company was at its most profitable, it was already rotten from mismanagement.

Had they found a path towards a more contemporary-Apple-like management style, and some foresight for the third-party situation and the console lifecycle, they would have broken away from the scattershot focus, released a good, DRM-protected, backwards-compatible update to the VCS in the early 80's instead of the half-baked 5200/7800 efforts, insulated themselves from the arcade crash with original console software, and starved Nintendo's air supply at retail.

They might have even still released computers, just of a different sort - perhaps they would have kept Jay Miner's team and had an Atari-branded Amiga, with a console version released soon after(c. 1985-86) instead of 8 years later(CD32 - 1993).

Dynamic Swift mjtsai.com
25 points by mpweiher  13 hours ago   24 comments top 7
chris_7 11 hours ago 4 replies      
Not sure I see what's going on here (it's just a blog post linking to other blog posts?), but is this arguing that Apple should turn the safety of Swift back into the unsafe mess of Objective-C? Horrifying stuff like swizzling, respondsToSelector tricks, etc.? That's what Swift is trying to get away from!

Some of the classes in Apple's frameworks need to be overridden, some of them cannot be overridden, and some of them don't bother to call their overridden accessors - it's a mess! You can only discover this stuff by reading the documentation (if it's even there), when it ought to be enforced by a strict compiler.

supster 12 hours ago 1 reply      
Relevant reply by Chris Lattner on the Swift Evolution mailing list: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...
kylnew 10 hours ago 0 replies      
I think ObjC has left us with some dynamic habits we don't actually always need, particularly in all non-UI layers. I appreciate the way using Swift has me questioning just how much dynamism and mutability my programs really need. Overuse of things like setValue:forKey: and performSelector: has a bit of a code smell anyway, and there's no reason you can't replicate these kinds of behaviors, if you really need to, with Swift dictionaries.
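The "dictionary instead of performSelector:" point generalizes beyond Swift: an explicit dispatch table maps names to functions, so an unknown action fails loudly at the lookup rather than via a reflective message send. A language-agnostic sketch in Python (all names here are illustrative, not from any framework):

```python
# Explicit dispatch table: the static alternative to reflective
# "perform selector by name" calls.
handlers = {
    "refresh": lambda: "refreshed",
    "save": lambda: "saved",
}

def perform(action):
    # Raises KeyError on an unknown action - a loud, early failure,
    # unlike sending an unrecognized selector at runtime.
    return handlers[action]()

print(perform("save"))  # -> saved
```

The trade-off is that every dispatchable action must be registered up front, which is exactly the kind of explicitness the Swift side of this debate is arguing for.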
adamnemecek 11 hours ago 1 reply      
"Great web frameworks like Ruby on Rails, for example, can't be built without relying on a more dynamic language."

You don't necessarily want dynamism, you want metaprogramming.

DHowett 11 hours ago 0 replies      
> Leaning on the Objective-C runtime feels like a temporary solution because it only exists on the Mac and iOS.

That's not entirely true; libobjc2[0] exists, for example, and is fully compatible with objc4.

[0]: https://github.com/gnustep/libobjc2

ebbv 9 hours ago 1 reply      
I don't want to call someone I don't know a bad developer, but as a general rule if software is moving more towards safety and reliability and you are resisting that change that's a sign there's a problem with your habits and preferences. Not that the decisions being made are bad.

Swift is not a perfect language; there's a lot of low-hanging fruit (some of which is being dealt with in Swift 3), but its lack of support for dynamic behavior is not one of those problems.

comex 10 hours ago 2 replies      
The posts in the first linked series (http://inessential.com/2016/05/) seem to mostly focus on how dynamism can avoid the need for boilerplate. For example, he states that the following code is tedious to write and error-prone:

  if localObject.foo != serverObject.foo {
      localObject.foo = serverObject.foo
      changeDictionary[fooKey] = serverObject.foo
  }
  if localObject.bar != serverObject.bar {
      localObject.foo = serverObject.bar
      changeDictionary[barKey] = serverObject.bar
  }
But there's another language feature that can replace boilerplate, without dynamism and its associated performance and type safety penalties. A feature that's reviled by many, and was omitted from Swift, but serves this purpose perfectly: macros. What if you could write this?

  macro_rules! merge {
      ($localObject:expr, $serverObject:expr, ($($prop:ident),+)) => {
          $(
              if $localObject.$prop != $serverObject.$prop {
                  $localObject.$prop = $serverObject.$prop;
                  changeDictionary[stringify!($prop)] = $serverObject.$prop;
              }
          )+
      }
  }

  merge!(localObject, serverObject, (foo, bar, baz));
Actually, this is valid Rust code, as you might have guessed if you know that language. I picked it because it has similar syntax to Swift and a powerful, hygienic macro system. (This case is simple enough that a C macro would also work fine without any ugly arcane stuff - you'd have to have a separate macro invocation for each property, but whatever. But Rust's system scales better to slightly more complicated scenarios.)

Admittedly, not everything you can implement with dynamism is as easy to replace with macros. Many things are essentially impossible with the simple pattern matching language used above. In Rust, there is also the option to write a compiler plugin that can do arbitrary-ish things with ASTs, so almost anything is possible with enough work, including things you can't implement with dynamism, but fiddling with AST transformations is hairy enough that boilerplate looks a lot more attractive, relatively speaking.

Even so, I bet macros would satisfy most of the people advocating for "Dynamic Swift". And for the people who hate them - do you really think doing crazy things at runtime is any safer or easier to understand? It's certainly slower.

(By the way, you can implement dynamism with macros. For example, you could autogenerate functions like 'setProperty(name: String, val: Any)' that switch on the name and do the appropriate casting and such. Though depending on your attitude, you might consider this crucial enough that it should be built into the language.)

Autonomous Mini Rally Car Teaches Itself to Powerslide ieee.org
203 points by Osiris30  18 hours ago   36 comments top 13
gear54rus 9 hours ago 1 reply      
Oh wow. Everything, from watching the end result to the tech exposure, is so very well done here. You just have to appreciate the way they told us about this; it has everything:

- Source

- H/W specs

- Basic docs, build instructions

- Video of success

- Video of failure

- Side-by-side simulation and outcome

Makes it easy to understand even for someone who isn't well-versed in how such systems operate and what they're made of.

bwilliams18 13 hours ago 1 reply      
For those who haven't heard, Formula E, the electric racing series, is putting together a support race called Roborace: http://roborace.com/ All the cars are autonomously driven and designed for exactly that purpose. It should be exciting.
jaytaylor 14 hours ago 2 replies      
Pretty cool that they have the source code up on github: https://github.com/AutoRally/autorally

Looks like it's mostly C++, Python, and Arduino.

I wish I could find where the power slide learning algorithm is, but browsing around I'm not sure where that portion is in the project.

Aelinsaar 15 hours ago 1 reply      
There's something exciting about watching that, and knowing that it was software figuring that out, and not a human. Even more exciting is that sort of offhanded hope at the end of the article, because yeah, we're eventually going to see a period when human and machine drivers compete. I don't know how long that period will last; maybe it will be like the history of humans vs computers at chess, or maybe it will be more like Go.
ChuckMcM 14 hours ago 1 reply      
That is an excellent result. Back in the 90's I converted a Radio Shack Wild Cougar (R/C truck) to autonomous control with a 68HC11 and had to slow it down under a walking pace just to keep it from crashing (it nominally went 20mph when you just controlled it manually). To consider the difference in compute over those 20 years is really quite stunning.

I particularly liked the lack of back propagation in the design (keeps things fast) but was really curious how they determine "optimal" (center of track). So that makes me wonder if the algorithm would learn a "line" through the track that optimized speed at reduced energy.

Next up, multiple cars on the track!

q-base 4 hours ago 1 reply      
On a dirt track, it should be expected that the optimization algorithm at some point starts powersliding. The current generation may see powersliding as being for show, and drifting comes to mind, but there are some very good reasons why rally drivers do this, and it is not for showing off. First: it's actually faster on loose surfaces. So of course, if you design an algorithm optimized for loose track surfaces, you have to take this into account.

Secondly, at least for human drivers, the sliding keeps you in control. Driving very fast on loose surfaces is bound to make your vehicle slide around. By initiating these slides yourself, you are in control - you can feel the amount of grip available, so by oscillating over and under the limit you can keep your speed high while still being in control. This last aspect is by far the most difficult and interesting part of designing such an algorithm, in my opinion. This is where the "Scandinavian flick", for instance, comes into play. But I will have to disagree with a commenter below who said this was done at 4:34 - that is more a correction of the exit of the last turn than a true Scandinavian flick. Solving the oscillating aspect of dirt driving through driver feel is something I have a very hard time believing can be solved by algorithms. Driving around on hard surfaces where the friction is somewhat known, I could see algorithms come close to humans, but on loose surfaces it seems quite a long way into the future, if ever.

bigiain 10 hours ago 0 replies      
Prediction: Uber acqui-hires/licenses this "Aggressive Driving with Model Predictive Path Integral Control" to make Uber trips more like taxi rides...


dclowd9901 15 hours ago 1 reply      
There was a moment where the robot accidentally did a Scandinavian flick (4:34 or so). I wonder if it did that through random pattern injection, and could learn from it.
soared 15 hours ago 1 reply      
Does anyone have more videos where I can watch machines learn?
Anonjt87rCh52z 6 hours ago 0 replies      
Can anyone see what the cameras are being used for? Looking at the github I see GPS and IMU being fused but nothing about the cameras.
sheraz 15 hours ago 0 replies      
Not at all in my areas of expertise or hobbies, but this is really cool. Gives me the feeling that I really live in the future.
EGreg 4 hours ago 0 replies      
Are robotic cars close to outperforming human drivers on short distances?

Watching Jason Statham movies like The Transporter and The Fast and The Furious sets a pretty high bar when it comes to fictional getaway drivers.

B1FF_PSUVM 16 hours ago 3 replies      
> the little cars maximize their speed while keeping themselves stable and under control. Mostly.

You let an optimizer do its work within whatever constraints you want, it will nose out a solution. If the goal is to maximize speed without regard to fuel and rubber consumption, power slides are in the cards.

No need to "teach" or "learn". Math works.

$12M stolen from 1,400 convenience store ATMs across Japan in 2 hours mainichi.jp
279 points by sjreese  1 day ago   181 comments top 16
ghshephard 21 hours ago 8 replies      
Interesting thing about Japan, is the complete acceptance of people wearing surgical masks in public (it's considered to be polite if you are ill). Makes it a lot harder to identify people from video surveillance.
level09 21 hours ago 1 reply      
Would the South African bank be held accountable for this? Or can they get away with it since the cards are fake/stolen?

The article makes it seem as if the banks that own the ATMs are the ones who lost the money.

I'm also a bit surprised the criminals carried out their operation in Japan. It would have been easier in a messier place, e.g. India / Africa?

TheAlchemist 19 hours ago 1 reply      
Interestingly this is not the first time it happens. 3 years ago:http://www.nytimes.com/2013/05/10/nyregion/eight-charged-in-...
underdendride 20 hours ago 0 replies      
Can I speculate that this is an untraceable form of tax protection payment from 7 Bank to the Yakuza?
allisthemoist 16 hours ago 1 reply      
100 people involved? That's a lot of people who could potentially slip up.
ryanl0l 22 hours ago 1 reply      
A more accurate title would probably be "120M stolen from hacked South African bank via 1,400 convenience store ATMs across Japan"
vegabook 20 hours ago 3 replies      
14,000 transactions at 1,400 ATMs in 2 hours?! Think about the logistics. That's about 120 transactions per minute. And 1,600 cards. Either an army of coordinated people, which in itself is highly risky as it vastly increases the likelihood of someone grassing on the group, or fewer people all sitting at the same ATMs drawing for 2 hours... difficult not to get noticed. If they really pulled this off, it will be the most well-organized organized crime ever.
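The logistics figures above fall out of simple arithmetic, assuming the article's ¥1.4 billion total, a ¥100,000 cap per withdrawal (an assumption here, not stated in this article), and the 2-hour window:

```python
# Assumed figures: ¥1.4B total haul, ¥100,000 per withdrawal,
# a 2-hour (120-minute) window.
total_yen = 1_400_000_000
per_withdrawal_yen = 100_000
window_minutes = 120

transactions = total_yen // per_withdrawal_yen
rate_per_minute = transactions / window_minutes

print(transactions)               # -> 14000
print(round(rate_per_minute, 1))  # -> 116.7
```

So "about 120 transactions per minute" holds: roughly 117 withdrawals every minute, sustained for two hours, across 1,400 machines.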
ck2 13 hours ago 0 replies      
How on earth does the system absorb a nearly $13M loss now?
sjreese 19 hours ago 1 reply      
This is the only declassified story I could find. The 120M number is real and BMO is on full alert.
logicallee 20 hours ago 0 replies      
Can a mod change the title to use the amt listed in the first paragraph?

Current title: 120M stolen from 1,400 convenience store ATMs across Japan in 2 hours

First paragraph of article:

>TOKYO (Kyodo) -- A total of 1.4 billion yen ($12.7 million) in cash has been stolen from some 1,400 automated teller machines in convenience stores across Japan in the space of two hours earlier this month, investigative sources said Sunday.

Suggested title: $12.7 million stolen from 1,400 convenience store ATMs across Japan in 2 hours

imaginenore 22 hours ago 4 replies      
I'm surprised there are still ATM cards without chips, and in Japan of all places.
Bootvis 22 hours ago 3 replies      
The article has the amount at USD 12,7m. That's a lot of money but quite a bit less than the title suggests.
nametakenobv 22 hours ago 0 replies      
How would that make a difference?
known 19 hours ago 0 replies      
How did they get passwords?
mtgx 17 hours ago 1 reply      
Let me guess they run Windows XP Embedded?
shrugger 23 hours ago 8 replies      
Amateur hour.

If you want to rob a bunch of ATMs and get away with it, try keeping your vulnerable window longer than 2 hours...

I mean, it's going to be pretty straightforward to gather a bunch of footage and see what happened those 2 hours. These guys will get busted within the next few days basically guaranteed.

Reverse Engineering a Mysterious UDP Stream in My Hotel gkbrk.com
1376 points by gkbrk  2 days ago   175 comments top 32
Animats 1 day ago 4 replies      
At least they play music. I once stayed in a Howard Johnson's Motor Lodge in Pittsburgh, near CMU, where, at corridor intersections, they had speakers generating ambient noise to mask voices from the rooms. Most places that do this use white noise or water sounds. But this was Pittsburgh. They were playing faint machinery noises - whirr, chunk, etc. At first I thought someone had left a PA microphone open in the boiler room or something, but no, it was deliberate.
tharshan09 1 day ago 7 replies      
Can you send your own UDP packets to the elevator then?
janci 1 day ago 3 replies      
Now I know what to do in a hotel!


tonyedgecombe 1 day ago 4 replies      
I was in an elevator a while ago when there was a ringing followed by a voice trying to sell PPI services (a regular source of spam in the UK). The emergency system was just an embedded phone.
dkopi 1 day ago 2 replies      
Binwalk is a great tool for finding potential files within a given binary stream: https://github.com/devttys0/binwalk

It has an incredible list of supported file types.
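Binwalk's core trick (sliding over the bytes and matching known file signatures) is simple to sketch. Here is a minimal, illustrative Python version that checks for just one signature, the MPEG audio frame sync that would flag a captured stream as MP3; the sample payload at the bottom is fabricated for demonstration:

```python
def find_mpeg_frames(data: bytes):
    """Yield offsets where an MPEG audio frame header plausibly starts.

    An MPEG audio frame header begins with 11 sync bits set (0xFF, then a
    byte whose top 3 bits are all 1), and the version, layer, and bitrate
    fields must not hold their reserved/invalid values.
    """
    for i in range(len(data) - 3):
        if data[i] == 0xFF and (data[i + 1] & 0xE0) == 0xE0:
            version = (data[i + 1] >> 3) & 0x03  # 01 is reserved
            layer = (data[i + 1] >> 1) & 0x03    # 00 is reserved
            bitrate = (data[i + 2] >> 4) & 0x0F  # 1111 is invalid
            if version != 0x01 and layer != 0x00 and bitrate != 0x0F:
                yield i

# A fabricated payload: three bytes of garbage, then something shaped
# like the start of an MPEG-1 Layer III frame (0xFF 0xFB ...).
payload = b"\x00\x12\x34" + b"\xff\xfb\x90\x64" + b"\x00" * 16
print(list(find_mpeg_frames(payload)))  # -> [3]
```

Real binwalk matches hundreds of signatures this way, which is why it's the first thing to throw an unknown dump at.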
daveguy 1 day ago 2 replies      

Revelation/Disappointment -- it is elevator music.

Or is it? Maybe he gave up too quick. Maybe that is how they disguise the secret spy transmissions!


ReedJessen 1 day ago 0 replies      
This is really well written. Good length for one quick bus ride. I was on pins and needles until the end. Great blog post.
omash 1 day ago 1 reply      
That was just the carrier, hiding the steganographic payload.
mhd 1 day ago 0 replies      
You had to follow that shaggy dog a long time, but it finally led you to the girl from Ipanema.
detaro 1 day ago 3 replies      
Reminds me how surprised I was when I found out that many IP phone installations use multicast strictly to distribute on-hold music to all phones, instead of the phones pulling files from a server or storing it locally.
SoonDead 1 day ago 1 reply      
Excellent story, although the URL spoiled it for me; it might be worth changing it to something more vague.
gnicholas 1 day ago 0 replies      
A story with a decidedly less innocuous outcome: https://medium.com/@nicklum/my-hotel-wifi-injects-ads-does-y...
kw71 1 day ago 2 replies      
Next time you see a multicast stream, try playing it with vlc.
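If VLC isn't handy, a few lines of Python do the same join-and-listen. A minimal sketch; the group address, port, and dump filename below are placeholders, so substitute whatever your packet capture actually shows:

```python
import socket
import struct

def open_multicast_socket(group: str, port: int,
                          iface: str = "0.0.0.0") -> socket.socket:
    """Return a UDP socket bound to `port` and joined to multicast `group`.

    This mirrors what VLC does under the hood when you open udp://@group:port.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # ip_mreq: the group address and the local interface address, packed together.
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Hypothetical usage -- dump the raw payload for later identification:
# sock = open_multicast_socket("239.255.10.1", 5000)
# with open("stream.dump", "wb") as out:
#     while True:
#         out.write(sock.recv(65535))
```

A few seconds of dumped payload is usually enough to identify the codec, by ear in a player or by signature with binwalk.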
ngcazz 1 day ago 0 replies      
As someone who doesn't really grok either Python or IP programming, I really enjoyed how simple yet educative (and ultimately, funny) this blog post was. Definitely will be trying this exercise the next time I end up at a hotel with a laptop.

I wonder how hard it might be to hijack the stream to have the receivers play your own packets.

buttershakes 1 day ago 0 replies      
It's probably a microphone hidden in his room that encodes the data steganographically into elevator music.

More seriously, no investigation as to what happens when you try to inject your own data?

Namidairo 1 day ago 0 replies      
Have you tried throwing the raw recording against SoX? Worked pretty well when I had an unknown recording to play for which I had guessed the format. (Which turned out to be IMA-ADPCM with reversed byte ordering)
nowprovision 1 day ago 1 reply      
Ah damn, not a network guy, but couldn't they put the lifts on their own subnet and avoid polluting the constrained airwaves? Good read :)
deepsun 1 day ago 2 replies      
Does it drain battery of mobile devices not listening to the port?

Have you tried multicasting your own audio to the same port? That might have been fun.

caylorme 1 day ago 0 replies      
Now just copy the headers and send some multicast audio of your own to hijack the elevator and bathroom audio :)

Could make for a good prank
djabatt 1 day ago 1 reply      
Great network engineering. I think we all need a set of tools that let us find out if we are being bugged by all the networked smart TVs, printers, VOIP phones, etc. Not to mention the Amazon Echo. I'd dig it if someone had a Wireshark/Python app that allowed everyone to listen to what the Amazon Echo was sending to Amazon.
rocky1138 20 hours ago 0 replies      
Out of general interest, do you have a copy of the mp3 we could listen to? I couldn't find one in the article.
known 1 day ago 1 reply      
I'd start with

tcpflow -p -C -i eth0 port 80 | grep -oE '(GET|POST|HEAD) .* HTTP/1.[01]|Host: .*'

daveheq 1 day ago 2 replies      
I love knowing I get spammed with elevator music over WiFi at a hotel the whole time.
cmarcond06 1 day ago 2 replies      
Is this legal? What if the media was from Hotel Cameras?
chris_wot 20 hours ago 0 replies      
Er, if they are transmitting data to a multicast IP address, what's to stop you doing the same?

If they haven't done a check on the IP address that they are receiving the data from, then it would now be trivial to panic people in elevators by recording a fake emergency broadcast.

Not just that, but what else is that hooked up to? If they are multicasting to your IP address and your IP address isn't the lift, then you can do some IGMP snooping to see what else there is out there. Or you could do a DoS on the lifts to see what happens.

Of course, it might be nothing. But when I get in a lift, I'd hope this sort of thing wasn't possible.
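Sending to a multicast group really is symmetric with receiving, which is why an unauthenticated stream like this is trivial to spoof. A hypothetical sketch of the sender side; the group, port, bitrate, and filename are all invented, and actually injecting into a network you don't own is likely illegal:

```python
import socket
import time

def stream_chunks(data: bytes, chunk_size: int = 1316):
    """Split a payload into datagram-sized chunks. 1316 bytes is the
    conventional size for MPEG-over-UDP (7 x 188-byte TS packets)."""
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

def inject(group: str, port: int, data: bytes, bitrate_bps: int = 128_000):
    """Send `data` to the multicast group, paced to roughly `bitrate_bps`
    so receivers play it in real time instead of dropping a burst."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the packets on the local network segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    delay = 1316 * 8 / bitrate_bps  # seconds per full chunk
    for chunk in stream_chunks(data):
        sock.sendto(chunk, (group, port))
        time.sleep(delay)

# Hypothetical usage, on your own lab network only:
# inject("239.255.10.1", 5000, open("own_audio.mp3", "rb").read())
```

Whether the receivers accept it depends on whether they check the source address at all, which, as the comment above suspects, such embedded gear often doesn't.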

KillerRAK 1 day ago 0 replies      
I applaud the effort and your curiosity. Any good tunes on that stream? Perhaps you're working on a remix?
selectiveupvote 1 day ago 0 replies      
Okay, I got a good laugh out of this one and appreciate it.
binaryanomaly 1 day ago 0 replies      
Hehe, nice story! Congrats ;)
lynxaegon 1 day ago 0 replies      
haha! Best reverse engineering of elevator music ever :)
m00dy 1 day ago 0 replies      
Nice story
matiasb 1 day ago 0 replies      
It was an ex-NSA agent talking to his mom :)
punnerud 1 day ago 0 replies      
Do you know you are a nerd when you laugh out loud after reading this?

My girlfriend: Was that something I would also laugh at?

Me: Most likely not ;)

Hiroshima (1946) newyorker.com
157 points by canjobear  15 hours ago   154 comments top 18
partycoder 12 hours ago 11 replies      
There are many views on the bombing. The difference is that Americans do not hide it. It's controversial, but people know about it.

Same with Germans. People know about how the Germans behaved in WW2, they're not proud about it, it's learned in schools, and they do not hide it.

Japan, on the other hand, actively hides its actions. Japanese schools present a revisionist version of history in which all their horrible and brutal war crimes never happened. The Japanese people promote a vision of Japan with a tradition full of honor and virtue. Do they teach their kids that they dropped bombs carrying the bubonic plague? Or the massacre of Nanking? The mass beheadings of people? The sexual slavery? And many other horrible crimes, even targeting kids... probably not. This is the difference.

Hondor 12 hours ago 3 replies      
It's funny how war is always glorious from the point of view of the aggressor and always sad from the point of view of the aggressed, even when they're the same person! How we feel depends on whose story we're reading at the time.

"She had not had an easy time. Her husband, Isawa, had gone into the Army just after Myeko was born, and she had heard nothing from or of him for a long time, until, on March 5, 1942, she received a seven-word telegram: Isawa died an honorable death at Singapore. "

In case you don't know what her husband would have contributed to in Singapore - "between 25,000 and 50,000 ethnic Chinese in Singapore and Malaya ... were rounded up and taken to deserted spots around the island and killed systematically." https://en.wikipedia.org/wiki/Japanese_occupation_of_Singapo...

sandworm101 10 hours ago 1 reply      
WWII is falling out of living memory very quickly. It should be of no surprise that it is becoming ever more glorified. Europe is shifting dangerously to the right. The US is about to elect a hawk (both of them are hawks). An Asian nation (this time China) is talking expansion, in influence if not in literal territory. The western world is split on the issue of migrants fleeing local conflicts. And everyone talks of tightening borders. All the ducks are lining up nicely for the day we forget the lesson.

There is only so much room in Canada.

canjobear 13 hours ago 1 reply      
It took me about an hour to read this whole thing. If you are like me and find the Hiroshima event fascinating, I highly recommend doing this. The piece follows several people who ended up in a park together the night after the bomb, and hauntingly captures the mood of that night and the horrifying physical details of what people went through. I definitely think I'll remember some images from this article for a while.
Trombone12 13 hours ago 1 reply      
The following summary by Alex Wellerstein represents his understanding of the consensus view. Notably, the idea that the bombings were carefully motivated by some sort of ethical calculus is not something that historians now believe. From http://blog.nuclearsecrecy.com/2013/03/08/the-decision-to-us...

* It's not really clear that Truman ever made much of a decision, or regarded the bomb/invasion issue as being mutually exclusive. Truman didn't know if the bomb would end the war; he hoped, but he didn't know, couldn't know. The US was still planning to invade in November 1945. They were planning to drop as many atomic bombs as necessary. There is no contemporary evidence that suggests Truman was ever told that the casualties would be X if the bomb was dropped, and Y if it was not. There is no evidence that, prior to the bombings of Hiroshima and Nagasaki, Truman was particularly concerned with Japanese casualties, radiation effects, or whether the bombs were ethical or not. The entire framing of the issue is ahistorical, after-the-fact, here. It was war; Truman had atomic bombs; it was taken for granted, at that point, that they were going to be used.

* Defeat is not surrender. Japan was certainly defeated by August 1945, in the sense that there was no way for them to win; the US knew that. But they hadn't surrendered, and the peace balloons they had put out would have assumed not that the Emperor would have stayed on as some sort of benign constitutional monarch (much less a symbolic monarch), but would still be the god-head of the entire Japanese country, and still preserve the overall Japanese state. This was unacceptable to the US, and arguably not for bad reasons. Japanese sources show that the Japanese military was willing to bleed out the country to exact this sort of concession from the US.

* American sources show that the primary reason for using the bomb was to aid in the war against Japan. However, the fact that such weapons would be important in the postwar period, in particular vis-à-vis the USSR, was not lost on American policymakers. It is fair to say that there were multiple motivations for dropping the bomb, and specifically that it looks like there was a primary motivation (end the war) and many other derivative benefits that came from that (postwar power).

* Japanese sources, especially those unearthed and written about by Tsuyoshi Hasegawa, make it clear that prior to the use of the atomic bombs, the Japanese cabinet was still planning on fighting a long battle against invasion, that they were hoping to exact the aforementioned concessions from the United States, and that they were aware (and did not care) that such an approach would cost the lives of huge numbers of Japanese civilians. It is also clear that the two atomic bombs did shock them immensely, and did help break the stalemate in the cabinet, but that the Soviet invasion of Manchuria also shocked them immensely, perhaps equally, maybe even more (if you have a choice between being occupied by Truman or occupied by Stalin, the decision is an easy one). But there is no easy way to disentangle the effects of the bombs or the Soviet invasion; in this sense they were both immensely influential on the final decision. That being said, using the bomb as an excuse (as opposed to "we are afraid of Russians") did play well with the Japanese public and made surrender appear to be a sensible, viable option in a culture where surrender was seen as a complete loss of honour.

Aelinsaar 14 hours ago 2 replies      
It must have been a surreal experience, to walk through the burning remains of your world, without an explanation as to "How".
mevile 14 hours ago 7 replies      
> In the street, the first thing he saw was a squad of soldiers who had been burrowing into the hillside opposite, making one of the thousands of dugouts in which the Japanese apparently intended to resist invasion, hill by hill, life for life; the soldiers were coming out of the hole, where they should have been safe, and blood was running from their heads, chests, and backs. They were silent and dazed.

However I may feel about the bomb having been used, I don't doubt it saved more lives than it took. As for all the civilians and children and other non-combatants, I doubt the tragedy would have been less without the bomb. It's hard for me to reconcile how such an obvious force of destruction could be more desirable than an invasion, and that's probably because it's very hard for me to understand the destruction an invasion creates. An invasion's impact isn't as easy to put into neat pictures and anecdotes as a singular experience that a bomb explosion creates.

There's some question if the ramp up of the Soviets on their eastern border had more to do with Japan's surrender than the atomic bombs we dropped. I'm not sure there's a clear answer there.

ksou32 12 hours ago 3 replies      
It's very easy to judge past America 60 years after the moment.

Say you're about to be shipped out to run up a hill and get killed by Japanese machine guns.

Wouldn't you prefer Truman use a super weapon to end the war?

eceppda 12 hours ago 1 reply      
The US started targeting defenseless civilians in Japan via fire bombing, and the majority of Japan's cities had already been destroyed by the time they deployed the atomic bombs. The attacks on civilians in Japan were the first examples of what is now standard procedure in the air force, now known as "strategic bombing".

Mark Selden, a researcher at Cornell University, wrote an excellent article detailing this: http://apjjf.org/-Mark-Selden/2414/article.html

He points out how the attacks on civilians in Dresden, for example, were met with shock by Europeans at the time. But the Americans never reacted as strongly to the same bombings in Japan.

Several cities of civilians were destroyed in the Korean War. In Vietnam, city bombings and chemical weapons were deployed against civilians. Carpet bombing was the term for targeting civilians in Iraq I. Etc etc etc. It's all basically the same.

rootbear 10 hours ago 0 replies      
I read the book version of this essay shortly before visiting Japan. It is strange to stand next to the one preserved ruined building at ground zero and think that there was once an atomic fire storm in that spot.
studentrob 10 hours ago 0 replies      
Cool. I hope Americans remember this as we consider electing someone who thinks nuclear war is a bargaining tool.

"If they do, they do" is not something a President should be saying about other nations engaging in nuclear war. Go mess up someone else's planet, Trump.

KamiCrit 7 hours ago 0 replies      
Wow, that was a great read. Really shocking what it took to survive Hiroshima: the bomb, fires, flooding, radiation sickness.
unknown2374 13 hours ago 0 replies      
And the states continue to get away with things like this to this day.
oska 11 hours ago 0 replies      
The Bomb Didn't Beat Japan... Stalin Did [1]

[1] http://foreignpolicy.com/2013/05/30/the-bomb-didnt-beat-japa...

tiredwired 12 hours ago 0 replies      
It's ok, everything worked out in the end. US & Japan turned the whole relationship thing around.
microcolonel 12 hours ago 1 reply      
I read through this, revisited some of the images of the aftermath.

I don't think I'll be able to eat meat today, my stomach is turning and I feel like I could vomit.

It is so painful figuring that this is one of the "better" possible outcomes of the war.

dang 9 hours ago 0 replies      
Personal attacks and name-calling are not allowed on Hacker News, regardless of how wrong someone is.

We detached this comment from https://news.ycombinator.com/item?id=11750599 and marked it off-topic.

dmfdmf 11 hours ago 2 replies      
I have no problem with America bombing Japan with nuclear weapons. They (and Germany) deserved what they got, they started a world war and we finished it. All this American angst and apologia is coming from the anti-American intellectuals in the US. They have been spreading their propaganda unopposed for decades. The US is NOT GUILTY and owes no apologies whatsoever, Obama's pathetic mea culpa trip to Japan notwithstanding. For all you rationalists who like thought experiments (pretentiously called gedanken experiments by the same intellectuals); imagine what the world would be like if Germany and the Japanese had won WWII.