Hacker News with inline top comments, 10 Jul 2015
The Coder Who Encrypted Your Texts wsj.com
115 points by eas  2 hours ago   52 comments top 14
1
moxie 2 hours ago 11 replies      
I get a lot of credit for the stuff that Open Whisper Systems does, but it's not all me by a long shot. Trevor Perrin, Frederic Jacobs, Christine Corbett, Tyler Reinhard, Lilia Kai, Jake McGinty, and Rhodey Orbits are the crew that really made all this work happen.
2
sergiotapia 1 hour ago 1 reply      
> Unfortunately, if Mr. Marlinspike's encryption scheme can be applied to imagery, then child porn collectors thank him too.

And there we go, highest voted comment on the article: a strawman about child pornography. Think of the keeeds

3
nly 6 minutes ago 0 replies      
Didn't WhatsApp stop encrypting SMS a while back? If you lose data, you're sending in the clear, right?
4
iamthebest 1 hour ago 2 replies      
I tried installing TextSecure recently but it wouldn't work without the Google Play services.

I hadn't heard of their new app Signal. Has anyone tried it? I'm really interested in hearing anyone's experience using it.

BTW, I ended up installing Telegram ...and it may be mere co-incidence, but I started noticing some weird things happening that I've never seen before. I connect to the internet exclusively via tethering to my phone and while tethered I started seeing messages in Firefox from my desktop machine giving warnings that were something like "Could not establish secure connection because the server supports a higher version of TLS". My guess is that it was some sort of MITM attack... and I was possibly targeted due to the traffic to Telegram servers.

One other thing regarding Telegram: I really don't like that it reads my contact list and uploads it to their server to check if my contacts have a Telegram account. I've blocked the permission for now.

5
abalone 1 hour ago 0 replies      
I've had a ton of respect for Marlinspike ever since he published sslstrip, an incredibly simple defeat of HTTPS.[1]

It's a perfect demonstration of the fundamental insecurity of the web thus far. When an insecure communication mode (HTTP) is the default and perfectly OK most of the time, the browser has no idea when you are supposed to be operating on a secure channel (HTTPS) but have been tricked into downgrading by a man-in-the-middle attack.

I can't prove it but I believe his work is a significant factor behind the shift towards deprecating HTTP in favor of HTTPS all the time. That is the only real solution.

[1] http://www.thoughtcrime.org/software/sslstrip/

6
justcommenting 57 minutes ago 0 replies      
Kudos to moxie and team for their work and their example of positively enabling others to speak freely, for inspiring others to build better alternatives, and for being the change they wish to see in the world.

Also wanted to share one of the most provocative moxie-isms I've heard in recent years from him, in reference to WL:

"What about the truth has helped you?"

7
lisper 1 hour ago 1 reply      
Not that I really want to steal any of Moxie's thunder, but if you're reading this comment thread you might also be interested in SC4:

https://github.com/Spark-Innovations/SC4

Strong encryption that runs in a browser. Recently completed its first security audit.

8
yuhong 2 hours ago 1 reply      
I am thinking about why encryption was only used by the military in the first place, back when the infamous Bell monopoly on phone service existed. I think cracking encryption was one of the reasons computers were created in the first place, right?
9
patcon 2 hours ago 1 reply      
Thank god this man exists.
10
PhantomGremlin 1 hour ago 0 replies      
Great article, not paywalled.

Here's the thing that Moxie recognizes, that many other programs don't (in any domain):

 He says he wants to build "simple, frictionless" apps, adopting a Silicon Valley buzzword for easy to use.

11
btczeus 1 hour ago 2 replies      
There isn't any evidence of encryption on WhatsApp; the source code is closed, so you can never be safe.
12
mayneack 1 hour ago 0 replies      
Whisper the app is unrelated to Whisper Systems.
13
btczeus 1 hour ago 4 replies      
This guy is not part of the solution. He is part of the problem. https://f-droid.org/posts/security-notice-textsecure/
14
tedunangst 1 hour ago 0 replies      
Four comments in as many minutes. You're on a roll!
Solving multi-core Python lwn.net
38 points by ot  1 hour ago   13 comments top 5
1
BuckRogers 4 minutes ago 0 replies      
What problem does this solve? It does not solve the multicore "problem". I'm not a huge fan of this proposal. A few reasons.

This doesn't really add much that wasn't possible before, and doesn't really solve any technical or PR issue. The PR issue will never be resolved with CPython; those people who don't understand it are free to write multithreaded Java apps. But I think explicitly spinning up pthreads should be reserved for writing systems software. I'm assuming subinterpreters means they'll be allocated as pthreads, because this is meant to "use all your cores".

The ideal solution would be to find a more implicit approach. Think gevent for pthreads or subinterpreters. That would be a lot more work to figure out; this proposal looks more like a hack. I don't think this type of "improvement" will draw people to Python 3 either.
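
As a point of reference, a minimal sketch of how CPU-bound work is usually spread across cores in CPython today: separate processes via multiprocessing, which sidesteps the GIL at the cost of serializing data between workers. The work function here is a made-up placeholder, not anything from the proposal.

    # Minimal multiprocessing sketch; crunch() is a stand-in for real CPU-bound work.
    from multiprocessing import Pool

    def crunch(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        pool = Pool()                          # one worker process per core by default
        results = pool.map(crunch, [10 ** 6] * 8)
        pool.close()
        pool.join()
        print(results)

The subinterpreter proposal aims to get similar parallelism within a single process, with cheaper sharing than pickling data across process boundaries.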

2
maxerickson 1 hour ago 0 replies      
In case it isn't obvious, this got lots of discussion on the mailing list it was posted to:

https://mail.python.org/pipermail/python-ideas/2015-June/thr...

3
mianos 47 minutes ago 0 replies      
If this comes I'll move from 2.7 to a 3.x
4
zabbadabba 1 hour ago 4 replies      
This proposal is needlessly complicated. All Python has to do is put all interpreter state into a single struct to allow for completely independent interpreter instances. Then the GIL can be discarded altogether. And finally they have to change the C API to include a PythonInterpreter* as the first parameter rather than relying on the thread-local data hack.
5
smegel 15 minutes ago 1 reply      
Wow, Python will become the new Perl, how cool is that?!

Not very.

A Garbage Collector for C and C++ hboehm.info
15 points by brudgers  35 minutes ago   1 comment top
1
eatonphil 13 minutes ago 0 replies      
Why would this outperform malloc for multi-threaded programs? Is that a property of GC'd programs in general?

Edit: That is, is it typical for GC'd programs to outperform manually memory-managed programs in multi-threaded environments?

Google Glass is Alive techcrunch.com
12 points by brianchu  30 minutes ago   2 comments top 2
1
zaiwu 0 minutes ago 0 replies      
At least this time it has a clearer target, the enterprise market. Nice try.

I think last time Google Glass failed because Google didn't demonstrate any practical scenario using this product. It's like, Google was telling us, "this is a cool product but nobody knows the application of it, so, use your imagination!" Users get lost when you define such a big scope.

This time Google should definitely learn from Hololens. Show some killer apps on Google Glass and let users feel excited about it, not only because it is cool, but also useful.

2
dmitrygr 6 minutes ago 0 replies      
What shoddy reporting!

...

"Reportedly the new version of Google Glass also sports improved battery life, partly because of the Intel Atom CPU. The specific clock speed is unknown, but this tiny SoC has proven itself by powering most Android Wear devices."

...

NONE of the current Wear devices run Atom chips. They all run ARM Cortex-A7 chips, except Moto360, which runs a Cortex-A8!

Leaked Hacking Team code to edit browser history? github.com
12 points by Vexs  1 hour ago   6 comments top 3
1
compbio 29 minutes ago 0 replies      
Let's not jump to conclusions. What is more likely? That this is generating test data for a forensics tool, or that this is planting child porn on a dissident's computer (with fixed file names and directories)?

I think it is rather cheeky (and dangerous) to accuse companies of doing such horrible stuff, when all we have is a few leaks (without context) to go on.

See for instance: https://github.com/hackedteam/rcs-common/blob/master/lib/rcs... which generates test keystrokes, test users and test programs for chat. The file names of the "planted" evidence are far too obvious, and they generate a hash for it. In short, there is nothing in this code that implies this is used to plant evidence.

2
marak830 10 minutes ago 1 reply      
Somehow I'd always figured this would be fairly easy. Would something like this be considered proof?

Now I've never tried it myself, but I can't imagine there's much (if any) security on a history?

3
Vexs 1 hour ago 1 reply      
It's hard to write titles under the char limit, but the gist of it is that the code appears to edit a user's browser history to include child porn and bomb blueprints, or something to that effect. It then edits the date and time to place it back in time.

If this code is ever verified, it seems like it could turn the verdict on some CP cases.

Why we maintain desktop apps for OS X, Windows, and a web application medium.com
64 points by mathouc  5 hours ago   45 comments top 14
1
vbezhenar 2 hours ago 4 replies      
What I don't like is a WebView distributed as a "native app". It's not a native app and it's not something I want to install; don't cheat me. Webpages should stay in the browser.

I expect a native app to be written in Objective-C (OS X/iOS user here), with very low memory usage, fast startup and offline usage. I also expect as much integration with the system as possible, native controls (not that buggy emulation without my favorite emacs-like keybindings) and native behavior.

Probably we're missing an important piece of technology: installable web apps. A website opened in a frameless browser window, identifiable as a different application which could be easily pinned to Launchpad. With all the advantages that a "separate webview" has, but with one important difference: sandboxing. So I can feel safe when I launch this application, because I don't need to trust all my files, passwords and system to another application. I've seen that kind of technology in iOS: a website bookmark can be pinned as a desktop icon, but it's just a bookmark.

2
incompatible 3 minutes ago 0 replies      
"Once they find their place in the Windows Start menu or the Mac OS Dock, they are always visible."

I don't think this is true anymore in Windows 8+. From my limited experience in using it, once you install something it disappears into a mass of icons and is never seen again. If you can remember the name you may be able to search for it.

3
hobarrera 25 minutes ago 0 replies      
On the other end, as a user, I greatly prefer desktop apps, because I don't "lose" them as much. It's easy to lose a chat tab between lots of browser windows+tabs. A single IM window is harder to lose.

And then there's the integration: both look&feel and functionality can make a huge difference.

4
jayvanguard 3 hours ago 0 replies      
The bottom line is that if you care about your users and their experience you'll make a native app for them. Everything else is a tradeoff. There are definitely many application areas where the web is effectively their "native" home so that is all you need but I thought this article did a good job of laying out when that isn't good enough.
5
saurik 4 hours ago 2 replies      
> At Front for example, people using the desktop app spend on average 34% more time on the app than those using the web version.

...maybe I'm crazy, but the primary metric I would evaluate a tool designed to increase business productivity is not how much time I spend using it, and in fact would normally be summarized as the exact opposite of this metric...

6
incompatible 18 minutes ago 0 replies      
I'd say it surely depends on what the "app" is supposed to do. I like to run Firefox natively rather than as a web app. On the other hand, I'll be thrilled when the local tax office converts their Windows/MacOS software to a web app so that I can use it on Linux.
7
BinaryIdiot 3 hours ago 1 reply      
I love it when companies make software that I use on my mobile phone and it syncs with a desktop application as well. As a web developer I appreciate web applications but the experience is almost always better in a native application (at least I've found very few exceptions).

So I'm happy companies like this still provide desktop applications. It would be great if it were more of a trend but I'm not sure that it is yet. If there were better frameworks to use that would allow code reuse across all platforms it would certainly make this easier.

8
akurilin 4 hours ago 3 replies      
I've been using slack just fine in the browser, haven't had much of a need for a desktop version of it.

I can just make a separate chrome instance for it and move it into its own xmonad workspace and switch to it much more conveniently than through alt-tab.

9
i_am_so_hungry 4 hours ago 0 replies      
I have just rediscovered the fantastic FreePascal/Lazarus, and have been thinking up desktop app projects to build. Cross-compile to Windows, Linux and OSX with native UI.
10
thoman23 2 hours ago 3 replies      
I'd like to hear a little more about how this "recent technology" that she linked to helps me deploy web apps to the desktop. The http://nwjs.io/ site certainly doesn't have much info on how it can help for this use case.
11
codezero 2 hours ago 0 replies      
I've been using Front since February and I love it. The desktop app was one of the key reasons I went with it because it is really fast compared to web interfaces of other apps I tried out.
12
DrScump 3 hours ago 1 reply      
"Your users dont need an authorization to" ... what?
13
nsgi 3 hours ago 1 reply      
Installable webapps cover at least the first two points.
14
unimpressive 4 hours ago 2 replies      
Bad article title. Doesn't even attempt to show a statistical trend that they're 'coming back' (I would argue, with the existence of app stores and the like, that they never really left). Does talk about the author's experience of offering a desktop app, which is very interesting.
Amazon API Gateway Build and Run Scalable Application Backends amazon.com
371 points by strzalek  12 hours ago   156 comments top 35
1
fosk 9 hours ago 4 replies      
This is very interesting, and I am surprised it didn't happen a long time ago. The Lambda function integration opens up lots of new ideas when building API backends ready to be consumed by clients apps like, for example, a client-side Javascript application.

On the other side it seems like other extra functionality is limited and very AWS-oriented. If you are looking for an open source gateway that can be expanded with extra functionality, and potentially sit on top of the AWS -> Lambda integration, take a look at https://github.com/Mashape/kong (I am a core maintainer too, so feel free to ask me any questions). Kong is accepting plugins from the community (http://getkong.org/plugins), which can be used to replace or extend functionality beyond any other gateway, including AWS Gateway.

2
andybak 11 hours ago 7 replies      
I've not tried very hard but I'm not sure I get it.

I've got an API already running. What does this buy me?

Caching? I can see some benefit there if it's read-heavy.

Auth and access control? Feels like that's part of my app code but maybe there's a benefit I'm missing

A lot of the other benefits also feel like it would be hard to cleanly separate them from my app code.

What's the elevator pitch and who's the target market?

3
jamiesonbecker 10 hours ago 2 replies      
AMAZON DEPRECATES EC2

November 3, 2017, SEATTLE

At the AWS Re:Invent in Las Vegas today, Amazon Web Services announced the deprecation of Elastic Compute Cloud as it shifts toward lighter-weight, more horizontally scalable services. Amazon announced that it was giving customers the opportunity to migrate toward what it claims are lower cost "containers" and "Lambda processes".

"We believe that the cloud has shifted and customers are demanding more flexibility," tweeted Jeff Barr, AWS Spokesperson. "Customers don't want to deal with the complexities of security and scaling on-instance environments, and are willing to sacrifice controls and cost management in order to take advantage of the great scale we offer in the largest public cloud."

Barr went on to add that since their acqui-hire of Heroku last year, AWS has decided that the future of the cloud was in Platform as a Service (PaaS) and is now turning its focus to user-centric SSH key and user management services like https://Userify.com.

Amazon (AMZN) stock was up $0.02 on the latest announcements.

4
vlad 7 hours ago 2 replies      
I'm working on ApiEditor.com, here's a screenshot:

http://i.imgur.com/wSEKeVb.png

I originally built it for the Swagger Spec 15 months ago, as the first visual Swagger Editor. Let me know if you guys are interested in using something like this.

Notice it has the code side-by-side. Also, it scrolls to and highlights the section of the spec you're editing. Notice the dropdowns are aware of custom models (e.g. "User", "Order") in the documents and suggest them to make it easy to use.

5
pea 7 hours ago 0 replies      
This looks really interesting -- I think the abstraction of the server away from a bunch of development cases is gonna happen pretty quickly.

We're hacking on something really similar to this at https://stackhut.com, where we are building a platform for Microservices which are powered by Docker, can be run wherever through HTTP/JsonRPC, and are really simple to build/deploy. Think "Microservices as a service"... ;)

To give you an example, here is a task which converts PDFs to images: http://stackhut.com/#/services/pdf-tools, and here is the code powering it https://github.com/StackHut/pdf-tools which we `stackhut deploy`d.

We're about to open up the platform and would love to hear what, if you had a magic wand, you'd wish upon the product.

6
cdnsteve 11 hours ago 2 replies      
There goes the rest of my workday :D

I was looking for something like this. Lambda functions are amazing but restricted, because they weren't easily consumable externally. This is the key.

7
jstoiko 7 hours ago 2 replies      
I am surprised that Amazon did not add support for more API description formats like RAML or API Blueprint. It is such a key feature. If I wanted to use this service in front of existing APIs, even only one API, I would not want to go through the work of having to redefine all my endpoints through a web form!

Shameless plug: after working on several API projects, I have been researching ways to not have to "code" over and over again what goes into creating endpoints; it became so repetitive. Lately, I turned to RAML (YAML for REST) and, with 4 other developers, we created an open-source project called Ramses. It creates a fully functional API from a RAML file. It is a bit opinionated, but having to "just" edit a YAML file when building a new API simplified my life. As a bonus, I also get documentation and a JavaScript client generated from the same file.

EDIT: forgot the url https://github.com/brandicted/ramses and url of a quick start tutorial: https://realpython.com/blog/python/create-a-rest-api-in-minu...

8
ecopoesis 12 hours ago 4 replies      
This looks incredibly slick. Speaking as someone who is implementing all the ceremony (security, logging, etc) around a new API right now I would use this in a heartbeat.

Of course, in a couple years, assuming success, the AWS lockin will suck. But given the odds of success I think I'd take the chance.

9
avilay 1 hour ago 0 replies      
I had built a command line tool for a similar purpose to generate REST APIs running on the MEAN stack. It creates "user" and "account" resources by default with user and admin authn/authz built in. It then deploys to heroku - creating a mongodb instance and a web dyno. Putting this out here in case anybody finds it useful.

https://bitbucket.org/avilay/api-labs/

10
estefan 8 hours ago 4 replies      
Please can people give examples of what they're using Lambda for? Everything I've seen has been really basic (like image scaling), but most things I think of require a database.
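
For a concrete (if hypothetical) example of a Lambda function that does need a database, here is a sketch that persists incoming events to DynamoDB via boto3; the table name and event fields are assumptions, not from the article.

    # Hypothetical Lambda handler that writes incoming events to DynamoDB.
    import boto3

    table = boto3.resource("dynamodb").Table("events")   # assumed table name

    def handler(event, context):
        # `event` carries whatever payload the trigger (e.g. API Gateway) maps in
        table.put_item(Item={
            "id": event["id"],
            "payload": event.get("payload", ""),
        })
        return {"status": "ok"}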
11
tootie 8 hours ago 4 replies      
For heavy users of AWS services (not just EC2, but fancy SaaS/PaaS stuff) do you ever regret being locked in to a hosting provider? Does it restrict your ability to develop locally? Have you been bitten by problems that you can't resolve because you don't own the product? Or do you pretty much just love it?
12
traek 11 hours ago 1 reply      
Google has a similar offering for apps running on App Engine, called Cloud Endpoints[1].

[1] https://cloud.google.com/endpoints/

13
mpdehaan2 11 hours ago 3 replies      
I get the web UI for understanding it, but this is often not how people want to work...

What tools are there to allow me to keep my code/API layouts in source control when uploading to this?

I'm sure they exist somewhere, so mostly curious about pointers. (A sample node project using this would probably go a long way)

14
brightball 9 hours ago 0 replies      
One of the other benefits of using Cloudfront based endpoints is that your app servers behind it can avoid the TCP handshakes that add some latency. Amazon did an interesting presentation at re:Invent on the performance improvement from using Cloudfront ahead of dynamic requests that was eye opening.
15
joeyspn 11 hours ago 1 reply      
Seems like a great product to quickly get started with an mBaaS in a powerful cloud like AWS. The concept looks really similar to StrongLoop's LoopBack [0] with a big difference: vendor lock-in. I like the openness that StrongLoop is bringing on this front... IMO the best solution is one that allows you to move your containerised API from one cloud to another.

That being said, having this as an option in AWS is pretty cool and potentially time-saving. I'll probably give it a shot soon.

[0] https://strongloop.com/node-js/api-platform/

16
clay_to_n 6 hours ago 0 replies      
For those interested, the creators of Sails.js (a node-encapsulating framework) have created a sorta similar product called Treeline[1].

[1] https://treeline.io/

17
zenlambda 7 hours ago 1 reply      
I just tried this out with a Lambda function; I was wondering why you can't serve HTML with this (Yes, I know this product is aimed at enterprise ReST API stuff... one can try at least).

Well, it seems that authentication for the client is mandatory. This makes it unsuitable for rendering markup and serving it directly to clients.

Can anyone confirm this can only serve JSON content? I suspect were anonymous requests allowed, I'd see my markup rendered as a JSON string.

18
jonahx 10 hours ago 0 replies      
Would someone knowledgeable mind answering a few questions:

1. What are the differences between this + AWS Lambda and Parse? Is there additional functionality or freedom with this route? Is it cheaper?

2. What kind of savings could one expect hosting an API with this vs a heroku standard dyno?

19
acyacy 11 hours ago 2 replies      
I wish Lambda would allow listening to a socket [it helps binaries communicate with node]. This would move our team to use this without any further doubt.
20
jakozaur 12 hours ago 2 replies      
Isn't it yet another case of AWS doing a cheap replacement of an existing company: https://apigee.com

I don't have experience with their product, but on the surface they look similar.

21
serverholic 9 hours ago 1 reply      
How do you develop for these kinds of services? It seems like you'd need to setup a whole development cluster instead of developing locally.
22
agentgt 7 hours ago 1 reply      
I can't help but notice that this looks more like an enterprise integration tool (think mulesoft) than API management (think apogee.. or I think that is what they do).

Speaking somewhat from experience (webmethods, caml, spring-integration and various other enterprise integration tools) they always want you to use "their" DSL which is often not even stored on the developers filesystem (ie webmethods... you submit the code to the "broker" or "rules engine" or "router"... lots of words for thing that does the work). Which leads to very awkward development.

Consequently I wonder if they will have git integration, because writing code, even snippets, in a web form, no matter how nice, gets old fast.

23
jwatte 3 hours ago 0 replies      
Service as a Service - we've reached Serviceception!
24
culo 10 hours ago 1 reply      
Smart move for AWS, but they are not innovating anything here, just following. Late.

Companies like Apigee/Mashape/3scale/Mulesoft have been doing cloud API management in various forms since 2008. Even Microsoft Azure has had an API management offering for two years.

Nowadays all those API gateway features are commodities and it doesn't make sense to pay for them anymore. Indeed, open source projects such as KONG [1] are getting tremendous traction. The same thing happened in search with all the cloud solutions, and then ElasticSearch came out and it was game over.

[1] https://github.com/mashape/kong

25
machbio 3 hours ago 0 replies      
I was trying this API gateway out; unfortunately there is no way to delete the API created.
26
dangrossman 12 hours ago 2 replies      
API is an acronym, and the product is called "Amazon API Gateway". This submission title is bugging me more than it should. Sorry for the meta-comment.

Edit: The submission title has been changed since this comment was written.

27
intrasight 9 hours ago 2 replies      
They say "If you already utilize OAuth tokens or any other authorization mechanism, you can easily setup API Gateway not to require signed API calls and simply forward the token headers to your backend for verification." It would be nice if AWS would stand up an authentication service that could handle OAuth. Or do they already have such a thing?
28
anoncoder 6 hours ago 0 replies      
Do some math. It's expensive. 100 r/s for one month is about $900. Plus your bandwidth and EC2 charges (unless you're using Lambda). For simple Lambda functions, you can get 100 r/s on a micro for $9.
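
The rough arithmetic behind that figure, assuming the launch price of roughly $3.50 per million API Gateway calls (treat the exact rate as an assumption and check current pricing):

    # Back-of-the-envelope check of the "$900/month at 100 r/s" claim.
    requests_per_month = 100 * 60 * 60 * 24 * 30     # about 259 million calls
    cost = requests_per_month / 1e6 * 3.50           # assumed $3.50 per million
    print(round(cost))                               # roughly 907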
29
adamkittelson 10 hours ago 0 replies      
It'd be cool if they'd use this to wrap their own XML-only APIs to provide JSON wrappers.
30
ramon 7 hours ago 0 replies      
31
dougcorrea 11 hours ago 1 reply      
32
kordless 11 hours ago 0 replies      
Given the current market direction with containerization and decentralization, I think using something that is vendor specific is probably a bad idea.
33
kpennell 7 hours ago 1 reply      
For someone who doesn't understand this that well, is this similar to firebase?
34
fiatjaf 5 hours ago 0 replies      
How do you test these things?
35
graycat 10 hours ago 3 replies      
Okay, at Google I found that IAM abbreviates Amazon's Identity and Access Management.

So, the OP has an undefined three-letter acronym.

Suspicions confirmed: The OP is an example of poor technical writing.

Why do I care? I could be interested in Amazon Web Services (AWS) for my startup. But so far, by a very wide margin, what has been the worst problem in my startup? And the candidates are:

(1) Getting a clear statement of the business idea.

(2) Inventing the crucial, defensible, core technology.

(3) Learning the necessary programming languages.

(4) Installing software.

(5) System backup and restore.

(6) Making sense out of documentation about computer software.

May I have the envelope please (drum roll)? And the winner is,

(6) Making sense out of documentation about computer software.

And the judges have decided that, uniquely in the history of this competition, this selection deserves the Grand Prize, never before given and to be retired on this first award, for the widest margin of victory ever.

No joke, guys: The poorly written documentation, stupid words for really simple ideas, has cost me literally years of my effort. No joke. Years. No exaggeration. Did I mention years?

At this point, "I'm reticent. Yes, I'm reticent." Maybe Amazon has the greatest stuff since the discovery and explanation of the 3 degree K background radiation, supersonic flight, atomic power, the microbe theory of disease, electric power, mechanized agriculture, and sex, but if Amazon can't do a good job, and now I insist on a very good job, documenting their work, which is to be yet another layer of documentation between me and some microprocessors, then, no, no thanks, no way, not a chance, not even zip, zilch, zero.

What might it take me to cut through bad Amazon documentation of AWS, hours, days, weeks, months, years, then from time to time, more hours, days, or weeks, and then as AWS evolves, more such time? What would I need to keep my startup progress on track, 500 hours a day? More? All just to cut through badly written documentation for simple ideas, and worse documentation for complicated ideas?

First test: Any cases of undefined terms or acronyms?

Result: About three such cases, and out'a here. Gone. Done. Kaput. I don't know what AWS has, but I don't need it.

Sorry, AWS, to get me and my startup as a user/customer, you have to up your game by several notches. The first thing I will look at is the quality of the technical writing in your documentation. And, I have some benchmarks for comparison from J. von Neumann, P. Halmos, I. Herstein, W. Rudin, L. Breiman, J. Neveu, D. Knuth.

Amazon, for now, with me, from your example of writing here, you lose. Don't want it. Can't use it. Not even for free. I'm not going to invest my time and effort trying to cut through your poor technical writing. And, the next time I look at anything from AWS, the first undefined term and I'm out'a here again.

Yes, I'm hyper sensitive about bad technical writing -- couldn't be more sensitive if my fingers and arms were burned off. Whenever possible, I will be avoiding any chance of encountering bad technical writing, ever again. Period. Clear enough?

More generally, my view is that bad technical writing is the worst bottleneck for progress in computing. Come on, AWS, up your game.

I don't want to run a server farm and would like you to do that for me, but neither do I want to cut through more bad technical writing -- for that work, my project is already years over budget.

Go 1.5's vendor/ experiment medium.com
37 points by craigkerstiens  6 hours ago   7 comments top 2
1
roskilli 1 hour ago 2 replies      
> Multiple versions of the same library will cause problems ... This is a limitation of ... but probably also a sane one

> Another issue (hat tip to onetruekarl) comes up when 2 vendored dependencies also vendor their own copies of another dependency with the same name ...

So I still don't see why people are content with having a vendoring solution that doesn't support multiple versions of the same package and strict ASCII name conflicts. I understand this is an experiment, but it begs the question of why it doesn't want to at least eventually bite off doing powerful things (it seems to handwave the hard problems with a future spec that might kind of solve it - seemingly something probably orthogonal to this experiment).

I have really enjoyed what NPM did with a simple package.json file and enabling recursive dependencies. It seems like if Go were to add a simple file that maps package name to package URL, with decent support for different URL protocols, you could build the same recursive dependency graph and depend on multiple versions of the same package and also different packages with the same name.

2
cledet 49 minutes ago 1 reply      
Correct me if I'm wrong, but don't remote packages have the same side-effects as this experimental vendoring?
How We Deploy Python Code nylas.com
136 points by spang  5 hours ago   83 comments top 24
1
viraptor 30 minutes ago 0 replies      
> curl https://artifacts.nylas.net/sync-engine-3k48dls.deb -o $temp ; dpkg -i $temp

It's really not hard to deploy a package repository. Either a "proper" one with a tool like `reprepro`, or a stripped one which is basically just .deb files in one directory. There's really no need for curl+dpkg. And a proper repository gives you dependency handling for free.

2
morgante 47 minutes ago 1 reply      
Their reasons for dismissing Docker are rather shallow, considering that it's pretty much the perfect solution to this problem.

Their first reason (not wanting to upgrade a kernel) is terrible considering that they'll eventually be upgrading it anyways.

Their second is slightly better, but it's really not that hard. There are plenty of hosted services for storing Docker images, not to mention that "there's a Dockerfile for that."

Their final reason (not wanting to learn and convert to a new infrastructure paradigm) is the most legitimate, but ultimately misguided. Moving to Docker doesn't have to be an all-or-nothing affair. You don't have to do random shuffling of containers and automated shipping of new images; there are certainly benefits of going wholesale Docker, but it's by no means required. At the simplest level, you can just treat the Docker container as an app and run it as you normally would, with all your normal systems. (i.e. replace "python example.py" with "docker run example")

3
Cieplak 5 hours ago 3 replies      
Highly recommend FPM for creating packages (deb, rpm, osx .pkg, tar) from gems, python modules, and pears.

https://github.com/jordansissel/fpm

4
svieira 51 minutes ago 0 replies      
Back when I was doing Python deployments (~2009-2013) I was:

* Downloading any new dependencies to a cached folder on the server (this was before wheels had really taken off)
* Running pip install -r requirements.txt from that cached folder into a new virtual environment for that deployment (`/opt/company/app-name/YYYY-MM-DD-HH-MM-SS`)
* Switching a symlink (`/some/path/app-name`) to point at the latest virtual env.
* Running a graceful restart of Apache.

Fast, zero downtime deployments, multiple times a day, and if anything failed, the build simply didn't go out and I'd try again after fixing the issue. Rollbacks were also very easy (just switch the symlink back and restart Apache again).

These days the things I'd definitely change would be:

* Use a local PyPi rather than a per-server cache
* Use wheels wherever possible to avoid re-compilation on the servers.

Things I would consider:

* Packaging (deb / fat-package / docker) to avoid having any extra work done over per-machine + easy promotions from one environment to the next.
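
A minimal sketch of that symlink-switch flow as a Fabric 1.x task; the host, paths, and restart command are assumptions, not the poster's actual setup.

    # Hypothetical Fabric 1.x deploy task: fresh virtualenv per release,
    # flip a symlink, then gracefully reload Apache.
    from datetime import datetime
    from fabric.api import env, run, sudo

    env.hosts = ["app1.example.com"]    # assumed host

    def deploy():
        release = "/opt/company/app-name/" + datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
        run("virtualenv %s" % release)
        run("%s/bin/pip install --no-index --find-links=/srv/pip-cache "
            "-r /srv/app/requirements.txt" % release)
        run("ln -sfn %s /some/path/app-name" % release)  # switch the live symlink
        sudo("apachectl graceful")                       # zero-downtime reload

Rolling back is then just pointing the symlink at the previous release directory and reloading again.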

5
doki_pen 2 hours ago 0 replies      
We do something similar at embedly, except instead of dh-virtualenv we have our own homegrown solution. I wish I knew about dh-virtualenv before we created it.

Basically, what it comes down to is a build script that builds a deb with the virtualenv of your project versioned properly (build number, git tag), along with any other files that need to be installed (think init scripts and some about file describing the build). It also should do things like create users for daemons. We also use it to enforce consistent package structure.

We use devpi to host our python libraries (as opposed to applications), reprepro to host our deb packages, standard python tools to build the virtualenv and fpm to package it all up into a deb.

All in all, the bash build script is 177 LoC and is driven by a standard build script we include in every application's repository defining variables, and optionally overriding build steps (if you've used portage...).

The most important thing is that you have a standard way to create python libraries and applications to reduce friction on starting new projects and getting them into production quickly.

6
tschellenbach 4 hours ago 1 reply      
Yes, someone should build the one way to ship your app. No reason for everybody to be inventing this stuff over and over again.

Deploys are harder if you have a large codebase to ship. rsync works really well in those cases. It requires a bit of extra infrastructure, but is super fast.

7
velocitypsycho 51 minutes ago 1 reply      
For installing using .deb files, how are db migrations handled? Our deployment system handles running django migrations by deploying to a new folder/virtualenv, running the migrations, then switching over symlinks.

I vaguely remember .deb files having install scripts, is that what one would use?

8
remh 4 hours ago 0 replies      
We fixed that issue at Datadog by using Chef Omnibus:

https://www.datadoghq.com/blog/new-datadog-agent-omnibus-tic...

It's more complicated than the proposed solution by nylas but ultimately it gives you full control of the whole environment and ensure that you won't hit ANY dependency issue when shipping your code to weird systems.

9
nZac 1 hour ago 1 reply      
We just commit our dependencies into our project repository in wheel format and install into a virtualenv on prod from that directory, eliminating PyPI. Though I don't know many others that do this. Do you?

Bitbucket and GitHub are reliable enough for how often we deploy that we aren't all that worried about downtime from those services. We could also pull from a dev's machine should the situation be that dire.

We have looked into Docker but that tool has a lot more growing to do before "I" would feel comfortable putting it into production. I would rather ship a packaged VM than Docker at this point; there are too many gotchas that we don't have time to figure out.
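
For anyone curious, a hedged sketch of the two halves of that workflow, driven through pip's module interface (the directory and file names are assumptions):

    # Hypothetical helper for the "vendored wheels" workflow described above.
    import subprocess
    import sys

    def build_wheels():
        # On a dev machine: build wheels for everything in requirements.txt.
        subprocess.check_call([sys.executable, "-m", "pip", "wheel",
                               "-r", "requirements.txt", "-w", "wheels/"])

    def install_offline():
        # On prod: install from the committed wheels/ directory, never touching PyPI.
        subprocess.check_call([sys.executable, "-m", "pip", "install",
                               "--no-index", "--find-links", "wheels/",
                               "-r", "requirements.txt"])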

10
kbar13 5 hours ago 1 reply      
http://pythonwheels.com/ solves the problem of building c extensions on installation.
11
compostor42 59 minutes ago 1 reply      
Great article. I had never heard of dh-virtualenv but will be looking into it.

How has your experience with Ansible been so far? I have dabbled with it but haven't taken the plunge yet. Curious how it has been working out for you all.

12
sophacles 4 hours ago 0 replies      
We use a devpi server, and just push the new package version, including wheels built for our server environment, for distribution.

On the app end we just build a new virtualenv, and launch. If something fails, we switch back to the old virtualenv. This is managed by a simple fabric script.

13
sandGorgon 5 hours ago 3 replies      
The fact that we had a weird combination of Python and libraries took us towards Docker. And we have never looked back.

For someone trying out building python deployment packages using deb, rpm, etc. I really recommend Docker.

14
avilay 2 hours ago 1 reply      
Here is the process I use for smallish services -

1. Create a python package using setup.py
2. Upload the resulting .tar.gz file to a central location
3. Download to prod nodes and run pip3 install <packagename>.tar.gz

Rolling back is pretty simple - pip3 uninstall the current version and re-install the old version.

Any gotchas with this process?
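
For reference, a minimal setup.py of the kind this flow assumes; the package name, version, and dependencies are placeholders.

    # Hypothetical minimal setup.py; build the sdist with `python setup.py sdist`.
    from setuptools import setup, find_packages

    setup(
        name="myservice",                    # placeholder name
        version="1.2.3",
        packages=find_packages(),
        install_requires=["requests>=2.0"],  # example dependency
        entry_points={
            "console_scripts": ["myservice = myservice.main:run"],
        },
    )

One gotcha worth noting: pip installing the .tar.gz will still reach out to PyPI to resolve install_requires unless those dependencies are mirrored or vendored alongside it.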

15
rfeather 2 hours ago 0 replies      
I've had decent results using a combination of bamboo, maven, conda, and pip. Granted, most of our ecosystem is Java. Tagging a python package along as a maven artifact probably isn't the most natural thing to do otherwise.
16
StavrosK 2 hours ago 3 replies      
Unfortunately, this method seems like it would only work for libraries, or things that can easily be packaged as libraries. It wouldn't work that well for a web application, for example, especially since the typical Django application usually involves multiple services, different settings per machine, etc.
17
webo 2 hours ago 1 reply      
> Building with dh-virtualenv simply creates a debian package that includes a virtualenv, along with any dependencies listed in the requirements.txt file.

So how is this solving the first issue? If PyPI or the Git server is down, this is exactly like the git & pip option.

18
BuckRogers 2 hours ago 2 replies      
Seems this method wouldn't work as well if you have external clients you deploy for. I'd use Docker instead of doing this, just to be in a better position for an internal or external client deployment.
19
ah- 5 hours ago 1 reply      
conda works pretty well.
20
daryltucker 5 hours ago 0 replies      
I see your issue of complexity. Glad I haven't ever reached the point where some good git hooks no longer work.
21
jacques_chester 1 hour ago 0 replies      
Here's how I deploy python code:

 cf push some-python-app
So far it's worked pretty well.

Works for Ruby, Java, Node, PHP and Go as well.

22
stefantalpalaru 3 hours ago 1 reply      
> The state of the art seems to be run git pull and pray

No, the state of the art where I'm handling deployment is "run 'git push' to a test repo where a post-update hook runs a series of tests and if those tests pass it pushes to the production repo where a similar hook does any required additional operation".
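
A rough sketch of what such a hook can look like, written in Python here; the checkout path, test command, and production remote are assumptions.

    #!/usr/bin/env python
    # Hypothetical post-update hook on the test repo: check out the pushed code,
    # run the test suite, and only push on to production if the tests pass.
    import subprocess
    import sys

    WORK_TREE = "/srv/app-test-checkout"            # assumed checkout location
    PROD_REMOTE = "ssh://deploy@prod/srv/app.git"   # assumed production repo

    def sh(*cmd):
        subprocess.check_call(list(cmd))

    if __name__ == "__main__":
        sh("git", "--work-tree", WORK_TREE, "checkout", "-f", "master")
        try:
            sh("python", "-m", "pytest", WORK_TREE)  # or whatever test runner applies
        except subprocess.CalledProcessError:
            sys.exit("tests failed; not pushing to production")
        sh("git", "push", PROD_REMOTE, "master")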

23
lifeisstillgood 4 hours ago 0 replies      
Weirdly I am re-starting an old project doing this venv/dpkg approach (http://pyholodeck.mikadosoftware.com). The fact that it's still a painful problem means I am not wasting my time :-)
24
hobarrera 4 hours ago 2 replies      
> The state of the art seems to be run git pull and pray

Looks like these guys never heard of things like CI.

Iowa Makes a Bold Admission: We Need Fewer Roads citylab.com
198 points by atomatica  11 hours ago   119 comments top 14
1
mholt 10 hours ago 15 replies      
I'm from Iowa. There are a handful of population centers, and a sprinkling of homes and small communities between miles and miles and miles of farmland. The thing is, most people don't travel between the small communities - most driving takes people to or from town. If they're not going to town, they're going to visit neighbors or their fields, in which case gravel roads work great. Gravel roads work better than deteriorated pavement and have much lower maintenance costs.

I think "the entire system is unneeded" is a bit of a stretch, but I agree that, outside of cities, most routes don't need to be paved - you can safely travel 50 mph on a flat, straight gravel road. Of course the main arteries - Hwy 52, Hwy 20, I-80, and many others need to stay maintained. But there are so many small roads that, although quaint and a pleasure to drive, are probably unnecessary from a utilitarian/practical point of view.

2
w1ntermute 9 hours ago 0 replies      
Charles Marohn of Strong Towns (http://www.strongtowns.org/), who is quoted in the article, did a great podcast interview a while back on "how the post-World War II approach to town and city planning has led to debt problems and wasteful infrastructure investments": http://www.econtalk.org/archives/2014/05/charles_marohn.html
3
cjslep 8 hours ago 1 reply      
"The [Iowa] primary highway system makes up over 9,000 miles (14,000 km), a mere 8 percent of the U.S. state of Iowa's public road system." [0]

So while laudable, it would be very nice if North Carolina followed suit with its ~79,000 miles of maintained roads (the largest of any state) [1]. But I doubt that would happen; my friend at NCDOT says the culture emphasizes building new roads (or the ones that get wiped out by hurricanes out on the Outer Banks), and changing intersections in a manner that borders on the whimsical.

We like to build roads in challenging places, it seems [2].

[0] https://en.wikipedia.org/wiki/Iowa_Primary_Highway_System

[1] https://en.wikipedia.org/wiki/North_Carolina_Highway_System

[2] https://en.wikipedia.org/wiki/North_Carolina_Highway_12

4
kylec 9 hours ago 1 reply      
Per capita driving may have peaked, but as long as the capita is still growing there will still be more and more cars on the road.
5
dataker 1 hour ago 0 replies      
I briefly studied in South Dakota and Iowa without a car and it was a living nightmare.

Relying on friends and "taxis", I had to go through negative temperatures to get a simple can of soda.

After that, I could never complain about BART.

6
darkstar999 8 hours ago 2 replies      
So how do you let roads "deteriorate and go away"? Wouldn't there be huge unsafe potholes in the transition?

What kind of roads would they abandon? I didn't click through to all the references, but this article doesn't give any solutions.

7
dredmorbius 3 hours ago 0 replies      
Related: I need to confirm the trend held, but as of a year or two ago, US FAA RITA data showed peak aviation fuel in 2000. Total departures and passenger miles have been higher since, but due to smaller and more fully loaded aircraft.

By 2010- 2012 or so, actual fuel use was ~50% of year 2000 forecast estimates.

8
jsonne 9 hours ago 0 replies      
The article is referring to Iowa not Kansas.
9
raldi 8 hours ago 1 reply      
The article has a map showing which states have already hit peak traffic; does anyone know of a per-municipality or per-county list?

I'm really curious about whether this has happened in San Francisco.

10
mark-r 4 hours ago 2 replies      
I've always thought that total vehicle miles are capped by the availability of gas. Since fracking has expanded that supply, at least in the short term, I'd expect those mileage charts to start upticking again.
11
closetnerd 6 hours ago 1 reply      
This may make sense in Iowa but it makes no sense in California. Gravel roads would slow the effective max speed down to a crawl, which would further exacerbate traffic. If anything we need higher driving speeds.
12
programminggeek 10 hours ago 1 reply      
At one point in time an extensive road system is a competitive advantage. At another, it makes less sense.

The same thing happened with Railroads during their heyday. I remember seeing an old railroad map with stops at all these small towns in Nebraska. Now, railroads are almost entirely commercial with very few passenger stops in small towns.

It makes sense that at some point you just don't have the need for so many roads. If more people move to urban or even suburban city centers, things like public transportation, ride sharing, Uber, and even self-driving vehicles start to make a lot of sense and cut down a lot on driving volume and the need for roads.

13
ashmud 7 hours ago 0 replies      
One thing I learned, whether accurate or not, from the original SimCity is road maintenance is expensive. I almost invariably ended up peaking city size as the roads entered a constant state of disrepair.
14
AcerbicZero 8 hours ago 3 replies      
A bit old, but still relevant - http://archive.gao.gov/f0302/109884.pdf

I'm no expert on the topic, but it seems to me that if heavily loaded trucks are causing a disproportionate amount of damage they should be taxed at a rate which allows for proper maintenance of those roads.

A formalization in Coq of the Haskell pipes library github.com
43 points by lelf  8 hours ago   6 comments top 3
1
tikhonj 2 hours ago 1 reply      
I liked the little summary at the end:

> So in all it took 1.5 years to learn Coq well enough and to find the right abstraction, and 2 weeks to do the actual work.

That matches what other people have told me about Coq: the learning curve can be brutal, but you can be surprisingly productive once you get over it. (To be fair, this sort of problem also feels like a really good fit for Coq; proving theorems in a more developed sub-field of mathematics is much harder.)

A pretty cool project all around!

2
thinkpad20 30 minutes ago 0 replies      
Just glancing over this (super cool, John!), but what's going on here:

 Notation "f <~ g" := (f ~> g) (at level 70).
Seems likely to cause confusion... what's the reasoning/explanation behind this? I might imagine a law such as Notation "f <~ g" := (g ~> f), but...?

It's been too long since I wrote some Coq! Gotta get back into it :)

3
davidrusu 3 hours ago 0 replies      
This is why I love the Haskell community! Awesome work!
Elm for the Front End bendyworks.com
116 points by nkurz  8 hours ago   16 comments top 5
1
mrspeaker 6 hours ago 3 replies      
Great article! I'm a fan of Elm - but I can't say I'm a fan of writing HTML like this: https://github.com/twopoint718/elmchat/blob/7ee097b937117cc6...

And that's pretty much the most simple component you could create. The author is determined to reach their 10,000 lines of elm: that'd just be like, 10 reasonably-complex components! ;) But that's just the HTML part of the pie... the actual code seems quite beautiful.

2
munro 6 hours ago 1 reply      
> Extensible Records

I envy this feature, and wish it baked into Haskell! Idris also did this right, and I think OCaml and SML too.

3
damoncali 3 hours ago 3 replies      
Am I the only one who thinks elm is an email client? I must be getting old.
4
mightybyte 4 hours ago 1 reply      
Elm is doing some really cool things, but I personally prefer Haskell and a proper FRP library like reflex [1] compiled to javascript with GHCJS. It is newer, so it doesn't have the flashiness and polish that Elm has in some places, but I feel like it's built on a more solid foundation.

[1] https://www.youtube.com/watch?v=mYvkcskJbc4

5
RehnoLindeque 5 hours ago 0 replies      
It's nice to see elm-html-shorthand in the wild. Please let me know what needs improving.
Show HN: Arcade Play retro games streamed inside a browser arcadeup.io
13 points by yaboyhud  5 hours ago   3 comments top 2
1
alakin 1 minute ago 0 replies      
What is the expected latency? Would this work with an FPS? Looks like it's using Broadway.js?
2
DrScump 4 hours ago 1 reply      
How does one get to anything besides Warcraft?
Will Our Understanding of Math Deteriorate Over Time? computationalcomplexity.org
117 points by yummyfajitas  12 hours ago   51 comments top 11
1
jacobolus 8 hours ago 3 replies      
In mathematics and theoretical computer science, we read research papers primarily to find research questions to work on, or find techniques we can use to prove new theorems.

This is why figuring out an elegant, concise, and powerful set of mathematical models which apply to multiple domains, and then devoting effort to simplifying, organizing, and explaining those ideas in an accessible way is so important.

Incentives for researchers are mostly to push and prod at the boundaries of a field, but in my opinion mathematical ideas are only of marginal value in themselves; more important is the way they help us understand and interact with the physical universe, and for that building communities, developing effective languages and notations, codifying our understanding, and making it accessible both to newcomers and to outsiders is the most important task for a field, and perhaps for our society generally.

Just like with software projects or companies, the most success comes from helping a range of other people solve their problems and extend their abilities, not from making technically beautiful art projects for their own sake (not that there's anything inherently wrong with those).

Perhaps more generally, while theorem proving has overwhelmingly dominated pure mathematics and related fields for the past 80-100 years, and has been an important tool since Euclid, theorem proving is only one way of approaching the world, and in my opinion is a mere tool, not an end in itself. Just like simulation is a tool, or drawing pictures is a tool, or statistical analysis is a tool.

I like this bit from Feynman: https://www.youtube.com/watch?v=YaUlqXRPMmY

2
tokenadult 7 hours ago 3 replies      
This is a very good and thought-provoking essay for a short blog post, and I have already shared it in a Facebook community heavily populated by professional mathematicians (where the moderator, with a Ph. D. in math from Berkeley, has given it a thumbs up). Thanks for sharing.

I really like the overall point of the post that mathematics once known can be forgotten or neglected, and mathematics written up for mathematics journals can be difficult to understand. Professor John Stillwell writes, in the preface to his book Numbers and Geometry (New York: Springer-Verlag, 1998):

"What should every aspiring mathematician know? The answer for most of the 20th century has been: calculus. . . . Mathematics today is . . . much more than calculus; and the calculus now taught is, sadly, much less than it used to be. Little by little, calculus has been deprived of the algebra, geometry, and logic it needs to sustain it, until many institutions have had to put it on high-tech life-support systems. A subject struggling to survive is hardly a good introduction to the vigor of real mathematics.

". . . . In the current situation, we need to revive not only calculus, but also algebra, geometry, and the whole idea that mathematics is a rigorous, cumulative discipline in which each mathematician stands on the shoulders of giants.

"The best way to teach real mathematics, I believe, is to start deeper down, with the elementary ideas of number and space. Everyone concedes that these are fundamental, but they have been scandalously neglected, perhaps in the naive belief that anyone learning calculus has outgrown them. In fact, arithmetic, algebra, and geometry can never be outgrown, and the most rewarding path to higher mathematics sustains their development alongside the 'advanced' branches such as calculus. Also, by maintaining ties between these disciplines, it is possible to present a more unified view of mathematics, yet at the same time to include more spice and variety."

Stillwell demonstrates what he means about the interconnectedness and depth of "elementary" topics in the rest of his book, which is a delight to read and full of thought-provoking problems.

http://www.amazon.com/gp/product/0387982892/

3
lmm 7 hours ago 1 reply      
But the proofs survive because they are proofs; if they don't communicate the proof of the result then they have failed and should not be accepted by journals. At the extreme end, machine-checkable proofs are in standard, documented formats; an alien reading them in ten thousand years should still be able to understand what's going on, at least if they understand the notation and the axioms.
4
pc2g4d 7 hours ago 2 replies      
Integrate concise and effective explanations into the relevant Wikipedia articles and you at least give future generations a good head start on understanding these things.
5
stared 5 hours ago 0 replies      
Ludwik Fleck's "Genesis and Development of a Scientific Fact" goes very much along the lines of "[science] only exists in a living community of mathematicians that spreads understanding and breathes life into ideas both old and new." (written pre-WW2; it served as an inspiration for Kuhn). Its most eye-opening example is the history of [the concept/knowledge/science/... of] syphilis, from ancient to modern times.

PDF (of print from 1979): http://www.evolocus.com/Textbooks/Fleck1979.pdf

6
spiritplumber 5 hours ago 1 reply      
Most people use math less and less (even your average cashier will have issues if the machine isn't working).

Will Myron Aub give us the feeling of power back?

http://downlode.org/Etext/power.html by Isaac Asimov on just this topic.

7
stephencanon 8 hours ago 2 replies      
Of course. Most modern mathematicians aren't fluent with half the material in (the ~100 year-old text) Whittaker and Watson "A Course of Modern Analysis". This was standard material even 60 years ago. You can get a PhD in mathematics today without once seeing an elliptic function, because computers are good enough at numerically solving the problems they were once used to solve symbolically.
8
agentgt 5 hours ago 1 reply      
I read the SA article the blog refers to and I couldn't decide if that particular colossal theory on symmetry was just an isolated incident or whether "deterioration" is really happening to many disciplines/theories of math. It is certainly an obvious fact that things become popular and then eventually forgotten and then sometimes brought back. There are also different levels of understanding: breadth vs depth. I recall at one point there was concern about the opposite, that is, too much depth and not enough breadth (the above theory is a depth problem, as many mathematicians know of the theory, just not the exact proof).

I still think the unpublished problem, i.e. "publication bias", is a bigger issue, which I suppose is somewhat in a similar vein. Supposedly Google was working on that.

9
ripter 5 hours ago 1 reply      
To make it readable, paste this in the console:

 document.querySelector('.date-outer').style.backgroundColor = 'white';

10
sklogic 5 hours ago 0 replies      
Exactly, this unfortunate "saturate and move on" can be observed in pretty much any area.
11
hayd 3 hours ago 0 replies      
Ah, I remember a lecture course where we classified all groups of order up to 1000.

Joy.... "Like having your brains smashed out by a slice of lemon wrapped around a large gold brick."

A tale of indie RPG development irontowerstudio.com
16 points by danso  5 hours ago   discuss
The Promise of Relational Programming [video] youtube.com
13 points by tephra  3 hours ago   1 comment top
1
ctdean 18 minutes ago 0 replies      
A great talk by William Byrd
OpenSSL Security Advisory openssl.org
269 points by runesoerensen  15 hours ago   131 comments top 27
1
pilif 15 hours ago 6 replies      
> OpenSSL will attempt to find an alternative certificate chain if the first attempt to build such a chain fails

I think the latest big thing I've learned in my career is that trying to fix broken input data silently is always bad. Fixing stuff silently isn't helpful for the callers, it's very difficult to do and it produces additional code which also isn't running in the normal case, so it's much more likely to be broken.

Additionally, your callers will start to depend on your behaviour and suddenly you have what amounts to two separate implementations in your code.

I learned that while blowing up (though don't call exit if you're a library, please) is initially annoying for callers, in the end it will be better for you and your callers, because the code will be testable, correct, and more secure (because there's less of it).
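
A minimal sketch of the two styles in Python (the "key=value" input format is made up for illustration):

  # Hypothetical parser for "key=value" config lines, shown in two styles.

  def parse_silently(lines):
      # "Helpful" style: silently repair or skip broken input. Callers never
      # learn their data is bad, and soon start depending on the repair.
      config = {}
      for line in lines:
          if "=" not in line:
              continue                      # swallow the problem
          key, _, value = line.partition("=")
          config[key.strip()] = value.strip()
      return config

  def parse_strictly(lines):
      # Fail-fast style: raise (not exit!) so the caller sees the bad input
      # immediately; less code, easier to test, nothing to silently rely on.
      config = {}
      for number, line in enumerate(lines, start=1):
          key, sep, value = line.partition("=")
          if not sep or not key.strip():
              raise ValueError(f"line {number}: expected 'key=value', got {line!r}")
          config[key.strip()] = value.strip()
      return config

  print(parse_silently(["timeout=30", "oops"]))   # {'timeout': '30'} -- problem hidden
  # parse_strictly(["timeout=30", "oops"]) raises ValueError for line 2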

2
judemelancon 11 hours ago 3 replies      
I am hardly astonished that a 319-line function that opens by declaring x, xtmp, xtmp2, chain_ss, bad_chain, param, depth, i, ok, num, j, retry, cb, and sktmp variables had a bug.

Before someone provides the standard "submit a patch" retort, I'll note that the variable naming is in full compliance with https://www.openssl.org/about/codingstyle.txt even if the function length isn't. A quick sample of other files suggests the function length matches actual practice elsewhere, too.

3
jgrahamc 14 hours ago 3 replies      
Interesting part is that the bug was introduced in the latest versions and has been fixed by the person who inserted it :-)

Bug added: https://github.com/openssl/openssl/commit/da084a5ec6cebd67ae...

Bug removed: https://github.com/openssl/openssl/commit/2aacec8f4a5ba1b365...

Although that's just the committer: https://twitter.com/agl__/status/619129579580469248

4
acqq 14 hours ago 1 reply      
We probably don't need to worry this time:

https://ma.ttias.be/openssl-cve-2015-1793-man-middle-attack/

"The vulnerability appears to exist only in OpenSSL releases that happened in June 2015 and later. That leaves a lot of Linux distributions relatively safe, since they haven't gotten an OpenSSL update in a while.

Red Hat, CentOS and Ubuntu appear to be entirely unaffected by this vulnerability, since they had no OpenSSL updates since June 2015."

5
mykhal 14 hours ago 1 reply      
from test/verify_extra_test.c:

  Test for CVE-2015-1793 (Alternate Chains Certificate Forgery)

  Chain is as follows:

  rootCA (self-signed)
    |
  interCA
    |
  subinterCA        subinterCA (self-signed)
    |                    |
  leaf ------------------
    |
  bad

  rootCA, interCA, subinterCA, subinterCA (ss) all have CA=TRUE
  leaf and bad have CA=FALSE
  subinterCA and subinterCA (ss) have the same subject name and keys

  interCA (but not rootCA) and subinterCA (ss) are in the trusted store (roots.pem)
  leaf and subinterCA are in the untrusted list (untrusted.pem)
  bad is the certificate being verified (bad.pem)

  Versions vulnerable to CVE-2015-1793 will fail to detect that leaf has CA=FALSE,
  and will therefore incorrectly verify bad
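
A rough sketch in Python of the check the vulnerable path skipped while building the alternative chain (certificates reduced to plain dicts; a toy model, not OpenSSL's actual code or data structures):

  def issuers_must_be_cas(chain):
      # chain[0] is the end-entity certificate, the rest are its issuers.
      for issuer in chain[1:]:
          if not issuer["ca"]:
              raise ValueError(issuer["name"] + " is not a CA but signed a cert")
      return True

  bad  = {"name": "bad",  "ca": False}
  leaf = {"name": "leaf", "ca": False}
  subinterCA_ss = {"name": "subinterCA (self-signed)", "ca": True}

  # The alternative chain the bug accepted: bad <- leaf <- subinterCA (ss).
  try:
      issuers_must_be_cas([bad, leaf, subinterCA_ss])
  except ValueError as err:
      print("rejected:", err)   # rejected: leaf is not a CA but signed a cert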

6
d_theorist 14 hours ago 1 reply      
So, updating server side OpenSSL will not close this vulnerability (for servers offering https-protected websites)? Is that correct?

If I understand the advisory correctly then this means that somebody could set up a webserver with a specially-crafted certificate and pretend to be somebody else, assuming that the client is running a vulnerable version of OpenSSL.

Is that right? I wish they would write these advisories in a slightly more helpful fashion.

7
coolowencool 13 hours ago 1 reply      
"No Red Hat products are affected by this flaw (CVE-2015-1793), so no actions need to be performed to fix or mitigate this issue in any way." https://access.redhat.com/solutions/1523323
8
jarofgreen 15 hours ago 0 replies      
In case it's slow:

OpenSSL Security Advisory [9 Jul 2015]

=======================================

Alternative chains certificate forgery (CVE-2015-1793)

======================================================

Severity: High

During certificate verification, OpenSSL (starting from version 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate.

This issue will impact any application that verifies certificates including SSL/TLS/DTLS clients and SSL/TLS/DTLS servers using client authentication.

This issue affects OpenSSL versions 1.0.2c, 1.0.2b, 1.0.1n and 1.0.1o.

OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d
OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p

This issue was reported to OpenSSL on 24th June 2015 by Adam Langley/David Benjamin (Google/BoringSSL). The fix was developed by the BoringSSL project.

Note

====

As per our previous announcements and our Release Strategy (https://www.openssl.org/about/releasestrat.html), support for OpenSSL versions 1.0.0 and 0.9.8 will cease on 31st December 2015. No security updates for these releases will be provided after that date. Users of these releases are advised to upgrade.

References

==========

URL for this Security Advisory: https://www.openssl.org/news/secadv_20150709.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/about/secpolicy.html

9
aninteger 13 hours ago 5 replies      
Why has the adoption of alternative SSL software been so low? We have libressl, boringssl, something from Amazon? Very few Linux distributions seem interested in shipping alternative SSL software.
10
Mojah 14 hours ago 1 reply      
11
0x0 14 hours ago 1 reply      
Debian stable/oldstable is not affected. Only in unstable: https://security-tracker.debian.org/tracker/CVE-2015-1793
13
benmmurphy 9 hours ago 1 reply      
An interesting coincidence is that I noticed what I thought was (and maybe is) a similar bug in the Elixir hex module on the same day that this bug report was submitted to OpenSSL. If you look at the hex partial chain method (https://github.com/hexpm/hex/blob/master/lib/hex/api.ex#L59-...) you can see it goes through all the certificates the other party supplied, starting from the first one, and tries to find one that is signed by a certificate in the trust store. It then explicitly returns it as the trusted_ca, which effectively means the certificate has the CA bit set on it.

In order to exploit the attack in hex you need to find a CA that will directly issue certificates off of a certificate in a trust store. Apparently, this is not the recommended policy for CAs, so I made this tweet (https://twitter.com/benmmurphy/status/613733887211139072):

'does anyone know a CA that signs directly from their root certs or has intermediate certs in trust stores? asking for a friend.'

And apparently there are some CAs that will do this. In the case of hex I think the chain you need to create looks something like this:

  RANDOM CERT SIGNED BY ISSUER NOT IN TRUST STORE
    |
    V
  VALID_CERT_SIGNED_BY_CERT_IN_TRUST_STORE (effectively treated as CA bit set)
    |
    V
  EVIL CERTIFICATE SIGNED BY PREVIOUS CERT

14
mykhal 15 hours ago 0 replies      
Changes between 1.0.2c and 1.0.2d [9 Jul 2015]

  *) Alternate chains certificate forgery

     During certificate verification, OpenSSL will attempt to find an
     alternative certificate chain if the first attempt to build such a chain
     fails. An error in the implementation of this logic can mean that an
     attacker could cause certain checks on untrusted certificates to be
     bypassed, such as the CA flag, enabling them to use a valid leaf
     certificate to act as a CA and "issue" an invalid certificate.

     This issue was reported to OpenSSL by Adam Langley/David Benjamin
     (Google/BoringSSL).
     [Matt Caswell]

15
kfreds 10 hours ago 2 replies      
The latest version (2.3.7) of the official OpenVPN client is vulnerable, as is Tunnelblick for OSX. No fix has been published yet. The OpenVPN clients for Android and iOS are not affected.

See https://mullvad.net/en/v2/news for more details.

16
runesoerensen 14 hours ago 1 reply      
It's worth noting that only releases since June 2015 are affected
17
0x0 13 hours ago 0 replies      
This sounds really familiar to the old IE bug that didn't check the CA flag - http://www.thoughtcrime.org/ie-ssl-chain.txt
18
mrmondo 7 hours ago 0 replies      
Good work on finding and fixing the bug to those involved. I don't think this is said often enough.
19
mugsie 14 hours ago 0 replies      
Seems to be OK for anyone using non beta versions of Ubuntu as well:

http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20...

20
eatonphil 14 hours ago 2 replies      
I've got a few sites using OpenSSL certs; do I need to do anything?
21
ericfrederich 13 hours ago 0 replies      
At first I thought this was the result of that Hacking Team dump, but it seems this was reported prior to that.
22
arenaninja 14 hours ago 0 replies      
Well this isn't how I wanted to start my morning
23
ck2 14 hours ago 1 reply      
Nothing I can find in yum for CentOS 6 or 7
24
Sharker 14 hours ago 2 replies      
It's only for certain versions. Current Debian, for example, is not affected. https://security-tracker.debian.org/tracker/CVE-2015-1793
25
api 8 hours ago 0 replies      
A lot of these SSL vulnerabilities show that complexity is an inherently bad thing for security. In general, bugs in a system are exponentially, not linearly, proportional to system complexity. With security that means that the addition of a feature, option, or extension to the security layer of a system exponentially decreases its trustworthiness.
26
tomjen3 14 hours ago 2 replies      
How is it that we still depend on something so broken?
27
sneak 14 hours ago 1 reply      
agl++;
The Bitcoin Megatransaction: 25 Seconds to Verify ozlabs.org
24 points by RustyRussell  4 hours ago   discuss
Neuroscientists Demonstrate Operation of a Network of Brains nicolelislab.net
10 points by paublyrne  5 hours ago   discuss
Ask HN: Why don't transistors in microchips fail?
162 points by franciscop  17 hours ago   88 comments top 20
1
joelaaronseely 10 hours ago 5 replies      
There is another mechanism called "Single Event Upset" (SEU) or "Single Event Effects" (SEE) (basically synonymous). This is due to cosmic rays. On the surface of the earth, the effect is mostly abated by the atmosphere - except for neutrons. As you go higher in the atmosphere (say on a mountaintop, or an airplane, or go into space) it becomes worse because of other charged particles that are no longer attenuated by the atmosphere.

The typical issue at sea level is from neutrons hitting silicon atoms. If a neutron hits a nucleus in some area of the microprocessor circuitry, the nucleus recoils, basically causing an ionizing trail several microns in length. Given that transistors are now measured in tens of nanometers, the ionizing path can cross many nodes in the circuit and create some sort of state change. Best case, it happens in a single bit of a memory that has error correction and you never notice it. Worst case, it causes latchup (power-to-ground short) in your processor and your CPU overheats and fries. Generally you would just notice it as a sudden error that causes the system to lock up; you'd reboot, it would come back up and be fine, leaving you with a vague thought of, "That was weird".
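
For a feel of what a flipped bit in error-corrected memory means, here is a small illustrative Python sketch using a plain parity bit (real ECC uses Hamming/SECDED codes, which can also correct the flipped bit rather than just detect it):

  def parity(byte):
      # 1 if the byte has an odd number of set bits, else 0
      return bin(byte).count("1") % 2

  stored = 0b01101100
  stored_parity = parity(stored)

  upset = stored ^ (1 << 5)          # a particle strike flips bit 5

  if parity(upset) != stored_parity:
      print("single-event upset detected (parity mismatch)")
  # Parity only detects an odd number of flips; SECDED ECC in server RAM can
  # correct single-bit errors and detect double-bit errors.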

2
gibrown 9 hours ago 1 reply      
As a former hardware engineer who worked on automated test equipment that tested ASICs (and did ASIC dev), there are a lot of different methods used to avoid this.

As others mentioned, most of these problems are caught when testing the chips. Most of the transistors on a chip are actually used for caching or RAM, and in those cases the chips have built in methods for disabling the portions of memory that are non-functional. I don't recall any instances of CPUs/firmware doing this dynamically, but I wouldn't be surprised if there are. A lot of chips have some self diagnostics.

Most ASICs also have extra transistors sprinkled around so they can bypass and fix errors in the manufacturing process. Making chips is like printing money where some percentage of your money is defective. It pays to try and fix them after printing.

Also, as someone who has ordered lots of parts there are many cases where you put a part into production and then find an abnormally high failure rate. I once did a few months of high temperature and vibration testing on our boards to try and discover these sorts of issues, and then you spend a bunch of time convincing the manufacturer that their parts are not meeting spec.

Fun times... thanks for the trip down memory lane.

3
kabdib 10 hours ago 2 replies      
Oh, they do fail.

The last time I worked with some hardware folks speccing a system-on-a-chip, they were modeling device lifetime versus clock speed.

"Hey software guys, if we reduce the clock rate by ten percent we get another three years out of the chip." Or somesuch, due to electromigration and other things, largely made worse by heat.

Since it was a gaming console, we wound up at some kind of compromise that involved guessing what the Competition would also be doing with their clock rate.
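
A very rough sketch of that kind of trade-off model in Python; the Black's-equation form and every constant below are assumptions for illustration, not the console team's actual numbers:

  import math

  # Toy electromigration lifetime model: MTTF ~ (1 / J**n) * exp(Ea / (k*T)).
  # Higher clock -> higher current density and temperature -> shorter life.
  K_BOLTZMANN = 8.617e-5      # eV/K
  ACTIVATION_ENERGY = 0.7     # eV, plausible order of magnitude, assumed

  def relative_lifetime(clock_ghz, base_clock_ghz=3.2, base_temp_k=350.0):
      # Assume current density scales with clock and the die runs ~30 K hotter
      # per extra GHz -- purely illustrative numbers.
      j_ratio = clock_ghz / base_clock_ghz
      temp_k = base_temp_k + 30.0 * (clock_ghz - base_clock_ghz)
      accel = (j_ratio ** 2) * math.exp(
          ACTIVATION_ENERGY / K_BOLTZMANN * (1 / base_temp_k - 1 / temp_k))
      return 1.0 / accel      # lifetime relative to the base clock

  print(relative_lifetime(3.2))   # 1.0 by construction
  print(relative_lifetime(2.9))   # > 1.0: lower clock, noticeably longer life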

4
ajross 11 hours ago 3 replies      
Yes, they can fail. Lots and lots of them fail immediately due to manufacturing defects. And over time, electromigration (where dopant atoms get kicked out of position by interaction with electron momentum) will slowly degrade their performance. And sometimes they fail due to specific events like an overheat or electrostatic discharge.

But the failure rate after initial burn-in is phenomenally low. They're solid state devices, after all, and the only moving parts are electrons.

5
zokier 10 hours ago 1 reply      
Slightly related thing is RAM random bit errors. There was an interesting article published a few years ago where some guy registered domains that differed by one bit from some popular domains and recorded the traffic that hit them. Kinda scary to think what else is wrong in your RAM then... Too bad that ECC is still restricted to servers and serious workstations.

http://dinaburg.org/bitsquatting.html
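
The trick in that article is easy to reproduce; a quick sketch in Python of enumerating the one-bit-off neighbours of a domain (illustrative, keeping only candidates that are still syntactically valid hostnames):

  import string

  def bitsquat_candidates(domain):
      # Yield domains differing from `domain` (lowercase) by one flipped bit.
      valid = set(string.ascii_lowercase + string.digits + "-.")
      for i, ch in enumerate(domain):
          for bit in range(8):
              flipped = chr(ord(ch) ^ (1 << bit)).lower()
              if flipped != ch and flipped in valid:
                  yield domain[:i] + flipped + domain[i + 1:]

  # A RAM bit flip can turn "example.com" into e.g. "ezample.com" in memory,
  # and the request then goes to whoever registered that neighbour.
  print(sorted(set(bitsquat_candidates("example.com")))[:5])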

6
Nomentatus 8 hours ago 0 replies      
Nearly all chips experienced transistor failures, rendering them useless, back in the day. Intel is the monster it is because they were the guys who first found out how to sorta "temper" chips to vastly reduce that failure rate (most failures were gross enough to be instant, back then, and Intel started with memory chips.) Because their heat treatment left no visible mark, Intel didn't patent it, but kept it as a trade secret giving them an incredible economic advantage, for many years. They all but swept the field. I've no doubt misremembered some details.
7
nickpsecurity 10 hours ago 1 reply      
They're extremely simple, have no moving parts, and the materials/processes of semiconductor fabs are optimized to ensure they get made right. The whole chip will often fail if transistors are fabbed incorrectly, and the rest end up in errata sheets where you work around them. Environmental effects are reduced with Silicon-on-Insulator (SOI), rad-hard methods, immunity-aware programming, and so on. Architectures such as Tandem's NonStop assumed there'd be plenty of failures and just ran things in lockstep with redundant components.

So, simplicity and hard work by fab designers is 90+% of it. There's whole fields and processes dedicated to the rest.

8
RogerL 10 hours ago 0 replies      
Others have answered why; here is the 'what would happen'. Heat your CPU up by pointing a hair dryer at it (you may want to treat this as a thought experiment, as you could destroy your computer). At some point it begins to fail because transistors are pushed past their operating conditions. Another way to push it to failure is to overclock. The results are ... variable. Sometimes you won't notice the problems, computations will just come out wrong. Sometimes the computer will blue screen or spontaneously reboot. And so on. It just depends where the failure occurs, and whether the currently running software depends on that part of the chip. If a transistor responsible for instruction dispatch fails, it's probably instant death. If a transistor responsible for helping compute the least significant bit of a sin() computation fails, well, you may never notice it.
9
jsudhams 1 hour ago 0 replies      
So would that mean we need to ensure that systems in critical areas (not nuclear or the like, but banks and other transaction-critical systems) get a mandatory tech refresh at 4-5 years? Especially when 7nm production starts.
10
wsxcde 3 hours ago 0 replies      
Others have already mentioned one failure mechanism that causes transistor degradation over time: electromigration. Other important aging mechanisms are negative-bias temperature instability (NBTI) and hot carrier injection (HCI). I've seen papers claim the dual of NBTI - PBTI - is now an issue in the newest process nodes.

This seems to be a nice overview of aging effects: http://spectrum.ieee.org/semiconductors/processors/transisto....

11
intrasight 8 hours ago 2 replies      
When I was studying EE, a professor said on this subject that about 20% of the transistors in a chip are used for self-diagnostics. Manufacturing failures are a given. The diagnostics tell the company what has failed, and they segment the chips into different product/price classes based upon what works and what doesn't. After being deployed into a product, I assume that chips would follow a standard Bathtub Curve: https://en.wikipedia.org/wiki/Bathtub_curve

As geometries fall, the effects of "wear" at the atomic level will go up.
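
A tiny sketch of that bathtub shape as a hazard rate; the functional form and parameters are arbitrary, picked only to show the three regimes:

  # h(t) = decreasing infant-mortality term + constant random term + wear-out term
  def hazard(t, infant=0.05, random=0.002, wear=1e-15, wear_exp=3):
      return infant * (t + 1) ** -0.5 + random + wear * t ** wear_exp

  for hours in (10, 1000, 20000, 60000):
      print(hours, round(hazard(hours), 5))
  # Early on the first term dominates and failures drop quickly, mid-life is
  # roughly flat, and far out the wear-out term takes over: the bathtub.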

12
mchannon 12 hours ago 1 reply      
Generally, yes, a failing transistor can be a fatal problem. This relates to "chip yield" on a wafer full of chips.

Faults don't always manifest themselves as a binary pass/fail result; as chip temperatures increase, transistors that have faults will "misfire" more often. As long as the temperature at which misfires start is high enough, these lower-grade chips can be sold as lower-end processors that never reach those temperatures in practice.

Am not aware of any redundancy units in current microprocessor offerings but it would not surprise me; Intel did something of this nature with their 80386 line but it was more of a labeling thing ("16 BIT S/W ONLY").

Solid state drives, on the other hand, are built around this protection; when a block fails after so many read/write cycles, the logic "TRIM"s that portion of the virtual disk, diminishing its capacity but keeping the rest of the device going.
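
A sketch of that retirement logic with the flash reduced to an in-memory toy (nothing here reflects a real FTL's data structures; note the comment above uses "TRIM" loosely, since strictly TRIM tells the drive which logical ranges are unused, while bad blocks are retired by the drive's own firmware):

  class ToyFlash:
      # Toy flash device: blocks that fail are retired, shrinking capacity.
      def __init__(self, num_blocks):
          self.free = set(range(num_blocks))   # blocks available for writes
          self.retired = set()                 # blocks that wore out
          self.mapping = {}                    # logical page -> physical block

      def write(self, page, failing_blocks=()):
          for block in sorted(self.free):
              if block in failing_blocks:      # write/erase failed: retire it
                  self.free.discard(block)
                  self.retired.add(block)
                  continue
              self.free.discard(block)
              self.mapping[page] = block
              return block
          raise IOError("out of usable blocks")

  flash = ToyFlash(4)
  print(flash.write("page0", failing_blocks={0}))   # block 0 retires; lands on 1
  print(flash.retired, flash.mapping)               # {0} {'page0': 1}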

13
tzs 9 hours ago 1 reply      
Speaking of the effects of component failure on chips, a couple years ago researchers demonstrated self-healing chips [1]. Large parts of the chips could be destroyed and the remaining components would reconfigure themselves to find an alternative way to accomplish their task.

[1] http://www.caltech.edu/news/creating-indestructible-self-hea...

14
greenNote 8 hours ago 0 replies      
As stated, two big variables are clock rate and feature size, which both affect mean time between failures (MTBF). Being more conservative increases this metric. I know from working in a fab that there are many electrical inspection steps along the process, so failures are caught during the manufacturing process (reducing the chance that you see them in the final product). Once the chip is packaged, and assuming that it is operated in a nominal environment, failures are not that common.
15
Gravityloss 5 hours ago 0 replies      
They do fail. Linus Torvalds talked about this in 2007: http://yarchive.net/comp/linux/cpu_reliability.html
16
spiritplumber 9 hours ago 1 reply      
This is why we usually slightly underclock stuff that has to live on boats.
17
msandford 3 hours ago 0 replies      
> Considering that a Quad-core + GPU Core i7 Haswell has 1.4e9 transistors inside, even given a really small probability of one of them failing, wouldn't this be catastrophic?

Yes, generally speaking it would be. Depending on where it is inside the chip.

> Wouldn't a single transistor failing mean the whole chip stops working? Or are there protections built-in so only performance is lost over time?

Not necessarily. It might be somewhere that never or rarely gets used, in which case the failure won't make the chip stop working. It might mean that you start seeing wrong values on a particular cache line, or that your branch prediction gets worse (if it's in the branch predictor) or that your floating point math doesn't work quite right anymore.

But most of the failures are either manufacturing errors meaning that the chip NEVER works right, or they're "infant mortality" meaning that the chip dies very soon after it's packaged up and tested. So if you test long enough, you can prevent this kind of problem from making it to customers.

Once the chip is verified to work at all, and it makes it through the infant mortality period, the lifetime is actually quite good. There are a few reasons:

1. there are no moving parts so traditional fatigue doesn't play a role

2. all "parts" (transisotrs) are encased in multiple layers of silicon dioxide so that you can lay the metal layers down

3. the whole silicon die is encased yet again in another package which protects the die from the atmosphere

4. even if it was exposed to the atmosphere, and the raw silicon oxidized, it would make silicon dioxide, which is a protective insulator

5. there is a degradation curve for the transistors, but the manufacturers generally don't push up against the limits too hard because it's fairly easy and cheap to underclock and the customer doesn't really know what they're missing

6. since most people don't stress their computers too egregiously this merely slows down the slide down the degradation curve as it's largely governed by temperature, and temperature is generated by a) higher voltage required for higher clock speed and b) more utilization of the CPU

Once you add all these up you're left with a system that's very, very robust. The failure rates are serious but only measured over decades. If you tried to keep a thousand modern CPUs running very hot for decades you'd be sorely disappointed in the failure rate. But for the few years that people use a computer and the relative low load that they place on them (as personal computers) they never have a big enough sample space to see failures. Hard drives and RAM fail far sooner, at least until SSDs start to mature.

18
MichaelCrawford 3 hours ago 0 replies      
They do.

That's why our boxen have power-on self tests.

19
rhino369 10 hours ago 0 replies      
Extremely good R&D done by semiconductor companies. It's frankly amazing how good they are.
20
Gibbon1 9 hours ago 1 reply      
Transistors don't fail for the same reason the 70-year-old wires in my house don't fail. The electrons flowing through the transistors don't disturb the molecular structure of the doped silicon.
How Doom got ported to NeXTSTEP (2013) wilshipley.com
7 points by fezz  1 hour ago   discuss
You are a kitten in a catnip forest bloodrizer.ru
9 points by madars  4 hours ago   discuss
VisPy: Python library for interactive scientific visualization on GPU vispy.org
12 points by skadamat  5 hours ago   1 comment top
1
kelsolaar 1 hour ago 0 replies      
I use Vispy for a toy project (https://github.com/colour-science/colour-analysis) and I'm quite happy with it so far even though the project is still young. The development team is nice and doing a great job. The API offers different levels of abstraction, I'm mainly using the high level layer which is enough for my current needs and doesn't involve a single glsl line.
Market Complexity Broke the NYSE Before Saving It bloombergview.com
10 points by dsri  5 hours ago   discuss
Show HN: Bubblin Next-generation books bubbl.in
5 points by IpxqwidxG  3 hours ago   discuss
Tetris Guideline tetrisconcept.net
45 points by xvirk  9 hours ago   5 comments top 3
1
partisan 1 hour ago 1 reply      
> The tetrominoes spawn horizontally and with their flat side pointed down.

> Terms used in the user manual: "Tetriminos" not "tetrominoes" or "tetrads" or "pieces", letter names not "square" or "stick", etc.

The spec doesn't meet the spec.

2
jrcii 2 hours ago 0 replies      
Interestingly, the #1 hit for Tetris on Google is freetetris.org which is endorsed with the Tetris trademark, yet it does not follow these guidelines. For example the pieces do not start "with their flat side pointed down".
3
vzaliva 2 hours ago 1 reply      
When I was coding a Tetris clone as my standard exercise to learn a new programming language I found this site to be very useful:

http://tetris.wikia.com/wiki/Tetris_Wiki
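
If you're using Tetris as that kind of exercise, the core is pleasantly small; a minimal Python sketch of piece data and clockwise rotation (the Guideline's SRS additionally fixes rotation centers and wall kicks, which this ignores):

  # Tetrominoes as small 0/1 matrices; rotate clockwise by reversing the rows
  # and transposing.
  PIECES = {
      "T": [[1, 1, 1],
            [0, 1, 0]],
      "L": [[1, 0],
            [1, 0],
            [1, 1]],
  }

  def rotate_cw(shape):
      return [list(row) for row in zip(*shape[::-1])]

  for row in rotate_cw(PIECES["T"]):
      print(row)   # the T now stands vertically with its stem pointing left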

European Parliament rejects EU plan to axe Freedom of Panorama amateurphotographer.co.uk
104 points by nsns  10 hours ago   35 comments top 9
1
phaemon 10 hours ago 2 replies      
In case you're trying to fight your way through the double negatives, this means they're keeping Freedom of Panorama (which is good).
2
alttt 8 hours ago 2 replies      
This is one of the most uninformed titles I've read. There were no EU (Commission or Council) plans to kill Freedom of Panorama. The liberals inside the European Parliament proposed it in one committee; the liberals, conservatives, right wing and socialists voted in favour of it (greens and left against) in the committee and it passed. Now in the plenary, after sufficient backlash, the socialists and some liberals have changed their opinion, and thus an amendment introduced in a committee was struck down in the plenary. All of that happened in the Parliament, not in a mysterious other EU.
3
eCa 9 hours ago 0 replies      
> "We must now continue to fight for an extension of important copyright exceptions such as this one to all member states."

Good!

> Cavada wanted all European nations to adopt laws that may require permission from a building's architect before an image is published commercially.

I would like a list of architects supporting such a law. They deserve to have all photographs (and mentions) of their buildings removed from the internet.

4
x0054 9 hours ago 2 replies      
You know, you live your life, blissfully unaware of laws as idiotic as this, and then one day you wake up, read the front page of HN, and now you have yet another thing to feel utterly outraged by. Shouldn't I have an absolute right to photograph public spaces?
5
deif 8 hours ago 2 replies      
Unsurprisingly the EU made a sensible ruling. Just like most of the rulings they've made in the past 10 years. I'm not sure why the media and social justice groups got so worked up over it as it was in the minority anyway.

It's called a democracy for a reason - you've got to have fair representation from all angles, not just the most sensible. People are getting mightily agitated every time a strange law is brought to the table recently. I blame Twitter. /s

6
mmanfrin 6 hours ago 0 replies      
I read the title and was very confused, surely there must be some sort of bill or law or something called 'Panorama', because the only thing I can think of by that name is taking a long picture.

And then I read the article, and it truly referenced pictures.

What idiot thought legislating fucking long pictures was a good idea?

7
jfoster 4 hours ago 0 replies      
It seems silly for architects to even want this. They would be undermining themselves! Would the Eiffel Tower be nearly as famous as it is if only approved/licensed photographers were allowed to photograph it? Would I even care who Frank Gehry is if photos of his buildings were not appearing in my Facebook feed and findable on Google, Wikipedia, etcetera? Is not getting your building "out there," as widely as possible, a success metric for an architect?
8
nsns 6 hours ago 1 reply      
For anyone thinking this is stupid, it's actually a pointer to a very important change in the power structure of the world:

"Occupation of the Kasbah in Tunis and of the Syntagma Square in Athens, siege of Westminster in London during the student movement of 2011, encirclement of the parliament in Madrid on September 25, 2012 or in Barcelona on June 15, 2011, riots all around the Chamber of Deputies in Rome on December 14, 2010, attempt on October 15, 2011 in Lisbon to invade the Assembleia da Republica, burning of the Bosnianpresidential residence in February of 2014: the places of institutional power exert a magnetic attraction on revolutionaries. But when the insurgents manage to penetrate parliaments, presidential palaces, and other headquarters of institutions, as in Ukraine, in Libya or in Wisconsin, its only to discover empty places, that is, empty of power, and furnished without any taste. Its not to prevent the 'people' from 'taking power' that they are sofiercely kept from invading such places, but to prevent them from realizing that power no longer resides in the institutions. There are only deserted temples there, decommissioned fortresses, nothing but stage setsreal traps for revolutionaries. The popular impulse to rush onto the stage to find out what is happening in the wings is bound to be disappointed. If they got inside, even the most fervent conspiracy freaks would find nothing arcane there; the truth is that power is simply no longer that theatrical reality to which modernity accustomed us. [...]But what is it that appears on euro banknotes? Not human figures, not emblems of a personal sovereignty, but bridges,aqueducts, archespieces of impersonal architecture, cold as stone. As to the truth about the present nature of power, every European has a printed exemplar of it in their pocket. It can be stated in this way: power now resides in the infrastructures of this world. Contemporary power is of an architectural and impersonal, and not a representative or personal, nature."(Invisible Committee, To Our Friends, Semiotext(e) 2014)

9
Apocryphon 8 hours ago 1 reply      
See, democracy can and does work.
AWS CodePipeline amazon.com
205 points by jeffbarr  13 hours ago   68 comments top 9
1
saosebastiao 11 hours ago 1 reply      
Internally at Amazon, Pipelines (which inspired this service) was a lifesaver. Apollo (which is the inspiration for CodeDeploy) was also helpful, but should probably just be replaced by Docker or OSv at this point.

But if they ever release a tool that is inspired by the Brazil build system, pack up and run for the hills. When it takes a team of devs over two years to get Python to build and run on your servers, you know your frankenstein build system is broken. It could be replaced by shell scripts and still be orders of magnitude better. Nobody deserves the horror of working with that barf sandwich.

2
felipesabino 9 hours ago 1 reply      
I wonder why GitHub specifically and not just Git repos in general? Isn't it weird?

It means they don't even support their own new "Git" product AWS CodeCommit [1]

[1]https://aws.amazon.com/blogs/aws/now-available-aws-codecommi...

3
jtwaleson 12 hours ago 0 replies      
Is there any way to integrate this with ECS? That would be a great feature for me.
4
atmosx 9 hours ago 1 reply      
This is interesting for lone developers, but I'm not sure about the pricing:

Youll pay $1 per active pipeline per month (the first one is available to you at no charge as part of the AWS Free Tier). An active pipeline has at least one code change move through it during the course of a month.

Does this mean that every time you run a session you pay $1, no matter how many stages the session has (pull, compile/build, test (multiple tests), and deploy)?
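
As I read the quoted pricing text, the charge is per pipeline that saw at least one change in the month, not per run or per stage; a little sketch of that reading (my interpretation, not an official AWS statement):

  # pipeline -> number of code changes that moved through it this month
  changes_this_month = {"frontend": 14, "backend": 3, "ops-experiments": 0}

  PRICE_PER_ACTIVE_PIPELINE = 1.00   # USD; the first active pipeline is free tier
  active = [name for name, runs in changes_this_month.items() if runs > 0]
  bill = max(len(active) - 1, 0) * PRICE_PER_ACTIVE_PIPELINE
  print(active, bill)                # ['frontend', 'backend'] 1.0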

5
maikklein 5 hours ago 1 reply      
Could I install Unreal Engine 4 on CodePipeline so that I can build my game remotely?
6
jayonsoftware 11 hours ago 1 reply      
Can we build .NET code ?
7
pragar 12 hours ago 0 replies      
Thanks. I was eagerly waiting for this :)
8
dynjo 12 hours ago 8 replies      
Amazon seriously need to hire some good UI designers. They produce great stuff but it all looks like it was designed by developers in 1980.
9
ebbv 12 hours ago 2 replies      
What's up with the Amazon spam? There are 5 different submissions on the front page right now. They could have all been one. Bad form, AWS team.
GitLab raises 1.5M gitlab.com
171 points by jobvandervoort  13 hours ago   73 comments top 19
1
jbrooksuk 5 hours ago 2 replies      
Firstly, a major congratulations to the gang at GitLab - well deserved!

We'd used GitLab for over a year internally, but as I've mentioned previously, it became a pain to maintain. So we switched to GitHub for our private "important" projects and turned off our GitLab instance (other reasons caused this too mind). Our version was 6.7 or something up until today.

Today we realised we should run GitLab internally again for non-critical repositories - since our networking is a pain to give external access to servers - we can't access it out of the office. I updated us to 7.12 CE and I kind of regret it.

The UI is so complicated now. Whilst there are good features that we like, it's so hard to navigate to where you want to be. I think this is down to the "contextual" sidebar. I really do prefer GitHub's UI for repo administration and usage, which is a shame.

Sure, the colours are nice in GitLab but it's far from functional. My colleagues felt the same way too.

Also (for those at GitLab) your Markdown renderer is still not rendering Markdown correctly in all cases...

Anyway, not to take away from the funding - it's excellent news!

2
general_failure 10 hours ago 5 replies      
A feature I miss in GitLab and GitHub is an issue tracker across multiple repos. For example, our project has 5-10 repos but they are all part of a single release/milestone.

Currently, we have to create milestones in each of the repos and assign issues to those milestones. It's really a hassle. We cross-reference commits a lot in the issues, and this is the reason why we don't create an "empty" repo simply for common issues. Unless there is some way to say something like "Fixes commonissuetracker#43".
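
For what it's worth, the syntax being wished for is easy to parse; a hypothetical sketch in Python of scanning commit messages for cross-repo references (the "Fixes <repo>#<id>" convention here is the one imagined above, not a feature I'm claiming GitLab or GitHub ships):

  import re

  # Hypothetical convention: "Fixes <other-repo>#<issue>" in a commit message
  # closes that issue in the other repository's tracker.
  CROSS_REF = re.compile(r"\b(?:fixes|closes)\s+([\w./-]+)#(\d+)", re.IGNORECASE)

  message = "Refactor release scripts. Fixes commonissuetracker#43"
  for repo, issue in CROSS_REF.findall(message):
      print("would close issue", issue, "in repo", repo)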

Thanks, a very happy gitlab user

3
nodesocket 8 hours ago 1 reply      
Seems like a small amount to raise from a heavyweight VC like Khosla and super angel Ashton Kutcher. I would imagine trying to compete against GitHub and GitHub enterprise would be a capital intensive thing.
4
edwintorok 7 hours ago 2 replies      
When I visit a project page I usually do it for one of these reason:

* learn about what the project is, a short description on what it is, how to install, where to find more documentation

* look at / search the files or clone the repo

* search bugreports or create a new bugreport

Your default project page looks quite similar to gitorious, which looks more like a place to just host your repository and not a place to interact with the project. Bitbucket's default looks way better for example, and github's is quite good too.

My suggestion to make Gitlab fit better into my workflow:

* the default page/tab for the project root should be configurable, either on a per-project or per-user basis: I'd like to have the README as the default, for example; the Activity page interests me less

* there should be a tab for issues on the default page; it's more important than seeing the activity, IMHO

* you've got the clone URL in an easily accessible place, good!

* the Files view is quite similar to Github's (good!), but I can't figure out how to search (either fulltext or filename)

* I don't see a button to create a new issue (I'm not logged in, should I login first? Github has a new issue button that takes you to login)

* how do I search in issues (fulltext?)

* how do I search for project names, or inside projects/issues globally?

* the default project page should somehow highlight, or focus on making easy and obvious, the main ways you'd interact with the project; if all features are shown in equal style it feels somewhat cluttered and overloaded

P.S.: should I open a feature request about these on the gitlab site?

5
Vespasian 5 hours ago 0 replies      
Congratulations on the funding!

I am using a gitlab instance for about 2 years on my personal server and have been very happy with it.

Recently, we (finally!) switched our research group over from (a very, very old version of) Redmine, and you can't imagine my joy when that happened! I think never before in my life has migrating wiki pages and issues felt so good.

Last but not least it is encouraging to see a European software startup thriving and growing like you do. Nothing against the great products from SV but a little geographical competition never hurt nobody. Right? ;)

Keep up the great work. Grüße aus Deutschland / Greetings from Germany

6
BinaryIdiot 9 hours ago 1 reply      
I used GitLab at my last company. It was one of the earlier versions before they went to YCombinator. At the time I wasn't a fan; I ran into bugs and just had odd persistence issues.

But I've got to say GitLab is just incredible to use now. It's really nice and I now use it over BitBucket for my private repositories. I still use GitHub for OpenSource (that's going to be a hard barrier to get through if they really want to) but I'm a big fan.

So congrats on the round! This is technically the second seed round, right? Or does YCombinator not really count as a seed anymore?

7
wldcordeiro 8 hours ago 1 reply      
I've been using GitLab now for a few months and really like it, but I've run into some bugs on gitlab.com that I've reported through multiple avenues with zero success in getting them fixed. The main one is that on some repos, if I create or edit an issue, the server 500-errors on form submit (the submit still occurs; the redirect is broken). It would be beyond nice to see this extra cash go to a more responsive support system.
8
jobvandervoort 13 hours ago 0 replies      
We're very excited with this opportunity. We'll be here if you have any questions.
9
jtwaleson 10 hours ago 1 reply      
Congrats from another Dutch company that expanded to the US! We're using GitLab for all our internal source code at Mendix, and are extremely happy with it.
10
physcab 11 hours ago 3 replies      
This is a naive question, but what's the difference between GitLab and Github?
11
neom 10 hours ago 1 reply      
Big fans of GitLab over here at DigitalOcean! Good work and good luck!
12
the-dude 10 hours ago 2 replies      
But what is the valuation?
13
schandur 8 hours ago 1 reply      
Congratulations to the GitLab team! We use a self-hosted version and are very happy with it.
14
marvel_boy 11 hours ago 1 reply      
Nice! Without doubt GitLab has created a lot of innovation. What are the main new things you will deliver in the future?
15
ausjke 3 hours ago 0 replies      
For some reason I feel Redmine + Gitolite is the best for everything, except for code-review that is.
16
yAnonymous 7 hours ago 1 reply      
Congrats and thanks for the great software!
17
marcelo_lebre 13 hours ago 1 reply      
Nicely done!
18
joshmn 10 hours ago 3 replies      
Paging @sytse; "GitLab CEO here" coming soon... :)

For those who don't get the joke, https://www.google.com/search?num=40&es_sm=119&q="GitLab+CEO...

19
fibo 9 hours ago 1 reply      
I don't like the idea of free-as-in-beer software. GitHub is great but GitLab seems like a cheap clone, targeting people that want to pay less or nothing. I don't think it is ethical to clone ideas; to build a better world we need new ideas.
       cached 10 July 2015 04:02:03 GMT