And there we go, highest voted comment on the article: a strawman about child pornography. Think of the keeeds
I hadn't heard of their new app, Signal. Has anyone tried it? I'm really interested in hearing about anyone's experience using it.
BTW, I ended up installing Telegram ...and it may be mere coincidence, but I started noticing some weird things happening that I've never seen before. I connect to the internet exclusively by tethering to my phone, and while tethered I started seeing warnings in Firefox on my desktop machine, something like "Could not establish secure connection because the server supports a higher version of TLS". My guess is that it was some sort of MITM attack... and I was possibly targeted due to the traffic to Telegram servers.
One other thing regarding Telegram: I really don't like that it reads my contact list and uploads it to their server to check if my contacts have a Telegram account. I've blocked the permission for now.
It's a perfect demonstration of the fundamental insecurity of the web thus far. When an insecure communication mode (HTTP) is the default and perfectly ok most of the time, the browser has no idea when you are supposed to be operating on a secure channel (HTTPS) but have been tricked into downgrading by a man in the middle attack.
I can't prove it but I believe his work is a significant factor behind the shift towards deprecating HTTP in favor of HTTPS all the time. That is the only real solution.
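For context, a toy illustration of the downgrade idea (my sketch, in the spirit of Moxie's sslstrip demonstration, not his actual code): the man in the middle speaks HTTPS to the server but plain HTTP to the victim, rewriting secure links so the browser never knows a secure channel was expected:

    # Toy core of an SSL-stripping MITM: fetch pages over HTTPS upstream,
    # relay them to the victim over plain HTTP with secure links rewritten.
    def strip_https(html: str) -> str:
        return html.replace("https://", "http://")

    print(strip_https('<a href="https://bank.example/login">Log in</a>'))
    # -> <a href="http://bank.example/login">Log in</a>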
Also wanted to share one of the most provocative moxie-isms I've heard in recent years from him, in reference to WL:
"What about the truth has helped you?"
Strong encryption that runs in a browser. Recently completed its first security audit.
Here's the thing that Moxie recognizes, that many other programmers don't (in any domain):
He says he wants to build simple, "frictionless" apps, adopting a Silicon Valley buzzword for "easy to use."
This doesn't really add much that wasn't possible before, and doesn't really solve any technical or PR issue. The PR issue will never be resolved with CPython; the people who don't understand it are free to write multithreaded Java apps. But I think explicitly spinning up pthreads should be reserved for writing systems software. I'm assuming subinterpreters will be allocated as pthreads, since this is meant to "use all your cores".
The ideal solution would be a more implicit approach. Think gevent for pthreads or subinterpreters. That would be a lot more work to figure out; this proposal looks more like a hack. I don't think this type of "improvement" will draw people to Python 3 either.
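For anyone who hasn't seen the "implicit" model being referred to, here is a minimal sketch of the gevent style of concurrency (assuming gevent is installed; the URLs are placeholders), where blocking calls are patched to yield cooperatively instead of the programmer spawning threads explicitly:

    # Sketch of gevent-style implicit concurrency: patch blocking stdlib
    # calls so they cooperate, then spawn lightweight greenlets.
    import gevent
    from gevent import monkey
    monkey.patch_all()  # must run before the blocking modules are used

    import urllib.request

    def fetch(url):
        return urllib.request.urlopen(url).read()

    # Three fetches run concurrently on one thread, no explicit locking.
    jobs = [gevent.spawn(fetch, u) for u in ["http://example.com"] * 3]
    gevent.joinall(jobs)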
Edit: That is, is it typical for GC'd programs to outperform manually memory-managed programs in multi-threaded environments?
I think Google Glass failed last time because Google didn't demonstrate any practical scenario for the product. It's as if Google was telling us, "this is a cool product, but nobody knows what it's for, so use your imagination!" Users get lost when you define such a big scope.
This time Google should definitely learn from HoloLens: show some killer apps on Google Glass and let users get excited about it, not only because it is cool, but also because it is useful.
"Reportedly the new version of Google Glass also sports improved battery life, partly because of the Intel Atom CPU. The specific clock speed is unknown, but this tiny SoC has proven itself by powering most Android Wear devices."
NONE of the current Wear devices run Atom chips. They all run ARM Cortex-A7 chips, except the Moto 360, which runs a Cortex-A8!
I think it is rather cheeky (and dangerous) to accuse companies of doing such horrible stuff, when all we have is a few leaks (without context) to go on.
See for instance: https://github.com/hackedteam/rcs-common/blob/master/lib/rcs... which generates test keystrokes, test users and test programs for chat. The file names of the "planted" evidence are far too obvious, and they generate a hash for it. In short, there is nothing in this code that implies it is used to plant evidence.
Now, I've never tried it myself, but I can't imagine there's much (if any) security on a history.
If this code is ever verified, it seems like it could overturn the verdict in some CP cases.
I expect a native app to be written in Objective-C (OS X/iOS user here), to have very low memory usage, fast startup and offline support. I also expect as much integration with the system as possible: native controls (not that buggy emulation without my favorite emacs-like keybindings) and native behavior.
We're probably missing an important piece of technology: installable web apps. A website opened in a frameless browser window, identifiable as a separate application that could be easily pinned to Launchpad. With all the advantages that a "separate webview" has, but with one important difference: sandboxing. Then I could feel safe when I launch such an application, because I wouldn't need to trust another application with all my files, passwords and system. I've seen a hint of this technology in iOS: a website bookmark can be pinned as an icon, but it's just a bookmark.
I don't think this is true anymore in Windows 8+. From my limited experience in using it, once you install something it disappears into a mass of icons and is never seen again. If you can remember the name you may be able to search for it.
And then there's the integration: both look&feel and functionality can make a huge difference.
...maybe I'm crazy, but the primary metric by which I would evaluate a tool designed to increase business productivity is not how much time I spend using it; in fact, success would normally be summarized as the exact opposite of that metric...
So I'm happy companies like this still provide desktop applications. It would be great if it were more of a trend but I'm not sure that it is yet. If there were better frameworks to use that would allow code reuse across all platforms it would certainly make this easier.
I can just make a separate chrome instance for it and move it into its own xmonad workspace and switch to it much more conveniently than through alt-tab.
On the other side, its extra functionality seems limited and very AWS-oriented. If you are looking for an open source gateway that can be extended with extra functionality, and potentially sit on top of the AWS -> Lambda integration, take a look at https://github.com/Mashape/kong (I am a core maintainer, so feel free to ask me any questions). Kong accepts plugins from the community (http://getkong.org/plugins), which can be used to replace or extend functionality beyond what any other gateway, including the AWS Gateway, offers.
I've got an API already running. What does this buy me?
Caching? I can see some benefit there if it's read-heavy.
Auth and access control? That feels like part of my app code, but maybe there's a benefit I'm missing.
A lot of the other benefits also feel like it would be hard to cleanly separate them from my app code.
What's the elevator pitch and who's the target market?
November 3, 2017, SEATTLE
At AWS re:Invent in Las Vegas today, Amazon Web Services announced the deprecation of Elastic Compute Cloud as it shifts toward lighter-weight, more horizontally scalable services. Amazon announced that it is giving customers the opportunity to migrate toward what it claims are lower-cost "containers" and "Lambda processes".
"We believe that the cloud has shifted and customers are demanding more flexibility," tweeted Jeff Barr, AWS Spokesperson. "Customers don't want to deal with the complexities of security and scaling on-instance environments, and are willing to sacrifice controls and cost management in order to take advantage of the great scale we offer in the largest public cloud."
Barr went on to add that since their acqui-hire of Heroku last year, AWS has decided that the future of the cloud was in Platform as a Service (PaaS) and is now turning its focus to user-centric SSH key and user management services like https://Userify.com.
Amazon (AMZN) stock was up $0.02 on the latest announcements.
I originally built it for the Swagger Spec 15 months ago, as the first visual Swagger Editor. Let me know if you guys are interested in using something like this.
Notice it shows the code side-by-side. It also scrolls to and highlights the section of the spec you're editing. And the dropdowns are aware of custom models (e.g. "User", "Order") in the documents and suggest them, to make it easy to use.
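For readers who haven't written a spec, here's a minimal, hypothetical Swagger 2.0 fragment (rendered as a Python dict; the model and path are made up) showing the kind of custom model ("User") such dropdowns can detect via $ref:

    # Hypothetical minimal Swagger 2.0 fragment; "User" is the sort of
    # custom model an editor can discover and suggest in completions.
    spec = {
        "definitions": {
            "User": {
                "type": "object",
                "properties": {"id": {"type": "integer"}},
            },
        },
        "paths": {
            "/users/{id}": {
                "get": {
                    "responses": {
                        "200": {
                            "description": "A single user",
                            "schema": {"$ref": "#/definitions/User"},
                        },
                    },
                },
            },
        },
    }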
We're hacking on something really similar to this at https://stackhut.com, where we are building a platform for Microservices which are powered by Docker, can be run wherever through HTTP/JsonRPC, and are really simple to build/deploy. Think "Microservices as a service"... ;)
To give you an example, here is a task which converts PDFs to images: http://stackhut.com/#/services/pdf-tools, and here is the code powering it https://github.com/StackHut/pdf-tools which we `stackhut deploy`d.
We're about to open up the platform and would love to hear what, if you had a magic wand, you'd wish upon the product.
I was looking for something like this. Lambda functions are amazing but restricted, because they weren't easily consumable externally. This is the key.
EDIT: forgot the url https://github.com/brandicted/ramses and url of a quick start tutorial: https://realpython.com/blog/python/create-a-rest-api-in-minu...
Of course, in a couple years, assuming success, the AWS lockin will suck. But given the odds of success I think I'd take the chance.
What tools are there to allow me to keep my code/API layouts in source control when uploading to this?
I'm sure they exist somewhere, so mostly curious about pointers. (A sample node project using this would probably go a long way)
That being said, having this as an option in AWS is pretty cool and potentially time-saving. I'll probably give it a shot soon.
Well, it seems that authentication for the client is mandatory. This makes it unsuitable for rendering markup and serving it directly to clients.
Can anyone confirm this can only serve JSON content? I suspect that, were anonymous requests allowed, I'd see my markup rendered as a JSON string.
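To illustrate the concern (a hypothetical sketch, not the gateway's confirmed behavior): a JSON-only endpoint would hand markup back as a quoted JSON string, not as a page the browser can render:

    import json

    markup = "<h1>Hello</h1>"
    print(json.dumps(markup))  # prints "<h1>Hello</h1>" -- a JSON string,
                               # quotes included, not renderable HTML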
1. What are the differences between this + AWS Lambda and Parse? Is there additional functionality or freedom with this route? Is it cheaper?
2. What kind of savings could one expect hosting an API with this vs. a Heroku standard dyno?
I don't have experience with their product, but on the surface they look similar.
Speaking somewhat from experience (webMethods, Camel, spring-integration and various other enterprise integration tools), they always want you to use "their" DSL, which is often not even stored on the developer's filesystem (e.g. webMethods... you submit the code to the "broker" or "rules engine" or "router"; lots of words for the thing that does the work). Which leads to very awkward development.
Consequently, I wonder if they will have git integration, because writing code (even snippets) in a web form, no matter how nice, gets old fast.
Companies like Apigee/Mashape/3scale/MuleSoft have been doing cloud API management in various forms since 2008. Even Microsoft Azure has had an API management offering for two years.
Nowadays all those API gateway features are commodities, and it doesn't make sense to pay for them anymore. Indeed, open source projects such as Kong are getting tremendous traction. The same thing happened in search with all the cloud solutions: then Elasticsearch came out and it was game over.
Edit: The submission title has been changed since this comment was written.
So, the OP has an undefined three-letter acronym.
Suspicions confirmed: The OP is an example of poor technical writing.
Why do I care? I could be interested in Amazon Web Services (AWS) for my startup. But so far, by a very wide margin, what has been the worst problem in my startup? And the candidates are:
(1) Getting a clear statement of the business idea.
(2) Inventing the crucial, defensible, core technology.
(3) Learning the necessary programming languages.
(4) Installing software.
(5) System backup and restore.
(6) Making sense out of documentation about computer software.
May I have the envelope please (drum roll)? And the winner is,
And the judges have decided that uniquely in the history of this competition, this selection deserves the Grand Prize, never before given and to be retired on this first award, for the widest margin of victory ever.
No joke, guys: The poorly written documentation, stupid words for really simple ideas, has cost me literally years of my effort. No joke. Years. No exaggeration. Did I mention years?
At this point, "I'm reticent. Yes, I'm reticent." Maybe Amazon has the greatest stuff since the discovery and explanation of the 3 degree K background radiation, supersonic flight, atomic power, the microbe theory of disease, electric power, mechanized agriculture, and sex, but if Amazon can't do a good job, and now I insist on a very good job, documenting their work, which is to be yet another layer of documentation between me and some microprocessors, then, no, no thanks, no way, not a chance, not even zip, zilch, zero.
What might it take me to cut through bad Amazon documentation of AWS? Hours, days, weeks, months, years, then from time to time, more hours, days, or weeks, and then as AWS evolves, more such time? What would I need to keep my startup progress on track, 500 hours a day? More? All just to cut through badly written documentation for simple ideas, and worse documentation for complicated ideas?
First test: Any cases of undefined terms or acronyms?
Result: About three such cases, and I'm out'a here. Gone. Done. Kaput. I don't know what AWS has, but I don't need it.
Sorry, AWS: to get me and my startup as a user/customer, you have to up your game by several notches. The first thing I will look at is the quality of the technical writing in your documentation. And, I have some benchmarks for comparison from J. von Neumann, P. Halmos, I. Herstein, W. Rudin, L. Breiman, J. Neveu, D. Knuth.
Amazon, for now, with me, from your example of writing here, you lose. Don't want it. Can't use it. Not even for free. I'm not going to invest my time and effort trying to cut through your poor technical writing. And, the next time I look at anything from AWS, the first undefined term and I'm out'a here again.
Yes, I'm hypersensitive about bad technical writing -- couldn't be more sensitive if my fingers and arms were burned off. Whenever possible, I will be avoiding any chance of encountering bad technical writing, ever again. Period. Clear enough?
More generally, my view is that bad technical writing is the worst bottleneck for progress in computing. Come on, AWS, up your game.
I don't want to run a server farm and would like you to do that for me, but neither do I want to cut through more bad technical writing -- for that work, my project is already years over budget.
> Another issue (hat tip to onetruekarl) comes up when 2 vendored dependencies also vendor their own copies of another dependency with the same name ...
So I still don't see why people are content with a vendoring solution that doesn't support multiple versions of the same package and chokes on strict ASCII name conflicts. I understand this is an experiment, but it raises the question of why it doesn't want to at least eventually bite off doing powerful things (it seems to handwave the hard problems away with a future spec that might solve them, seemingly something orthogonal to this experiment).
I have really enjoyed what NPM did with a simple package.json file enabling recursive dependencies. It seems like if Go were to add a simple file that maps package name to package URL, with decent support for different URL protocols, you could build the same recursive dependency graph and depend on multiple versions of the same package, and also on different packages with the same name.
It's really not hard to deploy a package repository. Either a "proper" one with a tool like `reprepro`, or a stripped one which is basically just .deb files in one directory. There's really no need for curl+dpkg. And a proper repository gives you dependency handling for free.
Their first reason (not wanting to upgrade a kernel) is terrible, considering that they'll eventually be upgrading it anyway.
Their second is slightly better, but it's really not that hard. There are plenty of hosted services for storing Docker images, not to mention that "there's a Dockerfile for that."
Their final reason (not wanting to learn and convert to a new infrastructure paradigm) is the most legitimate, but ultimately misguided. Moving to Docker doesn't have to be an all-or-nothing affair. You don't have to do random shuffling of containers and automated shipping of new images; there are certainly benefits to going wholesale Docker, but it's by no means required. At the simplest level, you can just treat the Docker container as an app and run it as you normally would, with all your normal systems (i.e. replace "python example.py" with "docker run example").
* Downloading any new dependencies to a cached folder on the server (this was before wheels had really taken off)
* Running pip install -r requirements.txt from that cached folder into a new virtual environment for that deployment (`/opt/company/app-name/YYYY-MM-DD-HH-MM-SS`)
* Switching a symlink (`/some/path/app-name`) to point at the latest virtual env.
* Running a graceful restart of Apache.
Fast, zero downtime deployments, multiple times a day, and if anything failed, the build simply didn't go out and I'd try again after fixing the issue. Rollbacks were also very easy (just switch the symlink back and restart Apache again).
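For readers wanting to picture it, here is a minimal sketch of that flow in plain Python (the paths follow the comment above; the use of venv and apachectl is my assumption, not necessarily what the poster ran):

    import os
    import subprocess
    import time

    APP_ROOT = "/opt/company/app-name"
    CURRENT = "/some/path/app-name"

    # Build a fresh, timestamped environment for this deployment.
    stamp = time.strftime("%Y-%m-%d-%H-%M-%S")
    env = os.path.join(APP_ROOT, stamp)
    subprocess.check_call(["python", "-m", "venv", env])
    subprocess.check_call([os.path.join(env, "bin", "pip"), "install",
                           "-r", "requirements.txt"])

    # Atomically repoint the "current" symlink, then reload Apache.
    tmp = CURRENT + ".new"
    os.symlink(env, tmp)
    os.replace(tmp, CURRENT)   # atomic swap; the old env stays for rollback
    subprocess.check_call(["apachectl", "graceful"])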
These days the things I'd definitely change would be:
* Use a local PyPI rather than a per-server cache
* Use wheels wherever possible to avoid re-compilation on the servers.
Things I would consider:
* Packaging (deb / fat-package / docker) to avoid having any extra work done per machine, plus easy promotions from one environment to the next.
Basically, what it comes down to is a build script that builds a deb with the virtualenv of your project, versioned properly (build number, git tag), along with any other files that need to be installed (think init scripts and an "about" file describing the build). It should also do things like create users for daemons. We also use it to enforce a consistent package structure.
We use devpi to host our python libraries (as opposed to applications), reprepro to host our deb packages, standard python tools to build the virtualenv and fpm to package it all up into a deb.
All in all, the bash build script is 177 LoC and is driven by a standard build script we include in every application's repository, defining variables and optionally overriding build steps (if you've used Portage...).
The most important thing is that you have a standard way to create python libraries and applications, to reduce friction when starting new projects and getting them into production quickly.
Deploys are harder if you have a large codebase to ship. rsync works really well in those cases. It requires a bit of extra infrastructure, but is super fast.
I vaguely remember .deb files having install scripts; is that what one would use?
It's more complicated than the solution proposed by Nylas, but ultimately it gives you full control of the whole environment and ensures that you won't hit ANY dependency issues when shipping your code to weird systems.
Bitbucket and GitHub are reliable enough for how often we deploy that we aren't all that worried about downtime from those services. We could also pull from a dev's machine should the situation be that dire.
We have looked into Docker, but that tool has a lot more growing to do before "I" would feel comfortable putting it into production. I would rather ship a packaged VM than Docker at this point; there are too many gotchas that we don't have time to figure out.
How has your experience with Ansible been so far? I have dabbled with it but haven't taken the plunge yet. Curious how it has been working out for you all.
On the app end we just build a new virtualenv, and launch. If something fails, we switch back to the old virtualenv. This is managed by a simple fabric script.
For someone trying out building python deployment packages using deb, rpm, etc. I really recommend Docker.
1. Create a python package using setup.py
2. Upload the resulting .tar.gz file to a central location
3. Download to prod nodes and run pip3 install <packagename>.tar.gz
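For step 1, a minimal setup.py of the sort implied (the package name and dependency here are hypothetical):

    # setup.py -- minimal sdist-able package for the flow above
    from setuptools import setup, find_packages

    setup(
        name="myapp",                   # hypothetical name
        version="1.0.0",
        packages=find_packages(),
        install_requires=["requests"],  # hypothetical dependency
    )

Then `python3 setup.py sdist` produces the .tar.gz to upload.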
Rolling back is pretty simple - pip3 uninstall the current version and re-install the old version.
Any gotchas with this process?
So how is this solving the first issue? If PyPI or the Git server is down, this is exactly like the git & pip option.
cf push some-python-app
Works for Ruby, Java, Node, PHP and Go as well.
No, the state of the art where I'm handling deployment is "run 'git push' to a test repo where a post-update hook runs a series of tests and if those tests pass it pushes to the production repo where a similar hook does any required additional operation".
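A minimal sketch of such a post-update hook (written here in Python; the test runner and remote name are assumptions, and checking out a work tree to test against is glossed over):

    #!/usr/bin/env python3
    # hooks/post-update -- run the tests, promote to production on success.
    import subprocess
    import sys

    if subprocess.call(["./run_tests.sh"]) != 0:   # hypothetical test runner
        sys.exit("Tests failed; not pushing to production.")
    subprocess.check_call(["git", "push", "production", "master"])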
Looks like these guys have never heard of things like CI.
I think "the entire system is unneeded" is a bit of a stretch, but I agree that, outside of cities, most routes don't need to be paved - you can safely travel 50 mph on a flat, straight gravel road. Of course the main arteries - Hwy 52, Hwy 20, I-80, and many others need to stay maintained. But there are so many small roads that, although quaint and a pleasure to drive, are probably unnecessary from a utilitarian/practical point of view.
So while laudable, it would be very nice if North Carolina followed suit with its ~79,000 miles of maintained roads (the largest of any state). But I doubt that will happen; my friend at NCDOT says the culture emphasizes building new roads (or rebuilding the ones that get wiped out by hurricanes out on the Outer Banks), and changing intersections in a manner that borders on the whimsical.
We like to build roads in challenging places, it seems.
Relying on friends and "taxis", I had to go through negative temperatures to get a simple can of soda.
After that, I could never complain about BART.
What kind of roads would they abandon? I didn't click through to all the references, but this article doesn't give any solutions.
By 2010-2012 or so, actual fuel use was ~50% of year-2000 forecast estimates.
I'm really curious about whether this has happened in San Francisco.
The same thing happened with Railroads during their heyday. I remember seeing an old railroad map with stops at all these small towns in Nebraska. Now, railroads are almost entirely commercial with very few passenger stops in small towns.
It makes sense that at some point you just don't have the need for so many roads. If more people move to urban or even suburban city centers, things like public transportation, ride sharing, Uber, and even self-driving vehicles start to make a lot of sense and cut down a lot on driving volume and the need for roads.
I'm no expert on the topic, but it seems to me that if heavily loaded trucks are causing a disproportionate amount of damage they should be taxed at a rate which allows for proper maintenance of those roads.
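For a sense of how disproportionate (illustrative numbers, not from the comment): pavement engineers often use the AASHO "fourth power" rule of thumb, under which road damage grows with roughly the fourth power of axle load:

    car_axle = 1.0    # tons per axle, hypothetical passenger car
    truck_axle = 9.0  # tons per axle, hypothetical loaded truck

    # Fourth-power rule of thumb: relative damage per axle pass.
    print((truck_axle / car_axle) ** 4)  # ~6561x the damage of the car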
> So in all it took 1.5 years to learn Coq well enough and to find the right abstraction, and 2 weeks to do the actual work.
That matches what other people have told me about Coq: the learning curve can be brutal, but you can be surprisingly productive once you get over it. (To be fair, this sort of problem also feels like a really good fit for Coq; proving theorems in more developed sub-fields of mathematics is much harder.)
A pretty cool project all around!
Notation "f <~ g" := (f ~> g) (at level 70).
It's been too long since I wrote some Coq! Gotta get back into it :)
And that's pretty much the simplest component you could create. The author is determined to reach their 10,000 lines of Elm: that'd just be, like, 10 reasonably complex components! ;) But that's just the HTML part of the pie... the actual code seems quite beautiful.
I envy this feature, and wish it were baked into Haskell! Idris also did this right, and I think OCaml and SML did too.
This is why figuring out an elegant, concise, and powerful set of mathematical models which apply to multiple domains, and then devoting effort to simplifying, organizing, and explaining those ideas in an accessible way is so important.
Incentives for researchers are mostly to push and prod at the boundaries of a field, but in my opinion mathematical ideas are only of marginal value in themselves; more important is the way they help us understand and interact with the physical universe, and for that building communities, developing effective languages and notations, codifying our understanding, and making it accessible both to newcomers and to outsiders is the most important task for a field, and perhaps for our society generally.
Just like with software projects or companies, the most success comes from helping a range of other people solve their problems and extend their abilities, not from making technically beautiful art projects for their own sake (not that there's anything inherently wrong with those).
Perhaps more generally, while theorem proving has overwhelmingly dominated pure mathematics and related fields for the past 80-100 years, and has been an important tool since Euclid, theorem proving is only one way of approaching the world, and in my opinion is a mere tool, not an end in itself. Just like simulation is a tool, or drawing pictures is a tool, or statistical analysis is a tool.
I like this bit from Feynman: https://www.youtube.com/watch?v=YaUlqXRPMmY
I really like the overall point of the post that mathematics once known can be forgotten or neglected, and mathematics written up for mathematics journals can be difficult to understand. Professor John Stillwell writes, in the preface to his book Numbers and Geometry (New York: Springer-Verlag, 1998):
"What should every aspiring mathematician know? The answer for most of the 20th century has been: calculus. . . . Mathematics today is . . . much more than calculus; and the calculus now taught is, sadly, much less than it used to be. Little by little, calculus has been deprived of the algebra, geometry, and logic it needs to sustain it, until many institutions have had to put it on high-tech life-support systems. A subject struggling to survive is hardly a good introduction to the vigor of real mathematics.
". . . . In the current situation, we need to revive not only calculus, but also algebra, geometry, and the whole idea that mathematics is a rigorous, cumulative discipline in which each mathematician stands on the shoulders of giants.
"The best way to teach real mathematics, I believe, is to start deeper down, with the elementary ideas of number and space. Everyone concedes that these are fundamental, but they have been scandalously neglected, perhaps in the naive belief that anyone learning calculus has outgrown them. In fact, arithmetic, algebra, and geometry can never be outgrown, and the most rewarding path to higher mathematics sustains their development alongside the 'advanced' branches such as calculus. Also, by maintaining ties between these disciplines, it is possible to present a more unified view of mathematics, yet at the same time to include more spice and variety."
Stillwell demonstrates what he means about the interconnectedness and depth of "elementary" topics in the rest of his book, which is a delight to read and full of thought-provoking problems.
PDF (of print from 1979): http://www.evolocus.com/Textbooks/Fleck1979.pdf
Will Myron Aub give us the feeling of power back?
"The Feeling of Power" by Isaac Asimov, on just this topic: http://downlode.org/Etext/power.html
I still think the unpublished-result problem, i.e. "publication bias", is a bigger issue, which I suppose is in a similar vein. Supposedly Google was working on that.
// Presumably meant to be pasted into the browser console to fix the page background:
document.querySelector('.date-outer').style.backgroundColor = 'white';
Joy.... "Like having your brains smashed out by a slice of lemon wrapped around a large gold brick."
I think the latest big thing I've learned in my career is that trying to fix broken input data silently is always bad. Fixing stuff silently isn't helpful for the callers, it's very difficult to do, and it produces additional code which also isn't running in the normal case, so it's much more likely to be broken.
Additionally, your callers will start to depend on your behaviour and suddenly you have what amounts to two separate implementations in your code.
I learned that while blowing up (though don't call exit if you're a library. Please.) is initially annoying for callers, in the end it will be better for you and your callers, because the code will be testable, correct and more secure (because there's less of it).
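A tiny illustration of the principle (the names here are hypothetical): validate loudly and raise, instead of silently "repairing" the input:

    def parse_port(value: str) -> int:
        port = int(value)  # raises ValueError on garbage instead of guessing
        if not 0 < port < 65536:
            raise ValueError("port out of range: %d" % port)
        return port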
Before someone provides the standard "submit a patch" retort, I'll note that the variable naming is in full compliance with https://www.openssl.org/about/codingstyle.txt even if the function length isn't. A quick sample of other files suggests the function length matches actual practice elsewhere, too.
Bug added: https://github.com/openssl/openssl/commit/da084a5ec6cebd67ae...
Bug removed: https://github.com/openssl/openssl/commit/2aacec8f4a5ba1b365...
Although that's just the committer: https://twitter.com/agl__/status/619129579580469248
"The vulnerability appears to exist only in OpenSSL releases that happened in June 2015 and later. That leaves a lot of Linux distributions relatively safe, since they haven't gotten an OpenSSL update in a while.
Red Hat, CentOS and Ubuntu appear to be entirely unaffected by this vulnerability, since they had no OpenSSL updates since June 2015."
Test for CVE-2015-1793 (Alternate Chains Certificate Forgery)

Chain is as follows:

    rootCA (self-signed)
      |
    interCA
      |
    subinterCA      subinterCA (self-signed)
      |                  |
    leaf ----------------+
      |
    bad

rootCA, interCA, subinterCA, subinterCA (ss) all have CA=TRUE
leaf and bad have CA=FALSE
subinterCA and subinterCA (ss) have the same subject name and keys

interCA (but not rootCA) and subinterCA (ss) are in the trusted store (roots.pem)
leaf and subinterCA are in the untrusted list (untrusted.pem)
bad is the certificate being verified (bad.pem)

Versions vulnerable to CVE-2015-1793 will fail to detect that leaf has CA=FALSE, and will therefore incorrectly verify bad.
If I understand the advisory correctly then this means that somebody could set up a webserver with a specially-crafted certificate and pretend to be somebody else, assuming that the client is running a vulnerable version of OpenSSL.
Is that right? I wish they would write these advisories in a slightly more helpful fashion.
OpenSSL Security Advisory [9 Jul 2015]
Alternative chains certificate forgery (CVE-2015-1793)
During certificate verification, OpenSSL (starting from version 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate.
This issue will impact any application that verifies certificates, including SSL/TLS/DTLS clients and SSL/TLS/DTLS servers using client authentication.
This issue affects OpenSSL versions 1.0.2c, 1.0.2b, 1.0.1n and 1.0.1o.
OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d
OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p
This issue was reported to OpenSSL on 24th June 2015 by Adam Langley/David Benjamin (Google/BoringSSL). The fix was developed by the BoringSSL project.
As per our previous announcements and our Release Strategy (https://www.openssl.org/about/releasestrat.html), support for OpenSSL versions 1.0.0 and 0.9.8 will cease on 31st December 2015. No security updates for these releases will be provided after that date. Users of these releases are advised to upgrade.
URL for this Security Advisory: https://www.openssl.org/news/secadv_20150709.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/about/secpolicy.html
Some more details & patching guide here: https://ma.ttias.be/openssl-cve-2015-1793-man-middle-attack/
In order to exploit the attack on Hex, you need to find a CA that will directly issue certificates off of a certificate in a trust store. Apparently, this is not the recommended policy for CAs. So I made this tweet: (https://twitter.com/benmmurphy/status/613733887211139072)
'does anyone know a CA that signs directly from their root certs or has intermediate certs in trust stores? asking for a friend.'
And apparently there are some CAs that will do this. In the case of Hex, I think the chain you need to create looks something like this:
    RANDOM CERT SIGNED BY ISSUER NOT IN TRUST STORE
      |
      V
    VALID_CERT_SIGNED_BY_CERT_IN_TRUST_STORE (effectively treated as CA bit set)
      |
      V
    EVIL CERTIFICATE SIGNED BY PREVIOUS CERT
*) Alternate chains certificate forgery

   During certificate verification, OpenSSL will attempt to find an
   alternative certificate chain if the first attempt to build such a
   chain fails. An error in the implementation of this logic can mean
   that an attacker could cause certain checks on untrusted certificates
   to be bypassed, such as the CA flag, enabling them to use a valid
   leaf certificate to act as a CA and "issue" an invalid certificate.
   This issue was reported to OpenSSL by Adam Langley/David Benjamin
   (Google/BoringSSL).
   [Matt Caswell]
See https://mullvad.net/en/v2/news for more details.
The typical issue at sea level is from neutrons hitting silicon atoms. If a neutron hits a nucleus somewhere in the microprocessor circuitry, the nucleus suddenly recoils, basically causing an ionizing trail several microns in length. Given that transistors are now measured in tens of nanometers, the ionizing path can cross many nodes in the circuit and create some sort of state change. Best case, it happens in a single bit of a memory that has error correction and you never notice it. Worst case, it causes latchup (a power-to-ground short) in your processor and your CPU overheats and fries. Generally you would just notice it as a sudden error that causes the system to lock up; you'd reboot, it would come back up and be fine, leaving you with a vague thought of, "That was weird".
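For a back-of-the-envelope sense of why that error correction matters (ballpark, illustrative numbers, not from the comment): sea-level SRAM soft-error rates are often quoted on the order of ~1000 FIT per Mbit, where 1 FIT is one failure per 10^9 device-hours:

    fit_per_mbit = 1000          # illustrative sea-level SRAM soft-error rate
    cache_mbit = 64              # a hypothetical 8 MB on-die cache
    total_fit = fit_per_mbit * cache_mbit

    hours_per_upset = 1e9 / total_fit
    print(hours_per_upset / 24)  # ~651 days between bit upsets, before ECC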
As others mentioned, most of these problems are caught when testing the chips. Most of the transistors on a chip are actually used for caching or RAM, and in those cases the chips have built in methods for disabling the portions of memory that are non-functional. I don't recall any instances of CPUs/firmware doing this dynamically, but I wouldn't be surprised if there are. A lot of chips have some self diagnostics.
Most ASICs also have extra transistors sprinkled around so they can bypass and fix errors in the manufacturing process. Making chips is like printing money where some percentage of your money is defective. It pays to try and fix them after printing.
Also, as someone who has ordered lots of parts there are many cases where you put a part into production and then find an abnormally high failure rate. I once did a few months of high temperature and vibration testing on our boards to try and discover these sorts of issues, and then you spend a bunch of time convincing the manufacturer that their parts are not meeting spec.
Fun times... thanks for the trip down memory lane.
The last time I worked with some hardware folks speccing a system-on-a-chip, they were modeling device lifetime versus clock speed.
"Hey software guys, if we reduce the clock rate by ten percent we get another three years out of the chip." Or somesuch, due to electromigration and other things, largely made worse by heat.
Since it was a gaming console, we wound up at some kind of compromise that involved guessing what the Competition would also be doing with their clock rate.
But the failure rate after initial burn-in is phenomenally low. They're solid state devices, after all, and the only moving parts are electrons.
So, simplicity and hard work by fab designers is 90+% of it. There's whole fields and processes dedicated to the rest.
This seems to be a nice overview of aging effects: http://spectrum.ieee.org/semiconductors/processors/transisto....
As geometries fall, the effects of "wear" at the atomic level will go up.
Faults don't always manifest as a binary pass/fail result; as chip temperatures increase, transistors that have faults will "misfire" more often. As long as that threshold temperature is high enough, these lower-grade chips can be sold as lower-end processors that never in practice reach those temperatures.
I am not aware of any redundancy units in current microprocessor offerings, but it would not surprise me; Intel did something of this nature with their 80386 line, but it was more of a labeling thing ("16 BIT S/W ONLY").
Solid state drives, on the other hand, are built around this protection; when a block fails after so many read/write cycles, the logic "TRIM"s that portion of the virtual disk, diminishing its capacity but keeping the rest of the device going.
Yes, generally speaking it would be. Depending on where it is inside the chip.
> Wouldn't a single transistor failing mean the whole chip stops working? Or are there protections built-in so only performance is lost over time?
Not necessarily. It might be somewhere that never or rarely gets used, in which case the failure won't make the chip stop working. It might mean that you start seeing wrong values on a particular cache line, or that your branch prediction gets worse (if it's in the branch predictor) or that your floating point math doesn't work quite right anymore.
But most of the failures are either manufacturing errors meaning that the chip NEVER works right, or they're "infant mortality" meaning that the chip dies very soon after it's packaged up and tested. So if you test long enough, you can prevent this kind of problem from making it to customers.
Once the chip is verified to work at all, and it makes it through the infant mortality period, the lifetime is actually quite good. There are a few reasons:
1. there are no moving parts so traditional fatigue doesn't play a role
2. all "parts" (transistors) are encased in multiple layers of silicon dioxide so that you can lay the metal layers down
3. the whole silicon die is encased yet again in another package which protects the die from the atmosphere
4. even if it was exposed to the atmosphere, and the raw silicon oxidized, it would make silicon dioxide, which is a protective insulator
5. there is a degradation curve for the transistors, but the manufacturers generally don't push up against the limits too hard because it's fairly easy and cheap to underclock and the customer doesn't really know what they're missing
6. since most people don't stress their computers too egregiously this merely slows down the slide down the degradation curve as it's largely governed by temperature, and temperature is generated by a) higher voltage required for higher clock speed and b) more utilization of the CPU
Once you add all these up you're left with a system that's very, very robust. The failure rates are serious but only measured over decades. If you tried to keep a thousand modern CPUs running very hot for decades you'd be sorely disappointed in the failure rate. But for the few years that people use a computer and the relative low load that they place on them (as personal computers) they never have a big enough sample space to see failures. Hard drives and RAM fail far sooner, at least until SSDs start to mature.
That's why our boxen have power-on self tests.
> Terms used in the user manual: "Tetriminos" not "tetrominoes" or "tetrads" or "pieces", letter names not "square" or "stick", etc.
The spec doesn't meet the spec.
> Cavada wanted all European nations to adopt laws that may require permission from a building's architect before an image is published commercially.
I would like a list of architects supporting such a law. They deserve to have all photographs (and mentions) of their buildings removed from the internet.
It's called a democracy for a reason - you've got to have fair representation from all angles, not just the most sensible. People are getting mightily agitated every time a strange law is brought to the table recently. I blame Twitter. /s
And then I read the article, and it truly referenced pictures.
What idiot thought legislating fucking long pictures was a good idea?
"Occupation of the Kasbah in Tunis and of the Syntagma Square in Athens, siege of Westminster in London during the student movement of 2011, encirclement of the parliament in Madrid on September 25, 2012 or in Barcelona on June 15, 2011, riots all around the Chamber of Deputies in Rome on December 14, 2010, attempt on October 15, 2011 in Lisbon to invade the Assembleia da Republica, burning of the Bosnianpresidential residence in February of 2014: the places of institutional power exert a magnetic attraction on revolutionaries. But when the insurgents manage to penetrate parliaments, presidential palaces, and other headquarters of institutions, as in Ukraine, in Libya or in Wisconsin, its only to discover empty places, that is, empty of power, and furnished without any taste. Its not to prevent the 'people' from 'taking power' that they are sofiercely kept from invading such places, but to prevent them from realizing that power no longer resides in the institutions. There are only deserted temples there, decommissioned fortresses, nothing but stage setsreal traps for revolutionaries. The popular impulse to rush onto the stage to find out what is happening in the wings is bound to be disappointed. If they got inside, even the most fervent conspiracy freaks would find nothing arcane there; the truth is that power is simply no longer that theatrical reality to which modernity accustomed us. [...]But what is it that appears on euro banknotes? Not human figures, not emblems of a personal sovereignty, but bridges,aqueducts, archespieces of impersonal architecture, cold as stone. As to the truth about the present nature of power, every European has a printed exemplar of it in their pocket. It can be stated in this way: power now resides in the infrastructures of this world. Contemporary power is of an architectural and impersonal, and not a representative or personal, nature."(Invisible Committee, To Our Friends, Semiotext(e) 2014)
But if they ever release a tool that is inspired by the Brazil build system, pack up and run for the hills. When it takes a team of devs over two years to get Python to build and run on your servers, you know your frankenstein build system is broken. It could be replaced by shell scripts and still be orders of magnitude better. Nobody deserves the horror of working with that barf sandwich.
It means they don't even support their own new "Git" product, AWS CodeCommit.
You'll pay $1 per active pipeline per month (the first one is available to you at no charge as part of the AWS Free Tier). An active pipeline has at least one code change move through it during the course of a month.
Does this mean that every time you run a session you pay $1, no matter how many stages the session has (pull, compile/build, test (multiple tests) and deploy)?
We'd used GitLab for over a year internally but, as I've mentioned previously, it became a pain to maintain. So we switched to GitHub for our private "important" projects and turned off our GitLab instance (other reasons contributed too, mind). Our version was 6.7 or something up until today.
Today we realised we should run GitLab internally again for non-critical repositories, since our networking makes it a pain to give external access to servers and we can't access it out of the office. I updated us to 7.12 CE and I kind of regret it.
The UI is so complicated now. Whilst there are good features that we like, it's so hard to navigate to where you want to be. I think this is down to the "contextual" sidebar. I really do prefer GitHub's UI for repo administration and usage, which is a shame.
Sure, the colours are nice in GitLab but it's far from functional. My colleagues felt the same way too.
Also (for those at GitLab) your Markdown renderer is still not rendering Markdown correctly in all cases...
Anyway, not to take away from the funding - it's excellent news!
Currently, we have to create milestones in each of the repos and assign issues to those milestones. It's really a hassle. We cross-reference commits a lot in the issues, and this is the reason why we don't create an "empty" repo simply for common issues. Unless there is some way to say something like "Fixes commonissuetracker#43".
Thanks, a very happy gitlab user
* learn what the project is: a short description of what it does, how to install it, where to find more documentation
* look at / search the files or clone the repo
* search bug reports or create a new bug report
Your default project page looks quite similar to Gitorious, which looks more like a place to just host your repository and not a place to interact with the project. Bitbucket's default looks way better, for example, and GitHub's is quite good too.
My suggestion to make Gitlab fit better into my workflow:
* default page/tab for project root should be configurable, either on per project or per user basis: I'd like to have the README as default for example, the Activity page by default interests me less.
* there should be a tab for issues on the default page; it's more important than seeing the activity, IMHO
* you've got the clone URL in an easily accessible place, good!
* the Files view is quite similar to Github's (good!), but I can't figure out how to search (either fulltext or filename)
* I don't see a button to create a new issue (I'm not logged in, should I login first? Github has a new issue button that takes you to login)
* how do I search in issues (fulltext?)
* how do I search for project names, or inside projects/issues globally?
* the default project page should somehow highlight, or focus on making easy and obvious, the main ways you'd interact with the project; if all features are shown in equal style, it feels somewhat cluttered and overloaded.
P.S.: should I open a feature request about these on the gitlab site?
I have been using a GitLab instance on my personal server for about 2 years and have been very happy with it.
Recently (finally!) we switched our research group over from (a very, very old version of) Redmine, and you can't imagine my joy when that happened! I think never before in my life has migrating wiki pages and issues felt so good.
Last but not least it is encouraging to see a European software startup thriving and growing like you do. Nothing against the great products from SV but a little geographical competition never hurt nobody. Right? ;)
Keep up the great work. Grüße aus Deutschland / Greetings from Germany
But I've got to say GitLab is just incredible to use now. It's really nice, and I now use it over Bitbucket for my private repositories. I still use GitHub for open source (that's going to be a hard barrier for them to break through if they really want to), but I'm a big fan.
So congrats on the round! This is technically the second seed round, right? Or does YCombinator not really count as a seed anymore?
For those who don't get the joke, https://www.google.com/search?num=40&es_sm=119&q="GitLab+CEO...