> The planets also are very close to each other. If a person were standing on one of the planets' surfaces, they could gaze up and potentially see geological features or clouds of neighboring worlds, which would sometimes appear larger than the moon in Earth's sky.
> In contrast to our sun, the TRAPPIST-1 star, classified as an ultra-cool dwarf, is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun.
Accelerating at 1g for the first half and decelerating at 1g for the second half, the traveler would experience 7.3 years of proper time. For outside observers it would take 41.8 years, with a max speed of 0.998c.
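For anyone who wants to sanity-check those figures: they follow from the standard relativistic-rocket formulas for a flip-and-burn trip (accelerate at a over the first half of the distance D, decelerate over the second half). Assuming D ≈ 40 light-years to TRAPPIST-1 and a = 1g ≈ 1.03 ly/yr² (both round-number assumptions on my part, with c = 1 ly/yr), these reproduce the quoted numbers to within rounding:

```latex
\tau_{\text{ship}} = \frac{2c}{a}\,\operatorname{arccosh}\!\left(1 + \frac{aD}{2c^{2}}\right) \approx 7.3~\text{yr}
\qquad
t_{\text{observer}} = 2\sqrt{\left(\frac{D}{2c}\right)^{2} + \frac{D}{a}} \approx 42~\text{yr}
\qquad
\frac{v_{\max}}{c} = \tanh\!\left(\frac{a\,\tau_{\text{ship}}}{2c}\right) \approx 0.9989
```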
If you had a near-perfect hydrogen-to-helium fusion engine, it'd take about 6 million tons of fuel (about the mass of the Pyramids of Egypt, or 2,000 Saturn V rockets).
Given that these planets are so close to each other and interact with each other gravitationally, how likely is it that their orbital arrangement is stable over geological time?
TRAPPIST-1 is an ultracool dwarf star with approximately 8% of the mass and 11% of the radius of the Sun. It has a temperature of 2550 K and is at least 500 million years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K.
Due to its low mass, the star could keep shining for up to 45 trillion years ...
Is the star too young for alien-life hopes?
"1. A planet is a celestial body that(a) is in orbit around the Sun"
Makes me wonder how much this definition of a planet was motivated by the desire to be able to give elementary schoolers a nice small set of things to memorize.
Edit: actually, I might be incorrect about this. The resolution is titled "Definition of a Planet in the Solar System", and I'm not sure the IAU actually has a definition of what a "planet" would be outside the solar system, but they may be open to the idea that they exist :)
I think our "best shot" at that right now, is to digitize humans. If we could store consciousness in binary, we could then transmit it at the speed of light (like we do with data every day!). You'd need a receiver on the distant planet though. So, your first 'payload' would have to be the receiver, and it would need to travel the slow old fashioned way :(
1. Where are they all?
2. How will humanity react to the appearance of one of them? Will we finally stop our fights inside this sandbox and focus on the challenges that we are all facing together?
No problem, buddy. We've got the best ship in town. If you've got the dough I'm sure me and my buddy Chewy can work something out for ya.
He had no proof for his intuition, though, so I cannot entirely cherish him as a martyr for modern cosmology.
So when the rest of the universe cools off and stars start to die, TRAPPIST-1 keeps on shining and shining... We need to get there. But first, let's get rid of our genocidal tendencies.
> The density of matter in the interstellar medium can vary considerably: the average is around 10^6 particles per m^3, but cold molecular clouds can hold 10^8–10^12 per m^3
But I imagine that interstellar gas would be easier to fly through than interstellar sand. Do we know the composition of matter in interstellar space?
It'd be amazing if the number was above 1 (or terrible if you believe in the Great Filter hypothesis).
The presence of a large, Jupiter-sized planet in a system is thought to be helpful for deflecting asteroid impacts. Obviously we are talking 'to scale' given TRAPPIST-1 itself is around the size of Jupiter!
Learning more about this system is going to be fascinating though - the supposed inward migration of these planets may even help us understand more about how our own system formed.
Apparently they are building an FTL engine: http://www.spacewarpdynamicsllc.com/latest-news
I thought it'd be nice to know...
So I know nothing about this kind of star, but I was reading something last year about risks to life. There are of course the obvious ones, like being hit by comets and asteroids, gamma-ray bursts and so on.
But there's also CMEs (coronal mass ejections). A CME from the Sun directly hitting the Earth would be devastating. The chance of getting hit by a CME is inversely proportional to your distance from the star just because you occupy a smaller arc from the star's perspective.
I wonder if this kind of star and having the worlds so close would pose a huge threat from CMEs. Does this kind of star even have the same number of CMEs as say the Sun?
The magnification could be large enough to analyze features on exoplanets. My dream would be to build a telescope large enough that, with the help of the gravitational lens of the Sun, we'd have a Google Earth-like view of the exoplanets.
The "great filter" hypothesis is essentially that the rarity of intelligent life has to be explained by some parameter of the Drake equation, and that whatever the "small" parameter is is either in our past or in our future.
If the "great filter" is the rarity of habitable worlds, then clearly we don't need to fear it, since we already found one. But if habitable worlds aren't rare, then it's more likely it lies in our future (e.g. global thermonuclear war, plague, difficulty of space travel, etc).
Thus things like discovery of exoplanets, bacteria on mars, etc should make us rather concerned.
Tesla's car business is very profitable:
Tesla also reported an automotive gross margin, excluding SBC and ZEV credit (non-GAAP), of 22.2% in the quarter, up from 19.7% a year ago, but down from 25.0% in Q3...
Looking at the future, Tesla said it expects to deliver 47,000 to 50,000 Model S and Model X vehicles combined in the first half of 2017, representing vehicle delivery growth of 61% to 71% compared with the same period last year.
and reinvesting operating profits back into the business:
The company also expects to invest between $2 billion and $2.5 billion in capex ahead of the start of Model 3 production and continues "to focus on capital efficiency while also investing in battery cell, pack and energy storage production at Gigafactory 1." It also forecast that both Model 3 and solar roof launches are on track for the second half of the year.
This company is firing (no pun intended) on all cylinders
But honestly, let him have it, I say. He's earning it. He's taking huge risks and succeeding where others weren't willing to try.
Loss was 69 cents vs 53 cents expected.
Revenues beat estimates, though.
> Later this year, we expect to finalize locations for Gigafactories 3, 4 and possibly 5 (Gigafactory 2 is the Tesla solar plant in New York).
Should be an exciting conference call. (5:30pm EST)
For anyone interested in trying out F# online, it looks like Microsoft Research has such a tool: http://www.tryfsharp.org/Create. Unfortunately it looks like you have to create an account of some sort to share scripts, so these alternatives might be better:
The article is actually more of an annotated version of the F# Tutorial Script which ships inside Visual Studio 2017 (also in other versions of Visual Studio, but the Tutorial script is a bit different there).
You can get started with F# just about everywhere:
* Visual Studio
* Visual Studio for Mac
* Visual Studio Code (via Ionide plugins)
* .NET Core and the .NET CLI (`dotnet new console -lang F#`)
* Azure Notebooks (Jupyter in the browser via Azure) 
If you have a short attention span, I recently started posting sped-up screencasts on Twitter that range between 1 and 2 minutes: https://twitter.com/FSharpCasts
If there's a feature you want to see, let me know. I take requests.
I might be slightly biased but in my opinion it's one of the best programming articles I have ever read.
The "elid" intrigued me and I tried to look it up but couldn't find anything. Is this just a typo?
The best analogy would be a simple web page, which usually contains hyperlinks that a client can follow to discover new information. Unfortunately, web developers' understanding of REST ends with HTML, and they re-invent the wheel, badly, every time they create an ad hoc JSON-over-HTTP service.
There is a standardized solution for machine-to-machine REST: JSON-LD, with best practices to follow, and even some formalized specs. To Google's credit, they are now parsing JSON-LD in search results, which is much nicer to read and write than the various HTML-based micro-data formats.
On a related note, REST has nothing to do with pretty URLs, naming conventions, or even HTTP verbs. That is to say, it is independent of the HTTP protocol, but maps quite naturally to it.
But I am still looking for some books on good API design; does anybody have any recommendations?
I've started teaching some people in the team how to use gRPC, and we're def going to be using it where permissible on client projects.
It took quite a bit of work for me to get a native Clojure client working to connect to the google cloud SDK. That was after wrestling with jar-hell around gRPC and calling the Java client from clojure, which is decidedly not pretty.
Also, the Protocol Buffers link in the 3rd paragraph is 404.
HTTP Method DELETE. Payload: empty.
Apart from HTTP Basic Auth, but please don't use that.
He basically works his way through history to demonstrate the development of the modern experimental method and extrapolates that society would be best served by extending the scientific method to many more aspects of society (social and cultural issues, etc).
(Just to preemptively clarify: The bad quality I mentioned was most noticeable when compared to a non-POD book. On its own, it looks OK-ish and you might not think anything of it, but when you look at it next to another book you can tell.)
The book itself seems pretty neat though! I'm a PDFs for tech books kind of guy, personally.
I'd recommend it.
Anyone have recommendations for a book or tutorial for creating a REST API with Go?
Is there a PROMO CODE available?
So, yay escape hatches!
At some point it was going to be a pattern (Java + jruby/ola bini's language)...
An ex-colleague pointed out to me on Twitter today that there are other APIs out there that have developed a concept similar to Stripe's `Idempotency-Key` header, for example the "client tokens" used in EC2's API. To my knowledge there hasn't really been a concerted effort to standardize such an idea more widely, but I might be wrong about that.
By the way, WebDAV extended this mechanism with a general If header for all your precondition needs. I'm kinda glad it didn't catch on though...
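To make the idea concrete, here's a minimal sketch of how a client might use such a header; the `Idempotency-Key` name follows Stripe's public docs, but the endpoint, payload, and everything else here are made up for illustration:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class IdempotencySketch
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("https://api.example.com/") };

        // One key per logical operation; reuse the SAME key when retrying,
        // so the server can recognize and deduplicate the repeated request.
        var idempotencyKey = Guid.NewGuid().ToString();

        var request = new HttpRequestMessage(HttpMethod.Post, "charges")
        {
            Content = new StringContent("{\"amount\":1000}", Encoding.UTF8, "application/json")
        };
        request.Headers.Add("Idempotency-Key", idempotencyKey);

        var response = await http.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}
```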
Do the Internet a favor, and file an issue with your favorite HTTP library asking them to implement exponential backoff.
I haven't found anything in JS that does this properly though. Do people really just write apps that crap out upon the first HTTP request failure?
The best library I have come across is actually Square's OkHttp (the payment-processing companies seem to be the only ones getting this right).
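The question above is about JavaScript specifically, but for anyone unsure what "backoff done properly" means, here's a rough sketch of exponential backoff with jitter (written in C# purely for illustration; the retry limits and status-code choices are my own assumptions, not from any particular library):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class BackoffSketch
{
    static readonly HttpClient Http = new HttpClient();
    static readonly Random Rng = new Random();

    // Retry transient failures with exponentially growing, jittered delays.
    static async Task<HttpResponseMessage> GetWithRetryAsync(string url, int maxAttempts = 5)
    {
        for (int attempt = 0; ; attempt++)
        {
            HttpResponseMessage response = null;
            try
            {
                response = await Http.GetAsync(url);
                // Only retry on server errors (5xx) and rate limiting (429); anything else is final.
                if ((int)response.StatusCode < 500 && (int)response.StatusCode != 429)
                    return response;
            }
            catch (HttpRequestException)
            {
                // Network-level failure: treat it as retryable.
            }

            if (attempt + 1 >= maxAttempts)
                return response ?? throw new HttpRequestException($"Gave up on {url} after {maxAttempts} attempts.");

            // Exponential delay (1s, 2s, 4s, ... capped at 16s) plus random jitter,
            // so a fleet of clients doesn't retry in lockstep.
            var delay = TimeSpan.FromSeconds(Math.Min(Math.Pow(2, attempt), 16));
            await Task.Delay(delay + TimeSpan.FromMilliseconds(Rng.Next(0, 1000)));
        }
    }
}
```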
Happy to answer questions.
EDIT: direct link to the repo https://github.com/beakerbrowser/dathttpd. We also have Prometheus/Grafana integration which is pretty handy; it's currently the easiest way to watch the health of a swarm.
Additionally, I've been running American Fuzzy Lop (with afl.rs) on cpp_demangle overnight. It found a panic involving unhandled integer overflow, which I fixed. Since then, AFL hasn't triggered any panics, and it's never been able to find a crash (thanks Rust!), so I think cpp_demangle is fairly solid and robust.
That's what I like to see. Targeted useful reimplementations in Rust that play well to its strengths. In this case, as a double benefit to both the Rust ecosystem and to anyone that wants a robust demangling library.
Switching languages is cool, but the Rust code is actually longer and still uses a hand written parser, so how can you be sure it is any more correct or won't eat all your memory?
This is only true of UNIX system linkers. Before the wave of FOSS and UNIX clones, it was common for each compiler to have its own language-specific linker.
Not that I have a use case in mind or anything, just curious.
You stay classy, Microsoft.
> It's not just the grammar that's huge, the symbols themselves are too. Here is a pretty big mangled C++ symbol from SpiderMonkey [...] That's 355 bytes!
Here's a >4kB symbol I encountered while liberating some ebooks from an abandoned DRM app:
tetraphilia::transient_ptrs<tetraphilia::imaging_model::PixelProducer<T3AppTraits> >::ptr_type tetraphilia::imaging_model::MakeIdealPixelProducer<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 0ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 
0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> > > >, tetraphilia::Terminal> >, T3AppTraits, tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, tetraphilia::imaging_model::SeparableOperation<tetraphilia::imaging_model::ClipOperation<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > > >(tetraphilia::ArgType<tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 0ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, 
tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> > > >, tetraphilia::Terminal> > > >, T3AppTraits::context_type&, tetraphilia::imaging_model::Constraints<T3AppTraits> const&, tetraphilia::imaging_model::SeparableOperation<tetraphilia::imaging_model::ClipOperation<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > >, tetraphilia::imaging_model::const_GraphicYWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > const*, tetraphilia::imaging_model::const_GraphicYWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > const*, tetraphilia::imaging_model::const_GraphicYWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > const*, tetraphilia::imaging_model::SegmentFactory<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >*)
In particular, it needs to be normal enough that a significant fraction of all travelers do it. The feature can't be marketed as a protection for at-risk travelers, but as a common-sense safety mechanism useful to all travelers.
I think it's crazy that people walk around with phones that have access to years of email communications, and that even in the happiest timeline we could have ended up on after 2016, features like this are long overdue.
Firstly, social media's only incentive is to make your data as widely available as possible (in the interests of ad revenue), and maintain a good relationship with the government in their jurisdiction. Every other existing "privacy" setting on Facebook, LinkedIn, etc is already obfuscated to the point of unusability, for this reason, and "travel mode" would be no different.
Secondly, let's imagine that FB did implement a watertight "travel mode" that hid your embarrassing data effectively while you were travelling. Third parties would just start capturing and storing posts while you have "travel mode" off, and sell that to CBP, or whoever else wants to pay for it.
"$Company noticed an intrusion by the US Federal Agents, Border Guards, and have marked this as a compromised account. Please have your designated friend, if set, to authenticate your account."
It's now out of the person's sphere to fix, even if coerced. And companies can defend this because violating the Acceptable Use Agreement is itself breaking a federal law: the CFAA.
That, and it seems the only way to stop these issues now is for citizens to jam them up in legal limbo.
The real solution is a political one, where we speak up and legislate and litigate that the 4th Amendment applies at the border.
Do they automatically suspect someone who denies using it and what would follow in that event?
The other form of this which would make sense: worksafe mode or public mode. If you're logging into your facebook/twitter account from a public computer, perhaps it doesn't have as full and unlimited access, and doesn't have access to non-reversible account actions, and strongly logs out. If you're logging in from a place defined as "work", it doesn't have notifications, certain groups, etc. (the "giving a meeting presentation on your laptop when a racy notification from spouse pops up" problem).
Not that it'd do much. If the border agents really want to see one's social media accounts, I have zero doubt they can get that data from other government agencies. In fact, they probably already have it. It sounds to me like they're just trying to assert their power and dominance over the people whose accounts they are demanding access to as a way to get off on intimidating others. Pretty typical behavior by law enforcement officers the world over.
However, foreigners would likely be turned back. Because using travel mode is arguably evidence of hiding stuff. And citizens might still be detained. It seems unlikely that they'd send agents to homes, to turn off travel mode. But it's arguably not impossible.
Google and/or Apple could add this as a new menu toggle, similar to Airplane mode. Once switched on while in an airport, it prevents the device from being unlocked. Then, using geofencing, once the device leaves the airport it unlocks and can be used again.
Technology cannot solve the problem we currently face with erosion of privacy at the border! These clever tricks trying to get around the issue only kick the problem downfield, and likely won't work effectively. If they find out you have travel mode enabled, you may be denied entry or worse (note the comment from @mholt) - they would just detain you until the time lock runs out.
Basically, you can turn over the phone to border guards in a way which gives them access to what is on the phone, but which logs actions, and allows you to easily revert/revoke any changes they make. (This would also be a mode you'd turn over to an employer demanding access).
Potentially this mode might also block access to certain things (secret FB groups, archives over a certain age, some chat logs), but would otherwise be fully functional.
The benefit would mainly be that all actions taken would be logged and reportable, as a way to try to keep authorities from poking in places they shouldn't. It seems they are NOT mostly using forensic imaging tools, but logging in directly on the devices, at least right now, so there would be some value.
The only way this is going to change is with a change in the law.
- We first had PINs. Easily cracked/defeated.
- We then had passwords. Provide them or go back home where you came from.
- We might have travel mode. Defeated or made illegal. Go back home.
I think the only resolution to this is political. Make these searches go away - worldwide - as we near a very bad precedent.
Border security officer: I see you have trip mode on. Turn it off, and give us the phone, or you're not getting into the country.
Reminds me a little of this: https://xkcd.com/538/
Which is my way of saying that those automatic queries will likely start to annoy folks very soon, and they'll find a way not to deal with them.
For example, no more awkward `e["code"]` where `e.code` is just as valid.
We (tyk.io) use Slack notifications with Zapier for various searches on HN, Reddit, Twitter, Stack Overflow or even Google Alerts (the last one uses the built-in Slack `/feed` feature).
In your Bitcoin example, one would be notified if the displayed content of an article on the front page (or say top N, or greater than M points, ...) includes the word "Bitcoin".
Bonus points if you can do complex or negative matches. Ex: +database -mongodb
Good job so far!
Fastlane solves part of the problem, but I think you solve it all.
By the way, I've found this tweet: https://twitter.com/wercker/status/431757646333751296 where a similar company explains that a service like this one is not allowed by Apple's EULA: http://store.apple.com/Catalog/US/Images/MacOSX.htm
Did you have any problem with Apple in the past?
- It's the same runner which is used for running the builds on https://www.bitrise.io/
- You can run the same build config (YML) with it both on https://www.bitrise.io/ and on your Mac/Linux
- CLI [home page](https://www.bitrise.io/cli) | [docs](http://devcenter.bitrise.io/bitrise-cli/) | [GitHub](https://github.com/bitrise-io/bitrise)
(disclaimer: CTO here)
I run my current CI/CD pipeline through jenkins for Rails, and I would like to keep everything running through jenkins. Is there a way to do this currently with bitrise?
Keep Fucking Fire!!!
All the while, you have a leadership that continues to say "Everything is OK! This is good news overall for the company! Let's continue to work hard in solidarity with each other! We're one big happy family!" It's important for people (in general) to see past that kind of management bullshit, look at the numbers (profitability, customers, etc.) and make a very informed decision about their future, lest they get caught unaware by "restructuring".
Disclosure: Obviously, a former Racker. Loved the peers I worked with and the company culture. Management was (is?) a total shit-show. I was lucky enough to see the future and take a better job before the company went private.
The only other issue that I do have with them is that the "history" for messages has a really stupid encoding. Basically, when a message does fail, or get marked as spam, we have a web hook set up. This works great. However, when looking at the message source, it has a dumb encoding on carriage returns, and colons. It's not the biggest issue, but still annoying.
We ended up making a little appliance to resend failed emails for our sales guys. Basically, instead of being a straight copy/paste, it had to run `Replace("=\r\n", "").Replace("=0D=0A", "").Replace("=3D3D", "").Replace("=3D", "=")` to undo the quoted-printable escaping. Ideally we could have a "resend" button in the console.
I haven't looked at their outbound service in a while because I've been so impressed with Sendgrid's dual DKIM CNAME setup so that they can handle automatically rotating your DKIM keys without bothering you...that it's really hard to even think about trying somebody else.
Yet with Mailgun everything always works, the API is super simple and so is integration. And their free tier lets you handle 10,000 messages per month!!
Maybe they can use some marketing, because the product is great!
Mailgun has one of the best inbound/outbound API combinations available, great for companies with a strong developer team.
I've avoided MailGun & SendGrid entirely for various reasons.
Hope they add support for tracking against SSL/HSTS sites.
Good luck to Mailgun! DFTBA
This gave me a mini heart attack... I guess I don't think about VCs and investments as much as the average reader of these things.
...except Bob over there, but he's kind of a grumpy old man anyways, so you can just ignore him.
Pro tip: just write "The team is very excited" or something like that; it saves that tiny bit of awkwardness.
I hope the spin off lets them focus on improving the service and the surrounding experience - I definitely would rather they succeed than go belly up. As it stands if someone pops up with similar offerings, I'd definitely check them out. But mailgun isn't impossibly far off from creating an exceptional product. The question is: with this change, will they?
- From a happy customer.
Google e.g. "interval BCa"
OP, how does this compare to scikits.bootstrap feature/performance-wise?
What is specific to C# (applies to Java too) is having an implicit mutex/condition pair in each object. I think this is a terrible design mistake because it's not very practical and it's confusing to newbies (a big stumbling block for students when I studied concurrency at the uni).
It's not very practical because in most concurrency tasks I've dealt with (not using C# or Java), there are typically more conditions than there are mutexes (typically 2-4 + O(n) conditions per mutex). In Java/C# land, the typical solution is to have a complex conditional expression, all the threads spinning on a .wait() there, and then overuse of .notifyAll() instead of .notify(), causing lots of spurious wakeups and wasting precious CPU cycles.
It's confusing for the same reason: it's easy to go "I'll solve this by adding another mutex". Unfortunately this is seldom the correct solution (to any problem), and a much better result would be achieved by adding more condition variables to wait on.
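For illustration (my own sketch, not taken from the comment above), here is roughly what that pattern looks like with C#'s built-in per-object monitor: a bounded queue really has two conditions ("not full" and "not empty"), but there is only one wait set per lock object, so every state change has to PulseAll and every waiter has to re-check its predicate in a loop.

```csharp
using System.Collections.Generic;
using System.Threading;

// A bounded queue written against C#'s implicit per-object monitor.
// "Not full" and "not empty" are two logically separate conditions, but the
// built-in Monitor gives us only one wait set per lock object, so producers
// and consumers wait on the same object and we must PulseAll on every change,
// waking threads that often just re-check their predicate and go back to sleep.
class BoundedQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();
    private readonly object _lock = new object();
    private readonly int _capacity;

    public BoundedQueue(int capacity) => _capacity = capacity;

    public void Enqueue(T item)
    {
        lock (_lock)
        {
            while (_items.Count == _capacity)   // loop guards against spurious wakeups
                Monitor.Wait(_lock);
            _items.Enqueue(item);
            Monitor.PulseAll(_lock);            // can't target only the consumers
        }
    }

    public T Dequeue()
    {
        lock (_lock)
        {
            while (_items.Count == 0)
                Monitor.Wait(_lock);
            var item = _items.Dequeue();
            Monitor.PulseAll(_lock);            // can't target only the producers
            return item;
        }
    }
}
```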
I wouldn't mind too much about a design mistake in a language I almost never use (Java/C#) if it weren't the de facto language for learning about concurrency at universities. This has produced so many engineers with a twisted view of concurrency. I understand that "Java is easy" and "C is hard", but when we're already talking about memory models, multi-core, cache coherency and atomicity, C-the-language isn't the hard part and Java-the-language does very little to help with those parts.
But now I live on Erlang's BEAM (via Elixir) and I freaking love it. The real gain for me is that I found I didn't need mutexes, locks, critical sections, etc., because the super lightweight thread-like Erlang processes (not OS processes) themselves run in parallel but each one runs in a single-threaded manner. This effectively turns each process itself into its own critical section, and it's this aspect that I personally have found extremely valuable.
This is really just fundamental concurrency stuff though. Something which is sadly in short supply in some people's skill sets, but then I guess not everyone spent four years working on massively multithreaded C++ software early in their career like I did.
I'd really prefer it if C# made you share memory explicitly - default shared memory concurrency is just asking for trouble, in this and many other languages, because you have to do extra to do things right, rather than extra to do things wrong.
I expected the following to be safe:
    Dim V(99) As Integer
    Parallel.For(0, 100, Sub(i) V(i) = i End Sub)
    Dim Result = V.Sum
    Dim V(99) As Integer
    Parallel.For(0, 100, Sub(i) V(i) = i End Sub)
    Threading.Thread.MemoryBarrier()
    Dim Result = V.Sum
Does anyone know where it is? It was very informative..
You only need lock() and the full Monitor class if there's more to be done within the locked statement.
There are two others:
It's well worth learning about the Interlocked class if you want to do parallel programming. If you get it wrong then you will see incorrect/corrupt results. It's also worth keeping in mind that for simple tasks it can be quicker to do them single-threaded, due to the overheads involved. I demonstrated this in a simple benchmark app that I wrote for a chapter of my book on ASP.NET Core.
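As a quick illustration of why Interlocked matters (a toy sketch of my own, not the benchmark mentioned above): a plain increment from multiple threads loses updates, while Interlocked.Increment does not.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterlockedSketch
{
    static void Main()
    {
        long unsafeCount = 0;
        long safeCount = 0;

        Parallel.For(0, 1_000_000, _ =>
        {
            unsafeCount++;                         // read-modify-write race: lost updates likely
            Interlocked.Increment(ref safeCount);  // atomic increment, no lock needed
        });

        Console.WriteLine($"unsafe: {unsafeCount}, safe: {safeCount}");
        // "safe" is always 1000000; "unsafe" usually ends up lower.
    }
}
```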
In my experience another mistake made by novices and journeymen alike (at least occasionally by the latter) is to reach for this tool without careful consideration of whether it's even necessary.
You can't. Dictionary can be read concurrently, but once you start writing to it concurrently all bets are off. ConcurrentDictionary implements IDictionary and allows concurrent writes.
Also, Entity Framework. We had an icky bug coming from somebody storing the datacontext in a member variable. Don't do that.
They will mostly work (like multiple threads reading, or one writing while another retrieves), except when two threads try to write simultaneously, or when one writes while another enumerates.
When that happens, all bets are off, and you will silently get obtuse errors such as null return values or enumerating past the end of the collection, or a corrupt collection that throws exceptions from other (seemingly) random and unrelated areas.
I believe the new concurrent collections take care of this, but it is still so easy for beginners to shoot themselves in the foot with async programming, very much in contrast to how dev-friendly the rest of the CLR is.
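To make the difference concrete, a small sketch (the keys and values are made up; the commented-out line is the kind of code that silently corrupts a plain Dictionary):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class DictionarySketch
{
    static void Main()
    {
        // Plain Dictionary: concurrent writers can corrupt its internal state,
        // and the damage shows up later as confusing, seemingly unrelated errors.
        var plain = new Dictionary<int, int>();
        // Parallel.For(0, 10_000, i => plain[i] = i);  // don't do this

        // ConcurrentDictionary is safe for concurrent reads and writes.
        var safe = new ConcurrentDictionary<int, int>();
        Parallel.For(0, 10_000, i => safe[i] = i);

        // GetOrAdd / AddOrUpdate cover the common check-then-act patterns atomically.
        safe.AddOrUpdate(key: 42, addValue: 1, updateValueFactory: (_, old) => old + 1);
    }
}
```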
A simple table would have worked fine.
I know, I know, just use a transpiler and emit a bundle... but really?
It's been a draft since 2015, and no browsers have any support for it yet, despite full support for the rest of the standard?
Are modules really that controversial?
I'm kind of disappointed, honestly, that despite all the progress in the ecosystem, this long-standing issue still remains mysteriously unsolved, by anyone.
If you're using a transpiler anyway, who really cares if you have native support for the language features?
Speaking from this experience, and as someone who reviews on average 20-50 coder profiles a week, the public commit history of a coder is almost never a significant factor. I don't see any trends that indicate this is changing, either.
The vast majority just don't have much to show, having spent their years working behind walls on closed software.
Instead of relying on a public portfolio that in most cases won't exist, I rely on talking to these people directly, programmer to programmer. If we can code together, on the actual code they would be working on, that's about as good as it gets.
In other words, I rely on my experience as a coder to help make what are, ultimately, subjective judgement calls.
Sure, it'd be nice if everything I worked on was available open source, but let's just look at one example where that's not a good idea: video games.
Good luck getting the game industry to make their AAA games open source, especially before release, when hundreds of people are chomping at the bit to have a dirt easy time of cloning whatever game out there and tossing it on the app stores to make a few bucks off their broken, half-stolen mess of a game. It's a rampant problem right now (for example, 2048 and Flappy Bird both had open source versions, possibly unofficial... look how many clones of those games hit the app stores).
Not to mention, game development is often a creative process where features are tried and then have to be cut for time, while audiences assume that anything they see will be in the final game, and that if it isn't they were maliciously lied to. This leads to any information about the game being closely guarded except for marketing campaigns planned waaay in advance, for most games (indies are a little different; indies need whatever exposure they can get, usually).
So no, not everyone is going to be working on open source software, and that's not going to change in the 2 year 'Mark my words!' timeline this guy has provided.
GitHub isn't exactly new and there has been plenty of time for others to innovate in this space. Why haven't we seen this effect already? If it could happen fast, it would already have happened. Why wouldn't it be slow, taking a decade or more, like the proliferation of social networks?
> One of the principles of open source is meritocracy: the best idea wins, the most commits wins, the most passing tests wins, the best implementation wins, etc.
Have we learned nothing?
"When a measure becomes a target, it ceases to be a good measure." - Goodhart's law
1. GitHub represents the major part of software development world.
No. It's the tip of the iceberg. I don't think I have to elaborate on this one.
2. Open source software development model is an absolute winner.
No. It's just one model of development, which fits some kinds of projects better than others. Companies tend to open-source tools. Product code is usually closed source. Obviously there are a lot of developers who spend most of their time on closed source projects. Some of them also have a life.
3. Your open source contributions help recruiters evaluate your performance for any kind of software development project.
Not true. A lot of jobs are either legacy-project work, niche-technology work, or both. Some niche technologies are closed source. Some legacy technologies predate open-source culture. The problems you face doing open source projects are very different from those you face doing b2b or b2c projects.
Having said that, the author's opinion looks invalid to me.
The software industry needs more humility and fewer magicians.
I believe that strong engineering skills are worth far more than a Github profile, and good recruiters will know how to spot this.
If however you are looking to be hired as the new "Rockstar developer" of the latest trending startup running buzzword architecture, then a good public brand might be useful indeed.
> Over the next 12-24 months, in other words between 2018 and 2019, how software developers are hired is going to change radically.
I don't want to look back on my life with that feeling. As adults, we have very little free time and spending it on software seems like a gross misallocation of resources, even if our careers would benefit. There's more to life than work.
Interviewer: I see you have an impressive profile on GitHub, but can you write an algorithm for quicksort on the whiteboard?
We are knowledge workers. We work by thinking and acting. A graph captures none of the first. The most valuable code I've ever written for an employer was about 100 lines that took 2 weeks to write. Would the commit graph capture that?
I have gotten into arguments with my boss about hiring based on commit graphs and commit counts. I lost. We hired someone who checks in a lot of stuff, but often has to fix it with other changes. His graph and change count looks great. It's a nightmare working with him.
In my experience, commit graphs are bogus.
The author seems unconcerned about a particular company having so much control. Note that he compares not having a Github account to not having email or a cellphone in general, rather than to not having e.g. Gmail or an iPhone in particular, which would be a more apt analogy. It's a disheartening reminder of the trend of the internet moving away from decentralized protocols to centralized services.
For those of us who do have open source code on GitHub, the consensus is clear: despite the hype, most employers don't really look at GitHub in any depth. They mostly prefer to grill candidates, rather than look at what they've actually done. I guess it must be more enjoyable.
> For those of us who spent the past decade making a billion dollar open source software company however, there is nothing free or spare time about working in the open.
> The way to get a job at Red Hat last decade was obvious. You just started collaborating with Red Hat engineers on some piece of technology that they were working on in the open, then when it was clear that you were making a valuable contribution and were a great person to work with, you would apply for a job. Or they would hit you up.
Actually, this sounds exactly like working for free. You do some spec work and if they like it maybe they'll hire you.
Good god... I love GitHub and what it allows for, but let's not paint this as the "commoners" knocking down the walls that protect the "elites" in their ivory castles. Committing to GitHub does not, on its own, make you a better developer or a better person. In the same way, not committing to GitHub does not, on its own, make you a worse developer or a worse person.
I've met people who don't have a single commit to a public GitHub repo who are FAR better developers (and people, in some cases) than ones I've known that are deeply ingrained in a public project. In some, not all, cases there are different skillsets required for OS work vs working at a "closed source" company. And while we are on that subject, I'm quite sick of some of the attitudes I see on HN sometimes around "closed source", like it's some evil thing. It's not; people have to eat, have to pay rent, have to support their families, and anyone that thinks you can easily do that without working for a for-profit company is living in a dream world. How many times have we seen posts here about how little money is actually given to OS? Sure, I'd love to be able to only work on OS or work for a company that is open, but there are very few success stories in that area. The cognitive dissonance the HN crowd displays when they fawn over the SaaS companies or various other SV unicorns (all closed source) here and then deride closed source is astounding.
All of this is to say the message of "Your GitHub profile will mean everything for your career" is simply bullshit. CAN it give you a leg up? Of course, but experience working for a company and in that kind of setting is much more valuable to the vast majority of companies. Take a look at Linus Torvalds: undoubtedly a genius and a person who has done immense good for the OS community, but his temper/attitude are legendary, with HN periodically posting links to his smackdowns on mailing lists and the like. That said, do you think many companies are looking for that kind of an employee? I think not (no disrespect meant to him at all; it works for him, and I doubt he is looking for a job at those kinds of places anyway).
Most of the companies I've worked for or interviewed with have either not cared at all about my GitHub profile or even if they say they do care they don't give it more than a passing glance at best. Focus on being good at what you do and if that happens in public on GitHub/GitLab/etc then all the better but don't bend over backwards to make your profile look good at the expense of actually knowing your shit.
Someone doesn't like you or your repo, and can convince GitHub to evict you - you're off the network, and all your contributions are gone.
The times I've looked at Github it was either "meh, this doesn't tell me much" or "Holy shit, run away". And about 5X in favor of running away.
So it's been useful, in a sense, but not as a positive signal.
Another issue is what happens to GitHub if they are not successful as a company. There was a story a few months ago here about how the company itself is losing money. How much will your profile be worth if GitHub goes out of business?
In the near future all of our problems will be solved by minor celebrities of fashionable Github repos.
Hire or No Hire?
Enough people I know and respect have moved on to other companies that my own network has actually grown. If I chose to apply elsewhere the first thing I'd do is reach out in my own network, not spam recruiters with my Github account.
Your Github reputation can be used in addition to your network, or as a substitute if you have no network, but it will not replace the impact of having others already in the industry/company personally vouch for you.
"Who you know" is still the best way to get your foot in the door, because we are all still social animals.
Sometimes, in some cases, it makes sense to filter candidates by looking at their GH. But I would not bet on that: the better filter is the code being written today, which is unlikely to be available for most developers.
Haven't people been saying this for like, 10 years now?
Code is objective. It solves problems, passes tests, does something new, and performs better... or it doesn't. Social media presence has no bearing on this at all. Yet each imparts a type of online trust. One is capabilities (competence) trust and the other is marketing or branding trust.
For people who don't know the difference or don't understand the value in the code, marketing/branding trust is the only thing that exists. For everybody else, trust in branding loses value quickly and must struggle to compete with the other, more objective trust factor. It should be noted this "everybody else" category is the minority but is more influential on what gets built or prioritized.
If you want to interview people effectively try this crazy formula:
0. Ask for a remote tech screen. Ask for a simple piece of work that you can evaluate and that resembles real work. Make sure it can be built and tested empirically (the best way is automatically). This should be simple, not 8 hours of work. Don't ask people to code on a whiteboard or do their best algorithmic design in a high-stress interview session; you won't get it.
1. Be prepared. Know what you're interviewing for and list out the skills. Read a candidate's resume and review the prior work they offer. Read their code first.
2. Code review their tech screen.
3. Make sure to ask questions about their approach to work, leadership and projects. Encourage them to ask you similar questions.
4. Have a broad cross section of your company interview. Mix up who does this, diversity is a plus here. Also, make sure designers and HR folks get a bit of time, not just engineers.
5. If the position is senior tech, whiteboarding is acceptable for architecture, planning, or diagrams.
I'm actually slightly less sure of this today... It's probably true for startups/"tech" companies; but many, many programmers work LOB in non-tech companies and I haven't heard/seen much that the hiring process has changed. If anything, we've come closer to the consensus that a small "work sample" is the best possible kind of interview - show up, here's a laptop and a small CR or project, implement. This has the advantage of being do-able remotely too, of course.
Pull requests back to languages and frameworks work if you're at the skill level and motivation to work at the framework level (e.g. Google, Facebook, Microsoft; the companies making the mainstream frameworks). But it's a disservice to ignore the many, many programmers working successfully at a lower level. It seems like the portfolio-as-key to being hired is still not very true for that category.
Most people don't get the time and resources to work on open source. Sometimes you can't even open source something, despite wanting to. For example, when I was in academia, I wanted to open source some worthwhile projects I had completed, but was never allowed to by superiors, because they considered them competitive-advantage IP.
Unfortunately, I don't think any of the above is realistically going to happen at any point in the next 10 years. The majority of paid-for-code is proprietary, meaning that your 9-5 work-product can never be "googled" by outsiders. As a consequence of this, recruiters and hiring managers will continue to treat open-source work as second-class-indicators, behind your resume, CV and references. It's going to take a major paradigm shift in engineering/business culture, before any of this changes.
That said, the author might be wrong about the timeframe, but he does paint a noble vision for the future. One that I'm sure future generations will be working in, and one that I hope I can experience one day.
This creates two things as a byproduct:
- It keeps my standards high, as it's a paid-for product
- It forces me to actively maintain the projects and even provide a tiny bit of customer service
I really don't care if people have a problem with this. It's their problem at that point, not mine. I have no issue demonstrating anything or talking about it, but we're not going over lines of code in my stuff. I choose not to treat my code hosting platform as a social network.
Github will allow the cream to rise to the top, sure, but those "Just a job" programmer roles are going to exist for a long time. This is because many companies simply want contract fulfilling crap, not quality code.
I can't imagine spending three years of weekends developing company X's software for a chance at an interview.
Surely, in the course of your daily work using giant open source projects you'll file a bug, comment on an issue or even submit a pull request. Most large applications are built off of dozens, or hundreds, of open source libraries, all of which have bugs at various times.
I don't expect all web developers to have side projects or libraries, but I would expect they'll interact with open source projects in some way. This way of thinking clearly doesn't apply in more closed source worlds like games, finance or giant enterprise, but it certainly holds for startup/web work in my experience.
I'm guessing that would be > 80% of all software developed globally?
Look at how much OpenBSD code gets used in highly profitable commercial products, then compare it to the level of donations they get from the same companies...
Yeah except for the tiny niggle that an overwhelming majority of contracts stipulate that you can't actually contribute to OSS either on or off the job due to the fact that every single thing you think of while employed (and sometimes for a period after your employment ends) belongs fully to your employer.
OK, a nice GitHub profile is a plus, but as a hiring manager it is never a make-it-or-break-it kind of thing.
For a minute I was worried I might be losing my mind. Thankfully I've always been garrulous, though I'm sure other people don't see that as a blessing.
Seems like history might be coming around again on this one.
Such a sentence could be ambiguous because it's not obvious who graduated. Some may argue it's Bob.
Really the most important part, though, is that DaemonSets are for services that need to run on each host, like a log collection service or the Prometheus node exporter.
Wouldn't a wildcard SSL cert + wildcard DNS entry work even without SNI support here? I haven't used the GCP load balancer, but as long as you are serving a single certificate (*.fromatob.com), the client/server don't have to rely on SNI at all.
That might be OK if 1) your data isn't sensitive or 2) you're running on your own metal (and so you control the network), but in GKE your nodes are on Google's SDN, and so you're sending your traffic across their DCs in the clear.
There are a couple of pieces of hard-to-find config required to achieve TLS-to-the-pod with Ingress:
1) You need to enable ssl-passthrough on your nginx ingress; this is a simple annotation: https://github.com/kubernetes/contrib/issues/1854. This will use nginx's streaming mode to route requests with SNI without terminating the TLS connection.
2) Now you'll need a way of getting your certs into the pod; kube-lego attaches the certs to the Ingress pod, which is not what you want for TLS-to-the-pod. https://github.com/PalmStoneGames/kube-cert-manager/ lets you do this in an automated way, by creating k8s secrets containing the letsencrypt certs.
3) Your pods will need an SSL proxy to terminate the TLS connection. I use a modified version of https://github.com/GoogleCloudPlatform/nginx-ssl-proxy.
4) You'll want a way to dynamically create DNS entries; Mate is a good approach here. Note that once you enable automatic DNS names for your Services, then it becomes less important to share a single public IP using SNI. You can actually abandon the Ingress, and have Mate set up your generated DNS records to point to the Service's LoadBalancer IP.
(As an aside, if you stick with Nginx Ingress, you can connect it to the outside world using a Kubernetes LoadBalancer, instead of having to use a Terraform LB; the hard-to-find and fairly new config flag for that is `publish-service`: https://github.com/kubernetes/ingress/blob/master/core/pkg/i...)
In our setup, I wanted to add authentication to a few dozen subdomains, but use a single oauth2proxy instance. GitHub OAuth makes this kind of gross: the callback must point to the same subdomain you're trying to authenticate. But it does allow something like /oauth2/callback/route.to.this.instead
In the end, to achieve what I wanted (a single oauth2proxy for multiple internal services) I had to:
- fork oauth2proxy and make a few small changes to the redirect-url implementation
- create a small service which takes oauth.acme.co/oauth2/callback/subdomain.acme.co and redirects to subdomain.acme.co to comply with GitHub's OAuth requirements
- create a small reverse proxy in Go which does something similar to nginx_auth_request. I had a few specific reasons to do this (like proxying websockets and supporting JWT directly): https://gist.github.com/groob/ea563ea1f3092449cd75eeb78213cd...
I hope that someone ends up writing a k8s ingress controller specific to this use case.
Why do you still need Saltstack, and how do you find Terraform? Why do you need Terraform (I suppose it is for your non-Kubernetes infrastructure)?
My parents use WhatsApp on generations-old phones - it performs great and it's simple enough to understand. Why can't we keep the more complex stuff over on Snapchat? Or Instagram. Or Facebook Messenger. Or...
Bragging about E2E encryption while feeding the Facebook data monster IMO is a bit like bragging about how you transport your slaves in armored vehicles:
Yes, they are safer against robberies.
No, [given my current threat model] I'd still prefer driving something less secure that isn't abusing my every action [and every action of everyone I communicate with] for the profit of Facebook.
Edit: clarifications, in square brackets and below
There seems to be no doubt that WhatsApp is safer against a 3rd-party adversary.
My points are only that
- I consider Facebook an adversary at this point,
- I don't believe they bought WhatsApp and removed the fees because of the goodness in Mark Zuckerberg's heart,
It's embarrassing that a major app like Snapchat, built around ephemerality and privacy and often handling sensitive data, still doesn't have any form of end-to-end encryption.
Stories are now in:
- Instagram
- Messenger
- WhatsApp
- Facebook (soon, I've seen the beta)
* I miss having status lines as a visible message to the world. I know this isn't exactly the same thing and that Whatsapp/Gtalk/etc. have those, but they have been de-emphasized, so saying "man, I'm excited about $this" isn't likely to reach my friends.
* I'm always conflicted between using Facebook for "thoughtful" stuff (that won't embarrass me in six months' time) and just posting random whimsical observations ("I saw a pretty butterfly!") and moods/feelings. My Facebook network is too wide now, too.
* I did install Snapchat to check it out, but it's just not for my demographic. Younger people take to the internet to complement and boost their meatspace life; we 30-somethings gradually drift apart from friends but want to keep some semblance (or even illusion) of a friend base that is alive.
Overall, I've been using Facebook as a degenerated blogging/syndication platform, but miss the social features of a social platform. Hey, when is the update getting to international iPhone App Stores? I want to try it!
On the other hand, Snapchat is an awful piece of software, and some competition to prod them into fixing it would be useful.
As infrastructure improves, Whatsapp is making the bet that users will prefer to use these features on an app that is already their primary social network.
I'm skeptical of the paternalistic arguments on HN that people don't really want these features -- perhaps the reason Whatsapp users don't use Snapchat is that their social graph hasn't moved to it, not that they don't want to share 'stories'.
> Status could also open up new advertising opportunities for WhatsApp. If it followed Snap and Instagram's lead, it could insert full-screen ads in between friends' Statuses.
I really liked WhatsApp's business model before the acquisition: users pay a small annual fee to use the app. What was cool to see was that the network effects were so strong that people who had never paid for an app or subscription service paid for WhatsApp. If they had kept the service paid I doubt it would have reached the 1 billion user mark so quickly, but just humour me here: with 1 billion users they would have at least 1 billion dollars in ARR. That would have been cool. They could have focused on what they do best: provide a no-BS end-to-end encrypted messenger which respects the user's privacy. (Yes, I am aware of Signal and I use it.)
I am curious to see how Facebook balances the need to monetize vs to the need to maintain WhatsApp's reputation as a service that respects users' privacy.
WhatsApp is owned by Facebook, which also owns FB Messenger; it's only a matter of time for this transition.
I think this could be the beginning of the end for Whatsapp's ubiquity. It's such a shame as Whatsapp has such insane market penetration here (UK/Spain) that it is going to be a huge mess to try to switch to an alternative. I literally haven't received an SMS from a friend in years.
Let's just locate the Olympics in a permanent facility -- Greece, if they'll have them, for tradition. I think between "not rebuilding everything from scratch every four years" and the expectation of recurring events, the single host country could both run the games more efficiently and benefit economically a lot more -- sure, it'd only benefit the one host country, but it would be going from "not benefiting a lot of host countries (who go over budget and never make back the money on tourism)" to "actually benefiting one host country".
Let the IOC auction off the opening show if you want to give one country a chance to show off every four years.
The Beijing facilities, which were in a city that could use big sports facilities, are now mostly abandoned. The big "birds nest" stadium now has an ice rink in it, but the huge grandstands are unused. London is doing OK with their leftover facilities. The ones that aren't near a large city, such as Sochi, are abandoned.
http://www.reuters.com/news/picture/ghosts-of-olympics-past?...
https://www.theguardian.com/cities/davehillblog/2015/jul/23/...
The swimming, diving, and velodrome facilities are state of the art, oversubscribed, and at the forefront of creating a new generation of athletes. I myself am having kayaking lessons at the London White Water Centre Olympic course.
Of course other major Olympic venues predated the bid and have continued to be in continuous use. Wimbledon, Wembley, the Excel, the O2, Eton Dorney...
London has not simply "abandoned" anything, and to pretend otherwise is dishonest.
To put it into perspective, in much of the country they have stopped paying their police officers a salary. ((boggle)) Not the safest city in the first place, and maintenance was never a strong suit of Brazil. So the stadiums are just a symptom.
Did they move the football games to Engenhao or something? Why would Maracana be abandoned? Where are they playing all the games that still need to be played? It would make sense to rotate them to keep the stadiums used at least part of the month.
I'm trying to think of a way to maintain them and provide sports activities for kids. Normally, I dislike the branding of stadiums, but perhaps a corporate sponsorship of each stadium is a solution, while the govt should focus on its citizens.
I don't think the ROI was positive, but I also don't think that level of abandonment will last.
I think it isn't as popular as it once was in attracting tourists; soccer world cups are probably better for that. Given that this is the case, they should just fix one host city/country and host the Olympics only there. It will save unnecessary expenses, especially for developing nations such as Brazil.
EDIT: Added source here
Fortunately, public opinion has generally recognized this fact and there are protests against organizers. I, for one, am proud that we told the fuckers to GTFO of Kraków.
We can't be sure why the reporter was unable to find a WHOIS record, we can only confirm that validation properly succeeded at time of issuance.
I know there are historical WHOIS sites, but as far as I know, unless someone checked the domain with their service in the past, they'd have no record of it. So maybe that would explain how it has a cert for a domain that currently does not exist and appears to have never been registered.
* Domain apple-id-2.com is not currently registered
* Domain apple-id-2.com has (apparently) never been registered
* LetsEncrypt, on 2017-01-03, issued a valid certificate for apple-id-2.com
Since we can't know how validation was successfully performed, all we can do is speculate. Someone from LetsEncrypt will have to investigate and let us know. Fortunately, they should have very detailed audit logs for exactly this purpose.
There's some evidence that apple-id-1.com existed back then:
;; ANSWER SECTION:
apple-id-2.com.    500    IN    A    126.96.36.199
Creation Date: 2017-02-22T21:57:50Z
The cert was issued in January: parsed.validity.start = 2017-01-03T22:17:00Z
Not totally surprising, as I have never seen Let's Encrypt do anything that could remotely be considered due diligence against shady issuance.
After having built Spash, I have much more appreciation for the difficulty of correctly computing spatial locations.
$ echo "install dccp /bin/true" >> /etc/modprobe.d/dccp.conf $ sudo rmmod dccp # in case it's already loaded
$ cat /etc/modprobe.d/disabled_modules.conf
install appletalk /bin/true
install bluetooth /bin/true
install cramfs /bin/true
install dccp /bin/true
install firewire-core /bin/true
*snipped*
install tipc /bin/true
install udf /bin/true
install usb-storage /bin/true
install vfat /bin/true
See also the "modprobe.blacklist=" kernel parameter, which you'll have to use for "modules" that are compiled into the kernel itself (i.e., they are not actually loadable kernel modules).
15 years ago, when building your own kernels was a normal everyday thing, I simply built my kernels with everything compiled in and modules disabled. This (would have) prevented attacks such as kernel-level rootkits.
In addition, "one neat trick" was that you could halt (not poweroff) the machine (!) -- such as in the case of a Linux box simply acting as a router/firewall -- and the kernel would still be running. Good luck compromising that! :-)
-	goto discard;
+	consume_skb(skb);
+	return 0;
On my Debian kernel, CONFIG_IP_DCCP is set to "m" (in /boot/config-`uname -r`), which means that DCCP support is built as a module. The code isn't loaded until the first program tries to call socket(...IPPROTO_DCCP). At that point, the kernel will look at /proc/sys/kernel/modprobe and run that program, /sbin/modprobe by default, to load dccp.ko.
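To make that concrete, here's a small sketch (Go, using golang.org/x/sys/unix, though the same thing works from C or Python) of how an otherwise unprivileged program causes dccp.ko to be pulled in simply by asking for a DCCP socket:

// Creating a DCCP socket is enough to make the kernel invoke the helper named
// in /proc/sys/kernel/modprobe and load dccp.ko when CONFIG_IP_DCCP=m.
// No root privileges are required.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.Socket(unix.AF_INET, unix.SOCK_DCCP, unix.IPPROTO_DCCP)
	if err != nil {
		// An error here usually means the module is blacklisted or autoloading is disabled.
		fmt.Println("DCCP socket refused:", err)
		return
	}
	defer unix.Close(fd)
	fmt.Println("DCCP socket created; check lsmod for dccp")
}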
Automatic module loading is great when e.g. udev runs and detects what hardware you have, but it's probably not something you'd ever need once a system has completed boot. A very simple hardening measure for machines running untrusted unprivileged code is to echo /bin/false > /proc/sys/kernel/modprobe, late in the boot process (e.g., in /etc/rc.local).
The downside is that the system administrator won't be able to run tools that require loading modules, of which iptables is probably the most notable one. A better option than /bin/false is a shell script that logs its arguments to syslog, e.g., `logger -p authpriv.info -- "Refused modprobe $*"`. The sysadmin can manually run modprobe on whatever module name got syslogged (or temporarily set /proc/sys/kernel/modprobe back to /sbin/modprobe). And you can alert on that syslog line to see if there's an attack in progress.
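If a shell script feels fragile for that job (the kernel invokes the helper with a minimal environment), the same shim can be a small static binary. Here's a rough sketch in Go of something equivalent to the logger one-liner above; the "modprobe-denied" syslog tag is just an arbitrary choice:

// Drop-in replacement for /sbin/modprobe that refuses to load anything and
// records the attempt to syslog, mirroring the `logger -p authpriv.info` idea.
// Point /proc/sys/kernel/modprobe at the compiled binary.
package main

import (
	"log/syslog"
	"os"
	"strings"
)

func main() {
	if w, err := syslog.New(syslog.LOG_AUTHPRIV|syslog.LOG_INFO, "modprobe-denied"); err == nil {
		w.Info("Refused modprobe " + strings.Join(os.Args[1:], " "))
		w.Close()
	}
	os.Exit(1) // always fail, so the kernel never loads the module
}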
(Does anyone know if it's possible to disable module auto-loading for a tree/namespace of processes, e.g. a Docker container, but keep it working for the base system?)
The CVE almost hints that this is specifically UDP-related.
Am I right in thinking this?
linux (4.4.0-64.85) xenial; urgency=low
  * CVE-2017-6074 (LP: #1665935)
    - dccp: fix freeing skb too early for IPV6_RECVPKTINFO

 -- Stefan Bader <email@example.com>  Mon, 20 Feb 2017 10:06:47
Quick solution: echo "install dccp /bin/true" >> /etc/modprobe.d/modprobe.conf
AMD had said for years their Zen goal was a 40% IPC gain over their previous microarchitecture, but they ended up with a 52% gain: http://www.anandtech.com/show/11143/amd-launch-ryzen-52-more...
Today's launch event by AMD's CEO:https://www.youtube.com/watch?v=1v44wWAOHn8
If these comparisons are real, my next build may be AMD again.
Even if the real world perf is close to the $1K Intel chips it will be a win. It's going to force price cuts from Intel and hopefully spark some competition again.
And the same goes when comparing the 1700/1700X to the $200-250 i5-7600 Kaby Lake.
Intel did not do anything as a market leader; 8 years back I could already buy a 4-core machine. Waiting to see how AMD does on server parts: 32 cores / 64 threads? POWER9 does 24 cores / 96 threads.
Beyond the product quality itself, I think AMD has had a pretty smart launch strategy by releasing the CPU chips first to show that it can beat "Intel's best".
But they really need to start focusing on notebooks ASAP. That's where they can steal most of the market from Intel, especially now that Intel is showing signs of (slowly) abandoning the notebook market by prioritizing Xeons over notebook chips for its new node generations.
AMD should prioritize notebook chips either next year or the one after that, at the latest. They should be making the notebook chips first, before the PC ones. They need that market share and awareness in the consumer market.
In terms of how they should compete against Intel in the notebook market, I would do it like this (at the very least - AMD could do it even better, if it can):
vs Celeron: 2 Ryzen cores with SMT disabled
vs Pentium: 2 Ryzen cores with SMT enabled
vs Core i3: 4 Ryzen cores with SMT disabled. Or keep SMT and lower clock speeds, as Intel did it. This may help further push consumers as well as developers towards using "more cores/threads".
vs Core i5 (dual core): 4 cores with SMT enabled
vs Core i5 (quad core/no HT): 4 cores with SMT enabled + higher clocks and better pricing. Maybe even 6-core with HT, if AMD goes the 6-core route. I honestly don't even know why Intel decided to make "Core i5" both a dual-core and a quad-core chip, and to sell dual-core "Core i7" parts as well. It's so damn confusing - but maybe that was the goal. For differentiation's sake, it may be better for AMD to have a 6-core at this level, or maybe even an 8-core with SMT disabled - so the same thing as Intel, but with twice the physical cores. I don't know why, but for some reason 6-cores don't attract me much. It feels like an "incomplete" chip.
vs Core i7 (quad core/HT): 8 cores with SMT enabled
The guiding principle for this strategy should be "twice the cores or threads, with competitive/better single-thread performance and competitive/better pricing".
In a way it would be the inverse of the PC strategy where they maintain the number of cores but cut the price in half. This would mainly focus on doubling the number of cores (because notebooks come with so few in the first place), while maintaining similar or better pricing.
The only ones that don't really fit well in this strategy are the Celeron and Pentium competitors, and that's because a dual-core Ryzen, even at low clock speeds, should destroy Intel's Atom-based Celeron and Pentium. We could be looking at at least a +50% performance difference, and that's what AMD should strive for there as well. AMD should show Intel what a mistake it made when it tried to sell overpriced smartphone chips at laptop chip prices.
The article does not appropriately adjust for multiple testing, and therefore none of its claims are well supported except the JNK2 decrease in post-exercise mice.
Full article is available at http://onlinelibrary.wiley.com/store/10.1113/EP086189/asset/...
"Lifestyle Changes Lengthen Telomeres"http://www.drmirkin.com/public/ezine092913.html
The study, by Dr. Dean Ornish, was published in "The Lancet Oncology" 17 September 2013 issue.
So while I believe the study will likely hold up, I do wonder why exercise adds telomeres. One answer is that exercise reduces cancer risk (it will get you to bed on time, mostly), so the body optimistically adds telomeres. Alternatively, and perhaps more likely, exercise may trigger more cell division (for purposes of repair; all exercise causes some damage, to collagen if nothing else), so the extra telomeres are added as compensation in order to return to the status quo cap on allowed cell divisions, maintaining the preventative cap but not actually extending it.