And that's only looking at the character's own "choices." (Is it really a choice if you can't stop yourself from becoming John Wilkes Booth?) The cruelty inflicted by nature would be much greater. Disease, famine, famine, disease, famine, typhoon, famine, rattlesnake bite, famine, tsunami, etc.
Now I wonder if a sugar-coated Lovecraftian horror story was the author's intent. No other kind of god would set up a system where you're forced to repeat the same mistakes for billions of years.
Don't worry, I said. They'll be fine. Your kids will remember you as perfect in every way. They didn't have time to grow contempt for you. Your wife will cry on the outside, but will be secretly relieved. To be fair, your marriage was falling apart. If it's any consolation, she'll feel very guilty for feeling relieved.
It's just so human. It's almost confrontational in its degree of, "That's just how shit is sometimes," but it's delivered with utter compassion. That juxtaposition captures so much of how I feel about the human condition.
I always wished this was how Lost ended: with Jack being told by Jacob that he was actually everyone on the plane (which is why they all had a weird connection), and all these lives were him waiting to be "born" into running the island.
Edit: I suppose it's just as horrible regardless of whether or not you experience them...
Except that, coming from that background, I expected the big reveal to be that the Egg is talking to himself, hatched.
As unprovable speculations about the nature of reality go, I rather like this one.
The story is incompatible with free will. For the universe to be the way it is, with one person living all those lives, yet always choosing such that the other people (him in other reincarnations) also always choose as they (he) did, free will would have to not exist.
But this would also mean that the "god" in this story also didn't have free will, because the man was "of his (god's) kind".
But if God does not have free will, he isn't the greatest possible being. The universe thus described therefore fails Anselm's Ontological Argument for the Existence of God. The hypothetical God who is identical to the God in this story, with the exception that He DOES have free will, is obviously a greater being.
I conclude that this story cannot possibly describe Reality, as It actually Is.
I don't get what's interesting about this story. It's pretty silly and not very enlightening.
http://www.galactanet.com/comic/view.php?strip=1
http://www.galactanet.com/writing.html
... hmm, now I'm a little disappointed in myself that I didn't recognize the domain name too.
The story also suggests that the simulation is heavily parallel, and that complete knowledge of all episodes (or rather, all paths) makes you god.
But I realized this would be an absolute disaster if true. True story: life more or less sucks if you aren't near a local maximum of a food chain.
The rate of information increases.
Many gods, one god, whatever: it is part of such a being to know what is relevant at any point in time. If there are gods, your death is something that becomes information to them.
The implication here is that even in their case, their ends become information to something else.
The first to be surprised to introduce new information doesn't "win" or "sin". What's to be felt about the falsity that "the rate of information increases"?
It is not a new pattern, either. Look back to what we (Americans) did to each other in the '50s (are you a communist? Do you act like one?). Had we had the tech then to spy like we do now, we'd have done so. Despite the sensational news stories, I find near-apathy among the normal people I meet - the paranoid have known for years (don't talk on that cell phone, they can listen in on those easily... does anyone remember this attitude?). Privacy intrusions simply exist. They are. You aren't escaping them. It might make one angry, but there is nothing people feel they can do about it. I personally disagree, but also feel the path to balance in this area is an uncomfortable one.
I know many programmers feel this way, but in my humble opinion it is a fallacy, and not a very healthy one.
No-one can deny that our industry is evolving at breakneck speed, and it is an exhilarating place to be. But just because there's a new technology every week on HN doesn't mean that we are losing old ones at a similar rate.
It is perfectly possible to have done nothing but C or Java for your entire career and yet remain extremely employable. And I wouldn't be at all surprised if there are highly paid COBOL jobs still out there, nursing some vast banking-industry mainframe which is too precious to risk replacing.
In fact I'm hard pressed to think of any programming language which I would dare declare 'dead' in an HN comment.
But even if you're a specialist in something which you feel is in decline, or for which there are newer, snazzier replacements, you've got every opportunity to learn something new, taking as long as you like to do so. There's extensive documentation for every technology under the sun available for free on the internet, and an army of friendly, helpful people willing to provide help and advice without expecting anything in return.
In fact, it's entirely possible you could even get paid to cross-train. In my own company we use RoR for which (in England at least) demand far outstrips supply. I've paid PHP developers to learn Rails, and I would consider anyone with an in-depth knowledge of any language as potentially employable.
Really, the only way an experienced developer is going to end up flipping burgers or flying a manager's desk is if they have lost the desire to learn - i.e. fallen out of love with programming in general. I believe few people work in this industry for money alone - you either love programming or you don't do it - and if you love it then you will pick up new technologies out of sheer intellectual curiosity.
Feverishly reading HN every day and feeling threatened by the emergence of every new 'next best thing' is not a good idea. I would advise anyone feeling this way to take a chill pill and remember why they took up programming in the first place.
I don't want to learn tech because I'm afraid of getting my bones crushed by a steam roller.
I would rather learn tech because of the new and interesting things I can do with it.
For anyone wondering, 4 votes are enough because the vote of Bdale (the chairman of the Technical Committee) counts as two in case of a draw.
EDIT: see this (https://news.ycombinator.com/item?id=7203479) comment below - if the other 4 members of the TC vote F, then systemd would not win.
For the most part, it can get confusing for a non-day-to-day system administrator like me when trying to get a program to "run on boot". Between rc.local, init.d, run levels, etc., sometimes it is just frustrating.
For those unfamiliar with PulseAudio development: it's a third-generation audio subsystem (OSS was the first generation; ALSA with JACK formed the second; ESD was the beginning of the third). It is famous for an overcomplicated, non-human-writable and barely human-readable configuration procedure; for development marred by a huge number of bugs that ruined audio on Linux until recently; and for an immense number of internal interfaces, with the system presented as a huge monolithic piece of software that cannot be used in a modular manner, except as modules that only talk among themselves.
systemd seems to suffer from the same problems, plus it tries to "integrate" init, udev and syslog into a single "product", with arcane internal interfaces and formats -- just as non-human-writable as PulseAudio.
Pretty unfortunate it took this long. Even to a casual observer, it's been clear for a month that they'd eventually pick systemd. Hopefully this vote is the last one.
tl;dc: systemd replaces the bash scripts you're used to grepping through with Unix .conf files that fit on the screen.
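For instance, a minimal unit file might look like this (just a sketch; the service name and paths are made up for illustration):

    # /etc/systemd/system/myapp.service (hypothetical example)
    [Unit]
    Description=My example daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp --serve
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

After that, "systemctl enable myapp.service" replaces the rc.local/init.d guesswork about how to "run on boot".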
The whole thing is great, I'm glad I stuck with it past the first few images to see where he's going. These bits stuck out though and his work is so often things I wish I had thought of.
But isn't that true on earth too (to a much smaller degree)? As long as the sun's radius is larger than the earth's, sunlight will fall simultaneously on more than half of the earth's surface, no?
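A back-of-the-envelope check of that "much smaller degree" (a sketch, treating both bodies as perfect spheres and ignoring atmospheric refraction; R_sun ≈ 6.96e8 m, R_earth ≈ 6.37e6 m, d ≈ 1.496e11 m): the fully lit cap extends past the hemisphere by an angle a, where

    sin(a) = (R_sun - R_earth) / d ≈ 4.6e-3
    lit fraction = (1 + sin(a)) / 2 ≈ 50.23%

So yes: more than half, but only by about a quarter of a percent of the surface.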
What's your twist that sets it apart from the usual variants? (Sure, building it on top of Facebook could potentially provide some additional benefits, but what are they?)
1) Make it impossible to game.
2) Chicken and egg.
I have no idea how to solve these problems, but it's a concept ripe for disruption if you do.
There are more examples of apps just like this that came before (this is the only one that comes to mind at the moment). The only way I would be remotely interested is if they figured out a way to solve the inherent problem: selecting all/some of your friends as crushes just to see who had you as a crush, and then what happens when someone legitimately adds the person who was just checking everyone's crushes.
What I dislike (not specifically about your project, that is!) is the idea that, as a society, we're incentivized to increasingly hide behind our screens instead of growing some balls and actually living "real life". Here, we're talking about dating. The other current topic: how we intend to fight democracy-destroying mass surveillance by (apparently) simply sitting behind our screens.
Let me caution you though: in most applications, if you concede to an attacker INSERT/UPDATE/SELECT (ie: if you have SQL Injection), even if you've locked down the rest of the database and minimized privileges, you're pretty much doomed.
Most teams we work with don't take the time to thoroughly lock down their databases, and we don't blame them; it's much more important to be sure you don't give an attacker any control of the database to begin with.
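To make that concrete, here's a generic sketch of how an injectable query gives that control away, and how a bound parameter avoids it (Python's built-in sqlite3; the table and the malicious input are made up, not from the article):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    user_id = "1 OR 1=1"  # attacker-controlled input

    # Vulnerable: the input is spliced into the SQL text, so the attacker
    # rewrites the query and dumps every row.
    print(conn.execute(
        "SELECT * FROM users WHERE id = %s" % user_id).fetchall())

    # Safe: the value is bound out-of-band and never parsed as SQL,
    # so the same input simply matches nothing.
    print(conn.execute(
        "SELECT * FROM users WHERE id = ?", (user_id,)).fetchall())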
Edit: Oh, the article is from 2009 (I'd say it was bad practice even back then though).
The section under "the ideal administrator" is quite eye-opening. I pretty much use PostgreSQL exclusively, and I've found that every time I learn something new, there is another mile of learning to go, and that feedback cycle never seems to end.
I have a few PostgreSQL-specific books on admin and server programming, but I wonder where I would be able to go to really learn this stuff. Are there any classes or places to go for this sort of SQL training?
How does one go about becoming a total master at this? I find that, out of all the programming that I do, I love working with SQL the most and I want to dive deeper into it.
0 - http://www.ibm.com/developerworks/library/l-async/
> Common practice dictates that passwords have at least six characters and are changed frequently.
Has anyone gotten it to work transparently?
They created a new office that has the authority to block any website URL / domain / IP address without prior notice or court order. All ISPs are required to apply restrictions within 4 hours. Plus, all ISPs are required to log HTTP access + IP + date pairs and store them for two years for government access (again, with no court involvement).
This is going to be used/abused by the Islamic government to silence opposition or to cover up corruption news. That's the main goal of the law.
"The law requires service providers to take down objectionable content within four hours and any page found in violation by the country's telecom authority or face fines up to $44,500."
Regarding the law: with all these new technologies floating around, I think censorship will become more and more difficult for the government to enforce on geeks. But I firmly believe that the people of Turkey should rely mostly on overturning the part where the ISP blocks whatever website without a court order, and not on tech to circumvent it. That might turn nasty, and jails in Turkey are not fancy.
Given recent events, I think the law is going to be used in order to fight anti-establishment (as in anti-current-government) websites mostly and not pedophilia (surprised no one mentioned pedophilia yet...), copyright, etc.
It'd be nice if SOME country started leading by example. 'Safeguards' like these are gambling on the premise that they will -never- be abused, which is always a lofty bet if not flat-out disingenuous.
Yep, I know that if I don't take care of my own training, it's just not going to happen, most companies are not so altruistic that they'll hand me everything on a silver platter.
But at the same time: a company that never hires people unless they already have the exact skillset they're looking for, a company that fires people on a whim because priorities change, and a company that provides zero incentive for people to keep learning (e.g. with 20% time or a willingness to let employees experiment with new tech) - well, those are not companies I want to work for.
I got into software because there was so much to learn and explore, so this realization still baffles me. Why on earth would someone want to do this job and not want to learn new things? It's like a baseball player who hates being outside.
Not only that, but oftentimes I'm faced with a situation like the author's, where people I've worked with actively prevent those around them from learning new things on the job. "No, don't write this standalone module in Python, our standard is PHP; it was good enough 5 years ago, it's good enough now!" (in a four-man shop).
As someone who loves constantly learning more, it's suffocating to be around people who are so paralyzed. I simply cannot fathom the fear that drives someone to say to the offer to learn something new on the job, "no thanks, I'm happy becoming obsolete, and you can't learn it either, because I might have to one day support it, and I'm not interested in learning anything new!"
Now, what about an alternative world where he did not "get OO"? Or perhaps a lifestyle where he had children and no time at work to learn? Or one of these newer, not-quite-as-successful software companies with no money and no extra time?
Keeping up with new tech requires time and money. Startups provide neither. Even bigger "startups" attempt to keep up the illusion of a smaller company, including mandatory overtime and no extras (e.g. tuition reimbursement, sabbaticals, more than 2 weeks of PTO a year, etc.).
The other thing: computing as a career is quite a bit harder, more complex, and more competitive than when the author had their formative years.
The real rallying cry is how do you make an industry that respects career advancement?
I guess that's a combination "back in my day/get off my lawn" statement, plus a little whining, and maybe a humblebrag, but I don't think that's an unusual story at all for software developers.
I interned at IBM during grad school with a team of consultants that all did enterprise Java stuff for financial institutions- that was very different. IBM would frequently send those developers away for a week or more at a time, multiple times a year, to get training on specific technologies. I'm not sure how common that is anywhere other than IBM though, or if IBM even does that anymore. Maybe Google does it? I don't know.
Sometimes I deal with developers who either can't or won't teach themselves anything, and can't or won't learn by doing. They absolutely need someone to hold their hand and explain things to them every step of the way, and they will just throw their hands up in the air and fail before putting any time into trying to read up on whatever topic is giving them trouble. I don't know what to attribute this to, so I'm trying really hard to not jump to the conclusion that they suck or they don't care or whatever. I'm sure a lot of them do just suck at their jobs and/or just don't care, but maybe some of them have genuine problems with learning that aren't their fault. The only thing I can say for sure is that this is a trait that is a major impediment to their careers and getting their jobs done without sucking up too much of their cow-orkers' time (as we all know, orking cows requires long stretches of uninterrupted concentration).
TL;DR Spot on, and being able to develop your own technical skills to keep up to date and expand your horizons is absolutely critical to being a really successful developer. You are also the only person that you can count on to do this for you. You can't really count on any employer, even some mythical ideal company with bottomless resources that treats each employee as a magical snowflake, to do this. Even if your company does provide training, it's not necessarily going to be the training you want or need to receive.
> Today keeping up is a ridiculous job sometimes.
I think ultimately, many people in the industry only get to learn new tech when they leave for their next job. The pressure is momentarily reduced while they learn at the new job.
Just my 2 cents.
It was just three years ago that my main responsibility was maintaining code on a black & yellow terminal for a VMS server. Another couple of years and I could easily have been one of those people pushed out of the industry with no easy way back in.
Although my company has provided an avenue for me to transition to doing things with the LAMP stack, it is still in some sense legacy. It's a large website codebase that started over a decade ago.
I have made the choice that I'm done with being legacy and am doing whatever I can to learn current tech. I will even be willing, sometime later this year, to get a new job at a junior level just so I can cut loose the legacy code crap I am tied to. At this point it feels mostly like a bunch of anchors holding me down. I want a new job where I can learn from the people around me and truly be focused on my direction.
Of course it's all a product of culture and supply-demand (systemic), if there are enough great programmers that are willing to learn everything on their own time, then of course it will become the norm that programmers should learn everything on their own time. And, of course, that's great for the employers.
It feels to me that that's the sense in which the young man's comments were meant. It doesn't seem unreasonable in that light. So the compensation he'd like isn't entirely monetary in nature, that's hardly unique.
Got to say, I pity the person who eventually deposits too much money at once, causing the payouts to pause while the pool builds up enough to cover his large deposit, in turn causing everyone else to think that the site has stopped paying out, and so causing no further money to be deposited. That seems like the likely end of this, eventually...
Ponzi Scheme Enforcement Actions
Curtailing Ponzi schemes and holding those responsible for these scams accountable is a vital component of the SEC's enforcement program.
Since fiscal year 2010, the SEC has brought more than 100 enforcement actions against nearly 200 individuals and 250 entities for carrying out Ponzi schemes. In these actions, more than 65 individuals have been barred from working in the securities industry. The SEC also has worked closely with the U.S. Department of Justice and other criminal authorities on parallel criminal and civil proceedings against Ponzi scheme operations.
Keep in mind that BTC or virtual money is just another asset class.
The experiment is over. We will pay back everyone we can. We are not making money from this.
their "debt" - meaning, what other people have to bring into the system - is their balance + 20%. Right now, their debt is about 42 bitcoin.
Somehow their pulling the plug bothers me much more than losing the money would have. Especially because, at the rate it was going, payout was more or less ensured (300 BTC * 1.2 = 360; they quit 16 coins short). Right now, two hours after they supposedly would pay everyone back, I still have nothing.
I'm going to watch and see what happens to them. If they don't lose their 1 BTC, then that's at least slightly interesting.
EDIT: It's been more than an hour; no repayment yet.
The best solution is one that doesn't infringe on the "correctness" of the game, and it's a simple one. Simply play the game yourself. Send money in, let the system send money back. Do it a lot. Many small transactions. You will never lose, because when the game ends you are the owner of the actual account and won't get screwed like everyone else.
Right? So, perhaps many of the transactions we're seeing go into this are suspect and the total amounts aren't to quite be trusted?
This is personally why I wouldn't trust any program to handle currency so openly on the web. The inability of the average user to stress-test applications or put them through proper testing can cause quite a fault. Having experienced the methods that banks undergo for software cycles, I'd say there is only a tiny chance someone would have the resources to properly engineer something so fragile (relative to money).
Because of this I would assume the main reason the author actually shut the site down (or at least so suddenly) was because of scaling technical issues.
Deposited 0.05859407 BTC ( https://blockchain.info/tx/c5411ae7ad41d6dab5dd879c158cb81f0... )
Received 0.0599 BTC back ( https://blockchain.info/tx/ef7f32df518dabb104812ea4a12719026... )
1.2 * 0.05859407 = 0.070312884 BTC
So, somebody owes me 0.010412884 BTC
First, as per the central problem of Ponzi schemes, it is missing an "... if ever" at the end.
But further, it is oversimplified, because people can submit different amounts of bitcoins. Covering up that uncertainty -- which while apparent is not as clearly disclosed as the "Ponzi" aspect -- makes it harder for people to assess the likelihood they will get the promised return.
"The experiment is over.We will pay back everyone we can. We are not making money from this."
This is totally legit. Seriously.
"Warning, if people stop depositing money, you won't get 120% back, and you could lose all your money."
Also, it seems like the person running the site is not taking a cut. If that's right, he's not making a profit, and he's less likely to get in trouble when things collapse.
1.5 hours later 0.30BTC showed up in my wallet! Wow, did not expect that!
I first deposited .1 bitcoin to see if it was real and got 0.11978 back.
Ok, this will be fun.
I then deposited .8591 and got back 1.1999. Ha this is hilariously fun. More gambling!
I then sent 1.2001 and got back 0.8589. Wait what..
The game ended and he skimmed from me to hopefully pay someone else back and not pocket it.
* 100% C code
* support for linux and mac platforms
* console based: uses ncurses
* home grown async network i/o stack
* home grown poll loop
* home grown bitcoin engine
* supports encrypted wallet
* supports Tor/Socks5 proxy
* multi-threaded
* valgrind clean
Human stupidity captured in one website, from Carlo Ponzi himself to Bernard Madoff. The whole Ponzi scheme, portrayed on the web.
I love it.
needs more parentheses
Who wants to make some money before the prices get too high ;)
Code like this is extremely brittle with or without that git history. Don't rely on the history like a crutch: it makes the code obscure, and updating a line somewhere may leave other similar lines updated or not. If you wanted to update the triggerLayout function, you would have to go through the git commit log for every .clientLeft line to see whether that one was or wasn't used for triggering layout...
Why on earth wouldn't you place this explanation in a comment preceding the line of code itself?
One of the best things I ever learned in programming is that you shouldn't write code to be executed -- you should write code to be read and understood by other people.
It's going to take me forever to read your file and understand what's going on if I have to do a git blame on every other line.
Just use short purpose-based commit messages (fixes a bug where..., so now...), and then put the actual why behind the implementation in the source code comments itself!
But if you're going to do this trick and you use a code compiler of any sort, you'll need to assign the value of that clientLeft somewhere. Otherwise your compiler will notice it not doing anything and helpfully optimize it away. So your users in production will see your layout bug and you'll never be able to reproduce it in development.
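For reference, the pattern under discussion looks something like this (a hypothetical reconstruction, not the actual jQuery source):

    // Reading clientLeft forces the browser to flush pending style changes
    // (a synchronous reflow), so the animation that follows starts from the
    // freshly applied starting values rather than stale ones.
    // Assigning the result keeps a minifier from stripping the "useless" read.
    var forcedReflow = element.clientLeft;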
As to the article itself, I'd prefer to see a comment on a line as wacky as this one. It's one well-intentioned lop away from vanishing from that git blame entirely, and then six hours of debugging and research away from finding its way back into place.
Trying to sift through file history to understand what's going on is hard enough on code I wrote myself only a year ago. I wouldn't want to rely on it as the only way of digging into a large shared code base. Yikes.
Digging through version control comments seems to me a last-ditch effort to figure out what some code is doing. If the project had any significant history you could be digging through hundreds of commit messages. Why not just put a 1-liner comment above that line and save every subsequent developer the hassle of wondering what the heck that seemingly useless line does..?
Apart from that, to rigorously apply it would break the author's own advice or common sense source control practise - suppose I make 4 changes in separate files to fix a bug. Do I check them in separately so that I can put my pseudo code comments into the revision history? Now I have broken the atomicity of my revision history. If I check them in all together do I type a whole essay into the revision history about why each change was made in each file? It'll quickly all fall apart.
It also relies on the reader recognizing that they need to be curious about the code here. What if it didn't look so curious? It could easily get cleaned up or modified without a comment to alert the reader.
For people and projects in specific contexts it can work but there are plenty of situations where this is a terrible idea.
Spelunking through commit history shouldn't be necessary to learn the intentions behind those kinds of actions.
I suggest we take a step back and ask if modern version control is the best way to store historical information. Modern version control systems (git, mercurial, etc.) were built within the last decade or so, but they were built with the same constraints as the original version control systems of the 1980s. They are optimized to be disk-efficient (and don't get me started about their command line interfaces). This is crazy!
We should store much more about the programming process than the data gathered if and when a developer chooses to commit. We should record it all - every keystroke. No human-generated source of data is ever going to fill up our hard drives or the cloud. Don't optimize for the disk!
This data can be used to replay programming sessions so that others can learn exactly how the code evolved. Developers could then comment on the evolution of their code. Think of this as a modern commit message. I am working on a project that attempts to do this:
What he didn't point out though is that he was actually the one that contributed that code in the first place.
As others have commented, long explanations like this have no place in commit messages. Comments should always be used to explain what you're doing and why you're doing it. Commit messages should simply summarize what you did.
A better commit message would have simply been:
fix animate() for elements just added to DOM
See included comments for explanation.
, two = "bar"
, three = "baz"
And you avoid having to write your comments when you commit. You'd do it in the code when you are more focused on the change.
Even better would be detecting when you are changing code and prompting for the comment or let you select from recent comments.
Then on the SCM side when you commit, each comment could be handled as a separate commit.
Are any IDEs already doing some or all of this?
If I can't understand from the commit message what the change is trying to achieve, I won't even look at the code. Instead I'll ask for the commit message to be clarified first.
If you remove the line you should end up with a failing test.
gem install stefon
On the flip side, you could go about doing what you're doing under the presumption nobody is maliciously targeting your user base. In this scenario, it's possible you have a couple bad actors that see a net benefit greater than your bug bounties and are silently stealing and selling supposedly secure code from your users. You could be supporting a hacker black market where they sell and trade codebases to popular online sites. Imagine how easy it would be for them to find vulnerabilities in these sites if given access to the source code.
That, my friends, would be a catastrophe.
Could someone explain in simple English how they overlooked known & well-documented bugs that got them hacked (e.g. bug 3, about cross-domain injection)? I'm wondering: if someone of GitHub's caliber can be hacked so easily, what about the rest of the masses developing web apps? Especially all those new crypto-currency exchanges popping up left & right.
I've been toying with Django. Reading through the docs makes me feel that as long as I follow the safety guidelines, my app should be safe. It feels as if they've got you covered. But this post rattles my confidence.
$4000 !? Wow, I'd love to be able to make $4000 on the side just doing what I love.
> Interestingly, it would be even cheaper for them to buy like 4-5 hours of my consulting services at $400/hr = $1600.
This sounds like a pretty clever strategy for marketing yourself as an effective security consultant.
EDIT: $4000!? wow. so money. such big.
Maybe it's just me, but asking for donations after saying you bill clients at $400/hr seems weird to me. I wish I could bill at that rate.
Is there an easy way to see what vulnerabilities other websites have had and fixed, and to check if your site has them as well?
Can anyone recommend some reading material or some first steps I can take to work towards moving to a more security-focused career?
> Oh my, another OAuth anti-pattern! Clients should never reveal actual access_token to the user agent.
From what I understood by reading the OAuth RFC, front-end intensive applications (a.k.a. public clients) should have short-lifespan access tokens (~2 hours), with the back-end taking care of reissuing a new access token when one expires.
Can someone clarify how to make those calls from a front-end application without revealing the access token?
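Not an authoritative answer, but the usual pattern is a thin back-end proxy: the browser holds only a session cookie, while the server keeps the access_token and makes the upstream calls. A minimal sketch in Python/Flask (all endpoints and names hypothetical):

    from flask import Flask, session, jsonify
    import requests

    app = Flask(__name__)
    app.secret_key = "replace-me"  # signs the session cookie

    TOKENS = {}  # user_id -> access_token; lives only on the server

    @app.route("/api/profile")
    def profile():
        token = TOKENS[session["user_id"]]  # never sent to the browser
        upstream = requests.get(
            "https://provider.example/v1/me",  # hypothetical resource server
            headers={"Authorization": "Bearer " + token},
        )
        return jsonify(upstream.json())

The front-end then calls /api/profile like any other same-origin endpoint, and token refresh can happen server-side whenever the upstream call comes back 401.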
Friends don't let friends code in Fails frameworks.
But github, seriously? Why do you guys fail so hard at security?
Too much Brogrammer rather than programmer methinks.
The best strategy is to continuously brush up on skills. Experiment and dabble with new languages and frameworks as often as your time allows!
If you can turn a design into code, learn to turn a spec into a design.
If you can turn a spec into a design, learn how to understand a problem and produce a spec to solve it.
If you can understand a problem, learn to talk to people and discover the problems they have so you can solve them for them.
If you can do that, learn a million other things and run your own business.
[You can also skip any of these steps if you're happy managing people to fill in the downstream aspects rather than doing it yourself.]
Personally I find Go interesting and it's something I'm hoping to pick up in the coming year. It seems like a fun language, well suited for building web services that handle lots of traffic.
Lua might also be nice to learn. It's used for scripting in a lot of games. For example: in World of Warcraft you can create your own Lua add-ons. Lua can be easily integrated into your own apps / games, since it's just a small C library. It might be a good language to learn if gaming interests you, since lots of games make use of Lua in some way.
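To give a feel for how small the embedding surface is, here is roughly the minimal C host program (Lua 5.x API; build flags vary by platform):

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    int main(void) {
        lua_State *L = luaL_newstate();  /* create an isolated Lua VM */
        luaL_openlibs(L);                /* load the standard libraries */
        luaL_dostring(L, "print('hello from embedded Lua')");
        lua_close(L);
        return 0;
    }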
And as someone else already mentioned in this thread: functional programming will become bigger in the future. You can use the functional programming style with .NET if you choose to learn the F# language.
If it was easy to guess the next big thing everybody would do it ;-) A 5-10 year horizon is a very long time in computing years. Look back ten years. How many people were accurately guessing the current environment? How many of the big-things now even existed ten years ago?
When I look back at my career I can't point to a single instance of seeing the next-big-thing.
I can point to lots of great things that have happened because I'm continually poking at new ideas, new processes and new bits of tech. So I'm ready to take advantage when one of those does become the next-big-thing.
However, just a decade ago, a large majority of the Internet used JS primarily for form validation, which was sad. A lot of web developers were not comfortable leaving their code open for visitors to see.
I personally believe JS will continue to soar but I also believe that nobody can answer this question perfectly as nobody knows the future.
In any case, if you spend a lot of time learning any language very well, the time required to learn another language after that decreases substantially.
Then Node.js and Angular. You should be set for the next 10 years.
Outside enterprise, .NET is sinking into irrelevance. I don't know for Xamarin though.
Learning only one programming language (PL), people are limited to the scope of that language. Learning paradigms (better, in terms of one PL per paradigm), you gain knowledge that is "portable" between PLs of the same paradigm. You will get a boost when switching to another PL of the same paradigm: you learn the PL faster, looking into the PL's features and not its basics.
I recommend checking out, before you choose what to learn: Lisp dialects like Clojure, CL, etc.; Ruby; Go / Rust; Java.
Consider: North Korea is run by madmen who have at least some primitive nuclear capability. They regularly make wild accusations and threats against the US. Say they do manage to mount a nuclear device on a long-range rocket. What's to hold them back against launching it at a major US city? Whenever people discuss North Korean (or Iranian or...) nuclear capability, the usual line is that we don't need to worry about it that much, since it would obviously be crazy for them to use them against the US, or any other nuclear power. What is it that makes it crazy, when they've already done so many terrible things to their own people?
It's crazy because, according to MAD, any such attack, or even a specific threat to make such an attack, would result in a full-scale launch against their country. Millions of casualties, the total destruction of their culture and way of life. Everyone in the world, most especially leaders in North Korea, and China, fully believes that the US will carry out this threat if attacked with nuclear weapons.
Now, let's say Iran manages to detonate a primitive nuclear device in a coastal US city. The textbook MAD reply is the total destruction of every Iranian city. You can make the case that this is crazy on its face - there is no imminent threat to stop, and those millions of people who would die didn't do anything to deserve it. Say that what Hering and the article author seem to want happens - that no weapons are launched, and a more measured, conventional reply is used. What do you think the North Korean leaders will think then? Or China and Russia? That's the more important question to ask.
After that, might North Korea think that they can use a nuclear attack to try and extract some sort of diplomatic concession from us? They're a harder nut to crack with conventional weapons, and they have more firm backing from China. If they get the idea that our MAD policy is toothless, they might try something that could lead to a much greater war, even possibly a much bigger nuclear war.
We've been living in a world for a long time now where the Kim Jong-uns of the world have very good reason to be terrified of using nuclear weapons against the US. Are you willing to see what happens if that is no longer true?
There are terrible people in this world who are prepared to do terrible things to everything we hold dear. To keep the world safe and stable, those people must believe that we will do even more terrible things to them if the situation calls for it. Keeping that belief in place may sometimes require us to actually do some terrible things ourselves.
The President cannot order the use of nuclear weapons on his own; he can only issue the order jointly with the Secretary of Defense. The article mentions that Nixon's SecDef asked people to "check with him" before carrying out orders from Nixon, which may be a sort of garbled reference to the two man rule, but if so it's very garbled.
Whether this rule actually answers Maj. Hering's question is a separate issue. But I find it disappointing (though unfortunately not surprising--journalists often get things like this wrong) that the article repeatedly talks as though a single person can issue the order, when that's not the case.
The "emergency" judge simply declared that what they asked for was not following the specific emergency criterion (basically they didn't believe the irremediable damage part), and he simply let go the sanction for now. Another Court will actually judge the appeal itself. If they win the appeal, they get their money back, and some bragging rights.
here is the PR from the actual court: http://www.conseil-etat.fr/fr/communiques-de-presse/sanction...
But this is good news; small, yet welcome. It does not mean that France is not in on mass surveillance itself (this might just be a diplomatic maneuver). But it does mean that at least a few have realized that the privacy of citizens is not something you can mess around with.
(You can get out by hitting Escape, but that doesn't make it okay.)
The article talks about how unlikely the shuttle was to achieve the expected 500 flights, and how it would more likely manage only 200. (Real number: 135.)
Some of the quotes from the article are scary in retrospect:
Quote: "Here's the plan. Suppose one of the solid-fueled boosters fails. The plan is, you die."
Another quote: "When Columbia's tiles started popping off in a stiff breeze, it occurred to engineers that ice chunks from the tank would crash into the tiles during the sonic chaos of launch: Goodbye, Columbia."
Remember, this article is from 1980, before the shuttle launched.
For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.
It captures in a capsule form the reasons for a huge fraction of all the big engineering catastrophes, maybe even most of them. For everyone interested in similar case studies, and in reliability from a wide engineering perspective, I strongly recommend the book "Design Paradigms: Case Histories of Error and Judgment in Engineering" by Henry Petroski.
Thus, a failure with loss of vehicle and of human life of 1.48 in 100 (2 losses in 135 flights).
The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management.
The reality was even more dangerous than the engineers had predicted, and far more dangerous than management had.
I constantly see the dynamic observed in the first paragraph, and it would seem that the question "What is the cause of management's fantastic faith in the machinery?" is eternal.
The first loss came after careful monitoring of near burn-throughs of the SRB O-rings on many flights, but no decisive action.
From the New York Times review:
In "The Challenger Launch Decision" Diane Vaughan, a sociologist atBoston College, takes up where the Rogers Commission and Claus Jensenleave off. She finds the traditional explanation of the accident -- "amorally calculating managers intentionally violating rules" -- to be profoundly unsatisfactory. Why, she asks, would they knowingly indulge such a risk when the future of the space program, to say nothing of the lives of the astronauts, hung in the balance? "It defied my understanding," she says.
Look at the closed-source services Google adds. As far as I know they are all related to Google services (someone please correct me if I'm wrong): their store, their maps, their email, their location services. These aren't needed in the open-source distro, and it works great without them. Also, there are a lot of restrictions on the brand and how you use it when releasing an Android product. That isn't unique to Android. See Firefox vs. Iceweasel.
Sure, sometimes it gets annoying that a lot of Android design decisions are made behind closed doors, and working in the framework I sometimes have to play the game of "guess what the Google engineer was thinking" because documentation can be scarce. Also, from what I've heard, sending changes upstream isn't an easy process. Those things would be nice and would make it an easier product to work with... but they aren't required for it to be truly open source. The code is there in a series of open git repos under the Apache license. That is open source in my book.
The article, in the course of explaining why it can't be done, names two major examples of where it already has been done successfully -- Amazon's Kindle ecosystem and any number of Chinese OEMs. It doesn't mention other (admittedly less successful) forks, like B&N's Nook tablets or the Ouya. It also doesn't mention how far along the road Samsung was to having the ability to ship Android without Google Mobile Services, until Samsung and Google made a peace treaty that involved sending Motorola off to live with Lenovo.
Yes, if you fork Android, you lose Google's ecosystem. It's not impossible to duplicate, though -- Amazon's done it, Samsung just about did it. And Microsoft already owns all the things it'd need to do it -- that's how Windows Phone has an ecosystem. Losing Google's ecosystem isn't the downside of forking Android, it's the entire point.
Once you've done it, though, you need to convince people to use your fork instead of Google's. Microsoft's success at prying people towards Windows Phone and away from Android can basically boil down to:
1) The ability to run on lower-powered and thus cheaper hardware and still provide a polished experience, and
2) Nokia's build quality.
Switching OS cores to AOSP instead of the current Windows Phone OS wouldn't entirely solve Microsoft's app problem (look at the Amazon app store), and it would piss away the only competitive advantage their platform (as opposed to their OEM partner) has against Google's Android experience. Microsoft isn't Amazon -- they aren't a cloud company looking for an OS to give to consumers, they already have an OS. They just need to make their ecosystem more appealing, and giving up on Windows Phone now wouldn't do that.
"If it's a core business function do it yourself, no matter what." http://www.codinghorror.com/blog/2008/10/programming-is-hard...
MS certainly will view "a computer in every pocket, and all of them running MS Software" as core to their future. If they do abandon Windows phone, it will be because that has changed.
In the meantime, MS is a company with a track record of plugging away until version x of the product is good enough to succeed.
Hugo Barra ended up at a Chinese company (Xiaomi) and Google just invested in Lenovo, but unless there's a big policy change on the way the Android/China beast will get further and further out of control.
MS absolutely should weigh in with a privacy hardened Android with great Exchange and Active Directory support (and Nokia Maps). It would be huge, and it would force Apple and Google right on to the defensive.
My own experience with developing FOSS is also that other people don't try to integrate their solutions into your system. Everybody heads off and makes their own stuff.
Stupid forking is no problem for a project like GCC. But Android is a brand name. And if other people fork it and head off doing their own stuff and fail, then it is always Android that fails.
And really as a developer I also fight for the freedom of software, but as an end user I want to be able to go to a shop and buy an "Android phone" and it just works. Therefore, yes, please, Google take control! Good job!
I wouldn't go back to android or ios. I don't think they're as usable or fit me as well.
Anecdotally, my wife also joined me on Windows Phone recently after dropping her android phone. She started on iPhone, lost it, was gifted my Razr Maxx, then broke it. She liked the UI of ios, but loved Swype on android and said she'd never go back to ios. Then while we waited a couple weeks for our phone upgrade, she played with my phone and ended up really liking it. She now says it is her favorite phone (Nokia Lumia 920). She likes the camera and the excellent apps from Nokia.
Obviously I don't want MS to switch to android. There may be more apps, but so many are of such poor quality that it is entirely irrelevant to me. Same goes for the apple store. It's almost overwhelming how many bad apps there are.
Suggesting that Microsoft would fork Android is more wishful thinking than anything else.
Why not just say - "Android is unforkable" and leave it at that.
The only question would be "what for"? They'd gain nothing from it. They already have a more or less portable kernel upon which they can build phones.
"Firefox ships, but we shouldnt really pay attention"
"Android OEMs should hear Microsoft, Nokia out on Google-Motorola combo"
"Android tablets may provide sales, but profitability is another matter."
Sadly, judging by the old argument rehash going on here so far, it looks like it's working :(
It seems that if a lack of apps is your main problem, then reducing the time to port an app to your ecosystem should be a high priority. Sure, you'd have to re-implement some proprietary API features, but they likely already have equivalents in the Windows Phone SDK (location services, in-app purchasing, etc.).
From a third-party app point of view, you can build your app against Android API level X, or a corresponding Google API release associated with the given API level. An app built against API level X will work seamlessly on the corresponding AOSP release.
So far, unless you're interested in integrating more tightly with Google services, you don't need to build your app against the Google APIs, but obviously Google are interested in app developers using their custom APIs.
As far as the increasing integration of core applications into GMS is concerned, it is rather overblown to call the AOSP versions broken or buggy. AOSP remains the base platform for the hardware ecosystem to develop their reference designs, AOSP has to work and does work well.
The hardware domain is a big problem for rolling your own OS from scratch; the associated software stack to support a given piece of hardware is non-trivial. Even generic Linux is now being supplanted by Android variants in the embedded space, especially if you're interested in graphics or multimedia.
However, also consider that the most successful player in the Android space, Samsung, has pursued a strategy of lightly forking Android with its own features and customizations, without deliberately breaking compatibility.
In 5 years smartphones will be a cheap commodity and there will also be a good open-source community fork. I can see Debian on Android.
I think a more interesting question is whether Microsoft should fork AOSP as a whole, or place an Android compatibility layer atop Windows Phone, a la BlackBerry 10.
What prevents MS (or any other big player) from re-implementing GMS?
If that sentence were true, then stuff like the phone app would not work; obviously it does :)
The author does not realize that AOSP is a snapshot of the full Android OS.
Linux's biggest problem has been that Microsoft moved much faster to get Windows on as many PCs as possible through certain corporate deals. But in terms of gaining market share, the Linux ecosystem has also worked against itself, by allowing everyone to fork it into hundreds of different distributions, all doing different stuff, with barely even a weak app store spanning several distributions.
Linux is "everywhere", because everyone can fork it, and Android has certainly benefited from this strategy in the early years, too, but that seems to be an antithesis to an "ecosystem". As we can observe, even though "Linux is everywhere" in all sorts of devices, there's no significant "ecosystem".
Google wants to keep and evolve the Android ecosystem, because that makes it much easier for users, and also for developers to build on top of a well-standardized ecosystem of devices and OS images. I guess for a proper ecosystem to thrive, it needs to be controlled and standardized as much as possible, with restrictions for OEMs and carriers.
The only alternative for the others, if they really want to start from the Android base, will be to form their own ecosystem, but that's very hard, unless we get to the point where only the web matters on mobile devices, too.
Gingerbread: 20%
Ice Cream Sandwich: 16%
Jelly Bean: 60%
KitKat: 2%
What I don't understand is why it first asks you for a pairing code from another device. My only other device was a desktop far away. After I logged in and clicked "reset sync key" it apparently lost all of the synced data!
I seriously hope this solves the currently heinous sync process. Just let me log in and authorize myself, for goodness' sake.
If a user couldn't figure out how to set up Firefox Sync previously by following the instructions and taking a set of digits from one device and entering them into another, what hope have they of picking a strong and unique password?
Not coincidentally, they just put the Cloud and Enterprise guy in charge of the company.
Although I hear that neither Sony nor Microsoft has made a lot of money from their console adventures.
However, on the consumer path this interested me:
>As the console gaming industry evolves (dies?), Nadella needs to convince America that the Xbox is truly a living room feature, not simply a gaming device. If he can sell that concept, Microsoft will leapfrog Sony and recapture the lead.
Despite the claims of the "living room console" being made for years, I really can't find any evidence for it. I can't see why a $499 console will beat out a $99 Apple TV for OTT entertainment in the larger consumer market. I still think this idea that the gaming console will become "the box" has roots in the early 2000s, when people only had 1, maybe 2 televisions at home. If "John Jr." is playing video games for 4 hrs/day, he commonly does it in his room, not in the living room where he would intrude on "John Sr."'s decision to watch the Netflix. Sure, they could buy 2 Xbox Ones, but John Sr. doesn't really need all that gaming power.
OK, here it is: http://instantclick.io
And probably a ton of other application bugs, as style and script stuff won't load like it normally would.
There are a couple things that I had on my TODO list that could be handy though:
1) Caching - if you hover back and forth over two links, it will keep loading them every time. Dunno whether this can be alleviated or not.
2) Greater customisability. It'd be great if I could customise whether it was a hover or mousedown preload, on a per link basis. Some links benefit from hover, others it might be overkill.
3) Lastly, it would be cool if it could link up with custom actions other than just links. For example, jquery ajax loading a fragment of html to update a page. This is probably lower down on my priority list though, as the full page prefetch works remarkably fast.
Keep up the great work!
For many companies (Facebook, Twitter, etc) the desire for instant user gratification is paramount, so the push toward instant browsing experience is a very real possibility. One problem is that most people wouldn't really notice, because these websites load pretty quickly as it is.
One interesting direction is if there was some kind of AI in the background that knows what pages you're likely to visit and preloads them - Facebook stalking victims would become an instantclick away.
One interesting reaction I had: things loaded so fast that I didn't notice one of the page changes and thought it was stuck. For sites like this one where different pages look very similar, maybe it could be worth experimenting with some sort of brief flashing animation (to make it look like a real page load)?
I set the preload to occur on mousedown rather than mouseover, as per the docs, but even with this I noticed near-instantaneous page loading.
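For the curious, the core mechanism can be sketched in a few lines of plain JS (a generic illustration, not InstantClick's actual source): start fetching a link's target on mousedown, so the network round-trip overlaps the ~100 ms a full click normally takes.

    // Ask the browser to start fetching a link's target on mousedown,
    // before the click event even fires.
    document.addEventListener('mousedown', function (e) {
      var link = e.target.closest ? e.target.closest('a[href]') : null;
      if (!link) return;
      var hint = document.createElement('link');
      hint.rel = 'prefetch';  // a hint; the browser may ignore it
      hint.href = link.href;
      document.head.appendChild(hint);
    });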
unless you use vimperator or similar. the demo handles this though, giving a hover time of infinity.
>Click Mousedown = 2 ms
>Click Touchstart =
Thanks for sharing :)
can you provide some specific definitions? thank you
I can't think of anything more distracting than trying to fit in 5 meals/day. I lost a lot of weight doing the slow-carb diet, which is eating 4 times/day, and then gradually shifted to the warrior diet, which is essentially one large meal/day. It takes a few days to adjust, but once I did, my ability to focus was unreal. Not every day, but some days I feel like I am on Adderall, and it is awesome. Anyway, just wanted to throw it out there: eating 5 times a day can be a lot for some, and I have found other HN'ers out there who practice one meal per day. I really, really enjoy it.
I'm reminded of Daniel Kahneman's closing notes in his book 'Thinking, Fast and Slow'. After listing all the biases and quirks (to which he has devoted his life), he writes:
What can be done about biases? How can we improve judgments and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. As I know from experience, System 1 is not readily educable. Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely: This number will be an anchor..., The decision could change if the problem is reframed... And I have made much more progress in recognizing the errors of others than my own.
" Eliminate distractions, and I mean eliminateFor a long time theres been a fad in our industry of having open workspaces. While being right next to someone and being able to just look over and ask a question is ideal for communication, it can be the opposite for concentration. Headphones with loud music dont solve the problem either. What I believe works best is quiet. Can you imagine taking a final exam in college with someone blasting music? You cant concentrate at your best when any sort of external stimuli is demanding some of your attention. It needs to be quiet, and free of any visual distraction as well. People walking by, a television, anything like this should be avoided for you to stay in the zone. If your office doesnt have a quiet distraction free area to work in, take it up with your manager. Im personally lucky enough to get to choose when to work from home, and I often do so when I have a large piece of work cut out for me that I dont need to communicate much more on."
But, I honestly don't see a ton of value in open-sourcing the entire codebase as-is for my company. Not that I would worry about people stealing our ideas, because like most businesses ours is about relationships, support and various other things rather than the "secret sauce" in our code. I just don't see anybody bothering to contribute since our customers are not programmers (not even very technical in many cases). I have no idea why any programmer in his/her right mind would want to go through our product to fix bugs or add features.
I can see it being a different thing if your product is aimed at other developers and/or it has a broad appeal. If there are developers who would be interested for some reason to participate, then I can see it being a great thing. I just don't think it's necessarily helpful for every company.
Here are some other costs to consider, courtesy of Yehuda Katz:
1. Reviewing all of the code that you want to open source for secrets that could compromise security.
2. Improving parts of the code that are embarrassing or too coupled to infrastructure that isn't going to be made open source.
3. Additional communication overhead for communicating with the open source community so that contributors don't do work that you're already working on.
4. Time spent triaging and working with features that may not have been high internal priorities (or risk pissing off the open source ecosystem).
5. A general willingness to cede control over the precise direction and priorities to a larger group of open source people.
Aaron Parecki adds:
6. Support costs of helping people get their dev environments set up.
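On cost #1, even a crude automated pass helps before the human review. Here's a minimal sketch of a regex-based secret scan; the patterns, file extensions, and output format are illustrative assumptions of mine, not any particular tool's rules:

    import pathlib
    import re

    # Naive patterns for a few common credential shapes. A serious audit
    # would add entropy checks and provider-specific rules; everything
    # here is illustrative only.
    PATTERNS = {
        "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
        "generic_secret": re.compile(
            r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    }
    EXTENSIONS = {".py", ".rb", ".js", ".yml", ".env", ".cfg", ".json"}

    def scan(root="."):
        for path in pathlib.Path(root).rglob("*"):
            if not (path.is_file() and path.suffix in EXTENSIONS):
                continue
            text = path.read_text(errors="ignore")
            for name, pattern in PATTERNS.items():
                for match in pattern.finditer(text):
                    print(f"{path}:{name}: {match.group()[:40]}")

    if __name__ == "__main__":
        scan()

Anything it flags still needs a human eye, and scrubbing a secret means rotating it and rewriting history, not just deleting the line.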
But Yehuda, obviously, is in favor of open-sourcing as long as you understand those costs, and lists these advantages, most of which the article also notes:
1. Gaining additional contributions from open sourcers that would have been expensive or technically impossible to do in-house.
2. A vibrant community of people that are interested in the product, its direction, and are knowledgeable in the implementation.
3. People willing to do cleanup work in order to become familiar with the project and become contributors.
4. Getting insight into product direction by people willing to put their money where their mouth is and dedicate time to implementation (this is the flip side of some of the negative above).
5. A recruitment pool that is already familiar with the product and its implementation.
Even if we got contributors banging down the door tomorrow, I am not sure we could spare the people-hours to properly make use of them. Catch-22! Right now most of our community contributions have come in the form of help translating the app (more than a dozen languages now!), which is great, but only one part of the puzzle.
This is our github: https://github.com/loomio/loomio - as you can see we have pretty basic documentation. What would be the most important information we'd need to help people be able to contribute more easily?
So now it looks like the CodeCombat blog is doing some crazy spammy thing with popups when you hit the site. Could a moderator fix the URL?
Also, there is minimal danger of a bank run. MtGox takes a cut of every transaction, so over time it accumulates more BTC than users have on deposit. And since Magical Tux (the owner) is motivated not to go to jail, he will not run off with people's BTC.
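A toy back-of-the-envelope version of that claim (all numbers are hypothetical; the point is only that fee income scales with trading volume, not with the size of deposits):

    # Hypothetical figures for illustration only.
    fee_rate = 0.005           # assumed 0.5% fee per trade
    deposits_btc = 100_000     # total customer BTC on deposit
    daily_volume_btc = 20_000  # BTC changing hands per day

    days, fees_earned = 0, 0.0
    while fees_earned < deposits_btc:
        fees_earned += daily_volume_btc * fee_rate
        days += 1
    print(f"Fee income matches deposits after {days} days")  # 1000 here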
For anyone who wants the truth, rather than speculation, here it is: http://www.reddit.com/r/Bitcoin/comments/1x93tf/some_irc_cha...
Mtgox is the giant raging zit on the face of Bitcoin. The software and service are so terribly, inexcusably bad, and they have always been that way. The market price on Mtgox has been artificially high ever since I can remember, because no one can withdraw their money. Users wait for their withdrawals for weeks, and sometimes months. As a result, a dollar in your Mtgox account is worth less than a dollar. Now it is artificially low, because people can't transact Bitcoins either! The other exchanges, meanwhile, tend to agree pretty closely, because it is actually possible to move bitcoins and dollars between them.
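To put a (hypothetical) number on "worth less than a dollar": if BTC trades at $800 on exchanges where withdrawal actually works but at $1,000 inside MtGox, a trapped Gox dollar is implicitly worth about 80 cents:

    # Hypothetical prices; only the ratio matters.
    price_elsewhere = 800.0  # USD/BTC where fiat withdrawal works
    price_on_gox = 1000.0    # USD/BTC quoted inside MtGox
    print(f"1 Gox dollar ~ ${price_elsewhere / price_on_gox:.2f}")  # $0.80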
Imagine if ETrade, say, was so bad at moving money that you had to pay an extra $100 for a share of Google stock. That's what it's like. Or if everyone used Google's DNS servers but they typically took 3 seconds to respond. And yet people use Mtgox, and talk about it, and it's listed on all the sites and apps that track the market price on different exchanges.
At first, everyone chalked up Mtgox's problems to the sheer difficulty of running a Bitcoin exchange, and then, to the difficulty of running the biggest. Well, now it's no longer the only or the biggest exchange, and no one else is having the same problems, at least at this service-destroying level. I can't comment on the difficulty of establishing relationships with banks to move millions of dollars -- I'm sure it's hard -- but Mtgox has also had a lot of plain old scaling issues, and the transaction load is not even that high by web scaling standards. Last time I checked it was 30 per second or so. Their security is nothing special; they've been hacked before. I have heard only negative things about their engineering competence.
I'm a casual fan of Bitcoin but a big hater of bad software, bad customer service, and companies that act in a sleazy manner. I hope Mtgox dies.
On the upside, Bitcoin seems like a perfect laboratory to re-run the banking experiments leading up to the 1930s, when, you know, the "communist" FDR imposed banking regulations that some still hate today. :)
This analysis describes a Bitcoin transactional defect of which I wasn't previously aware. However, from a vaguely similar, much smaller incident involving the default client, I know that having a local wallet confused about which prior transactions are truly spendable can take significant time and custom effort to unwind.
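For context, the defect being discussed appears to be transaction malleability: a Bitcoin txid is just the double SHA-256 of the serialized transaction, signature bytes included, so a relay that re-encodes the signature produces a new id for an economically identical transaction. A minimal sketch, where the byte strings are toy stand-ins rather than real transactions:

    import hashlib

    def txid(raw_tx: bytes) -> str:
        # A txid is the double SHA-256 of the full serialized transaction,
        # conventionally displayed byte-reversed; signature bytes are part
        # of the hash input, which is the root of the malleability issue.
        return hashlib.sha256(
            hashlib.sha256(raw_tx).digest()).digest()[::-1].hex()

    # Toy stand-ins for two serializations of the "same" payment, differing
    # only in how the ECDSA signature is encoded (e.g. s vs. n - s).
    # Real transactions are binary and much longer; these are illustrative.
    tx_as_signed = b"\x01\x00...inputs...SIG(r, s)...outputs..."
    tx_mutated = b"\x01\x00...inputs...SIG(r, n-s)...outputs..."

    print(txid(tx_as_signed))  # the id the sender records and watches for
    print(txid(tx_mutated))    # different id; the coins move identically

If software tracks a pending withdrawal by the original txid, the mutated variant confirming instead can make the payment look like it never happened, which is exactly the kind of wallet confusion that's painful to unwind.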
For science fiction I'd change Dune... Even though it's a great book and I like it (I even read Herbert Jr.'s sequels... spoiler: don't), I'd pick something "shinier": Heinlein's Stranger in a Strange Land, or 2001: A Space Odyssey... or go with the weird and pick Rendezvous with Rama. Or just Foundation. Sci-fi-wise you can't go wrong with Foundation.
Ninja-edit: How could I forget Star Maker by Olaf Stapledon? It was written in the late 30s, and I didn't expect much from it as sci-fi goes, but I read it in one sitting and ended with a headache, dizzy and hungry. It was well worth it.
For Murakami, I'd pick instead a relatively unknown book of his: Hard-Boiled Wonderland and the End of the World. An Inception-esque plot-inside-plot book, set in an almost Neuromancer-like setting. I love it.
For Dystopian... Even though I have not read The Giver, just classification-wise I'd have to pick Shades of Grey (Jasper Fforde's book, not to be confused with another, numbered, similarly named book). It was a thrilling read (I think it's the best novel I've read in the past 3 or 4 years, though I don't read that much fiction lately), and it's sadly part of a trilogy waiting to be finished. Beware: once you are done with the book you'll want to go to Britain and tie Fforde to his desk until he finishes the next one.
The books that surprised me, though, are incredibly well spotted. I like that Guns, Germs and Steel is there. It's been on my reading list for... 3 years already (I have it, but it's a heavy book, so for a commute I'm always tempted to pick an ebook or thinner material instead), because the theme is so compelling. The Right Stuff is not the usual book you see in a best-of list, but for me it should be in all of them. Heck, writing from it is used as an example of good writing in On Writing Well (which is itself a surprisingly good read).
The Long Goodbye, by Chandler. Chandler is great, period. Having one of his books in this list validates other books I'd never consider... though you can't have a Chandler without a Hammett. You can't go wrong with a book by Hammett; I'd probably pick The Maltese Falcon. A classic.
Of course, there are some books that I'd personally treat to a Bradbury process... Catcher in the Rye and On the Road are two books I was looking forward to reading (not being from an English-speaking country meant I didn't get to read them in high school) and found dull. I guess reading them in a different context would have made a difference, but I couldn't see what all the praise was about. Personal opinion, though.
Didn't expect quite that much young adult literature: Harry Potter, Lemony Snicket, The Golden Compass, The Giver (plus Lord of the Rings). All major film franchises (well, The Giver is in production).
Was amazed to see The Phantom Tollbooth. One of my favourite books of all time, though I haven't read it since I was a child and honestly have never met another person who'd heard of it!
The Handmaid's Tale is far better, but is listed as "Feminist Speculative Fiction"; if I'd read the category first, I would have skipped the book, but the book is great, along with her other writing.
In a list of 100, I'd probably include 3: The Handmaid's Tale is fine; then Snow Crash, and maybe a Heinlein or another "golden age of sci-fi" choice.
(Edited to fix incorrect title, thanks)
I half expect Justin Bieber's biography to make the list.
Goodreads and Amazon have become as useless as IMDB when it comes to ratings. Both are prime examples of why the "wisdom of the crowds" is bull. Guess Nicholas Carr was right after all.