My name is Jono and I started as Director of Community back in November at GitHub. Obviously I am pretty new at GitHub, but I thought I would weigh in.
Firstly, thanks for your feedback. I think it is essential that GitHub always has a good sense not just of what works well for our users, but also of where the pain points are. Constructive criticism is an important part of doing great work, and I appreciate how specific and detailed you were in your feedback. Getting a good sense of specific problems provides a more fruitful beginning to a conversation than "it suxx0rs".
I am still figuring out how GitHub fits together as an organization, but I am happy to look into these issues and ensure they are considered as future work is planned. We have a growing product team at GitHub that I know is passionate about solving the major pain points our users run up against. Obviously I can't make any firm commitments as I am not on the product team, but I can ensure the right eyeballs are on this. I also want to explore with my colleagues how we can be a little clearer about future feature and development plans, to see if we can reduce some ambiguity.
As I say, I am pretty new, so I am still getting the lay of the land, but feel free to reach out to me personally if you have any further questions or concerns about this or any other issue. I am at firstname.lastname@example.org.
> We've gone through the only support channel that you have given us either to receive an empty response or even no response at all. We have no visibility into what has happened with our requests, or whether GitHub is working on them.
I'd like to call out that the GitHub user @isaacs maintains an unofficial repository whose issue tracker serves as "Issues for GitHub". Opening a place like that to organize bugs is little more than a token of goodwill from a user (GitHub: you are lucky to have such a userbase!), but it's the best thing I know of for checking "has someone else thought of this?". Many of the issues filed there are excellent ideas.
Though I'd say that if you file something there, you should also go through the official channel, even if just to spam them so they know people want that feature.
GitLab is an open source repository manager that supports local installs as well as public hosting at gitlab.com. If the author appreciates open source, perhaps they should put their efforts into improving an existing open source option rather than relying on a proprietary solution.
GitHub used to bill itself as "Social Coding", but the "Network" graph has not seen ANY updates since its original introduction in April of 2008. Issues has seen very few updates. Even the OSS projects that GitHub uses internally have grown stagnant as GitHub runs on private, internal forks and maintainership passes to non-GitHub-employed individuals (e.g. https://github.com/resque/resque/issues/1372).
The word "Social" no longer appears on GitHub's landing page. They're chasing some other goal...whatever it is.
If GitHub is kicking back and sitting on their huge valuations, then it's time to pick up this work again. If issue tracking and code reviews were based on a common, distributed system like git itself, then all these companies could compete evenly for features and UX on top of such a system, without ever having the advantage of "locking in" its users with extremely high migration costs.
Interesting side note: with the exception of Selenium, most of the signees are maintainers of JS/HTML OSS projects. I wonder if we could objectively compare JS projects to <lang> projects in terms of the problems mentioned in the document. For example, is there a strong correlation between +1'ers and JS repos versus Python, or vice versa? Perhaps we would walk away concluding that JS devs are more chatty than C++ developers when discussing issues... I don't know, just a thought.
> Issues are often filed missing crucial information like reproduction steps or version tested. We'd like issues to gain custom fields, along with a mechanism (such as a mandatory issue template, perhaps powered by a newissue.md in root as a likely-simple solution) for ensuring they are filled out in every issue.
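For concreteness, the kind of thing the letter asks for might look like this hypothetical newissue.md (the filename is the letter's suggestion; the fields are my guess, not an existing GitHub feature):

```markdown
### Version tested
<!-- e.g. 2.3.1, or a commit SHA -->

### Steps to reproduce
1.
2.

### Expected behavior

### Actual behavior
```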
Every checkbox, text-field and dropdown you add to a page adds cognitive overhead to the process and GitHub has historically taken a pretty solid stance against this.
From "How GitHub uses GitHub to Build GitHub": http://i.imgur.com/1yJx8CG.png
There are tools like Jira and Bugzilla for people who prefer this style of issue management. I hope GitHub resists the temptation to add whatever people ask of them.
Taken all together, it seems like github is on a path of alienating their most valuable members. Github was unresponsive to Linus' feature requests and it turns out that theme continues almost 3 years later.
If github plans to evolve into a full-featured ALM like MS Team Foundation or JIRA, instead of being relegated to a "dumb" disk-backup node for repositories, they have to get these UI workflow issues fixed.
Ummm ... anybody getting the irony here?
And, from a GitHub business perspective, why do I hear Lily Tomlin: "We don't care. We don't have to."
Everybody anointed GitHub as "the chosen one" over strenuous objections from some of us that creating another monopoly for open source projects is a bad idea.
Pardon me for enjoying some Schadenfreude now that GitHub leveraged the open-source adoption into corporate contracts and now doesn't have to give two shits about open source folks.
Lily Tomlin's Phone Company Sketch: https://www.youtube.com/watch?v=CHgUN_95UAw
Making an issue or a pull request feels like having a casual chat with the project maintainers. Adding fields and other hoops to jump through puts distance between people.
Well, there's your problem right there.
(I have sooooo much more in this vein but I'll spare you. ;-)
EDIT: No I won't. Fuck it. This is too ridiculous.
These guys (and they are all guys) chained themselves to github's metaphorical car and now they're complaining that the ride is too bumpy and the wind is a little much.
Don't whine about not getting to sit inside the car! Unchain yourself and go catch one of the cars where the doors are unlocked and open and the driver and other passengers are beckoning you to join them. (Apologies for the mangled metaphor.)
These folks come off to me like masochistic babies.
The only aspect I can think of where GitHub has the advantage is its community of developers, but does it really matter that much? Especially for established/big projects that probably don't care about the fork/star numbers, or about the random lookers-around who pass by.
That said, we do sometimes consider setting up an official mirror on GitHub. Ideology aside (some team members might think we shouldn't promote a proprietary solution for a free software project), the main thing that puts us off is that there is no way to disable pull requests. Closing all pull requests by hand is not appealing; leaving all pull requests open is not desirable. We could probably write a bot to close pull requests, but that is just yet another administrative burden.
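A minimal sketch of such a bot, using GitHub REST API v3 endpoints that do exist; the repo name, token, and message are placeholders:

```python
# Sketch of a close-all-pull-requests bot for a read-only mirror.
import requests

API = "https://api.github.com"
REPO = "example-org/example-mirror"              # placeholder
HEADERS = {"Authorization": "token YOUR_TOKEN"}  # placeholder

def close_open_pull_requests():
    prs = requests.get(f"{API}/repos/{REPO}/pulls",
                       params={"state": "open"}, headers=HEADERS).json()
    for pr in prs:
        n = pr["number"]
        # Point contributors at the real channel first...
        requests.post(f"{API}/repos/{REPO}/issues/{n}/comments",
                      headers=HEADERS,
                      json={"body": "This is a read-only mirror; please send "
                                    "patches to the project mailing list."})
        # ...then close the pull request.
        requests.patch(f"{API}/repos/{REPO}/pulls/{n}",
                       headers=HEADERS, json={"state": "closed"})
```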
Not sure if GitHub will ever consider allowing users to disable pull requests though. That seems to go against GitHub's core interest.
React Native, the open source project, is using Product Pains instead of GitHub issues for bug reports and feature requests. This is because there were thousands of open issues and, just as this document mentions, it's impossible to organize them. The comments are all "+1" and it's really hard to tell what's important and what's just noise.
If you take a look at https://productpains.com/product/react-native?tab=top you'll see the power of being able to vote on these issues.
So why's Product Pains relevant?
1. It's a temporary alternative to GitHub issues. I'm guessing GitHub will get to adding votes eventually. If you want to use Product Pains for organizing issues for your open source project, go for it. I'll even give it away to you for free.
2. It's a community dedicated to improving products. This document is chock-full of great, constructive, actionable feedback. Product Pains is a community built for posting exactly this. You can post feedback publicly, about any product, people can vote on it, and posts with a lot of votes create a social responsibility for the company to respond.
3. It's a way for your voice to be heard. Posting on Hacker News lasts a day and will get your voice heard. If you post actionable, constructive feedback on Product Pains, and 150 people vote on it, it lingers waiting for GitHub to do something about it. Around 600 users on Product Pains are also React Native developers. They'd probably be ecstatic to vote on constructive feedback for GitHub.
For example, go make an account and vote here: https://productpains.com/post/github/implement-voting-for-is...
- Multiple assignees for an issue
- An "Approve" button so that maintainers can stamp a PR with the seal of approval
It'd be really nice if I could custom sort the queue of issues so that I know what's next up in my queue of things to do; right now I've got 5 tags called NextUp:1 -> NextUp:5 on each repo; this takes way more manual updating than a simple drag/drop widget.
Like they mentioned, having a voting system would be super useful for knowing what matters. I cringe every time I leave a +1, so I've gotten into the habit of at least adding a comment after it, but the premise and the pain are the same.
That said, I'm extremely grateful to the platform for enabling collaboration on open source and to the company for its work on Git, Resque etc.
GitHub's strategy is to open source everything except the business critical stuff, but it seems to me that their business is in enterprise support rather than in actual software. Perhaps they should just open source the whole platform and count on their service business being enough to carry the company?
I find it strange that some project maintainers get annoyed when people use the issues section to post questions. What's wrong with that? A question can reveal design failures in your software... Maybe if your software were better designed, people wouldn't be asking the question to begin with.
I do think there should be a +1/like button though.
For the opposite side of the spectrum, there's the Bitbucket+Jira combo. It is customizable to a PM's heart's content, and in the process can become a mess of a tool.
Their 'control' of code and lack of respect for the people running projects is very disappointing, and they seem not to want to move forward on the issues.
I'm surprised the open source community is allowing this de facto ownership of the world's code, and of how it's written, to take place. I'm not so sure they are a benevolent dictator.
We're really pumped about improving dev team collaboration in the GitHub ecosystem by (soon) letting anyone use Sourcegraph.com's code intelligence (semantic search/browsing), improved pull requests, flexible issue tracking with Emoji reactions instead of +1s (example: https://src.sourcegraph.com/sourcegraph/.tracker/151), etc., all on their existing GitHub.com repositories.
All of Sourcegraph's source code is public and hackable at https://src.sourcegraph.com/sourcegraph, so it can grow over time to solve the changing needs of these projects. (It's licensed as Fair Source (https://fair.io), not closed source like GitHub, nor open source.)
Email me (email@example.com) if you're interested in beta-testing this on your GitHub.com repositories.
My workaround has been to use email notifications exclusively. I have a Gmail filter that applies a label to all notifications and skips the inbox. Then in my mail client I have a smart mailbox that only shows me unread notifications with that label (or that folder, from an IMAP perspective). The smart mailbox then shows me a counter of unread notifications. This way I don't overlook comments when multiple ones are made in a PR.
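For anyone copying this setup, the filter itself is trivial (assuming GitHub's usual notification sender address):

```
Matches:  from:(notifications@github.com)
Do this:  Skip Inbox, Apply label "GitHub"
```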
Problem 1: No context in these notifications. It would be nice if these emails could show the code in question for diff comments, or the entire comment thread.
Problem 2: Now what is really bad with these notification emails is that the link "view it on GitHub" sometimes no longer links to the comment I'm being notified of. This happens when the comment was made on a PR on a line of the diff that no longer exists, as sometimes is the case when new commits are pushed. I then have to go to the main PR page, expand all collapsed "foo commented on an outdated diff" comments and manually search for the comment in order to get the context and be able to reply.
Fixing problem 1 would automatically fix problem 2 as well, and would make my workflow much more productive. Is anyone else annoyed by this?
But I read some complaints about the users and the issues they tend to open, and I fully agree. They are a minority, but I can only imagine what people with bigger projects have to deal with. This is what I've found:
- People with little to zero experience in the language/framework who simply state that my project doesn't work without providing more information, and sometimes don't reply to my "give me more info" inquiries.
- Guys who just want to get their homework done and are basically trying to use me as an unpaid freelancer.
- And my favourite one: a junior dev at a company. He needs to get his work done under more pressure than the previous case, so he becomes anxious about his problems, and I feel it even via email. Eventually he gets the thing done, but he notices I changed the build system to Jitpack for better dependency handling, and starts to complain about man-in-the-middle attacks on his company and black-hat hackers replacing my lib with a malicious one (I guess it could happen, but come on).
But aside from these anecdotal cases, it is a very rewarding experience.
Ability to block users from an organization.
Why block a whole organization from an open source project? What would prevent such users from using a personal account to do what their organization counterpart is blocked from anyway?
They all seem to stem from the fact that GitHub is too successful: too many people are on GitHub and too many people are using it, often in the wrong ways.
Of course GitHub should solve them all. But it's still better to have the problems of too many people and too much interest than the opposite problem: a dying platform that people are leaving (see: SourceForge and Google Code).
GitHub is fantastic because everyone is on it, but the issue system has not improved since inception, and I feel the UI changes have actually been a step backwards.
We had to implement our own bot to comment on tickets that did not appear to follow a template, and I would have given a kingdom for a template that let people filter their own tickets into whether they were bugs or feature requests or doc items.
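For flavor, a bot like the one described might look roughly like this; a sketch only, with the repo name, token, and section headings made up, though the GitHub REST API v3 endpoints are real:

```python
# Comment on open issues whose body doesn't look like it followed the template.
import requests

API = "https://api.github.com"
REPO = "example-org/example-project"             # placeholder
HEADERS = {"Authorization": "token YOUR_TOKEN"}  # placeholder
REQUIRED_SECTIONS = ("### Steps to reproduce", "### Expected behavior")

def nag_untemplated_issues():
    issues = requests.get(f"{API}/repos/{REPO}/issues",
                          params={"state": "open"}, headers=HEADERS).json()
    for issue in issues:
        if "pull_request" in issue:   # the issues endpoint also returns PRs
            continue
        body = issue.get("body") or ""
        if not all(section in body for section in REQUIRED_SECTIONS):
            requests.post(
                f"{API}/repos/{REPO}/issues/{issue['number']}/comments",
                headers=HEADERS,
                json={"body": "Thanks for the report! Could you edit it to "
                              "follow the issue template (bug/feature/docs)?"})
```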
We also had a repo of common replies we copied and pasted manually (this was because there was so much traffic that me replying too quickly would likely tick someone off, but this too could mostly have been eliminated with a good template system). Having this built in (maybe I could have picked a web extension) would also have been helpful.
So many hours lost that could have been features or bugfixes - and by many, I mean totally weeks, if not cumulative months.
GitHub does the world a great service, and I love it, but this would help tons.
I always got a response when I filed a ticket - ALWAYS - but a lot of them were in the "we'll take that under consideration" type vein.
I feel opening GitHub RFEs up to votes is probably not the answer for serving the maintainer side of the equation, since users outnumber maintainers, but these things need to be done and would greatly improve OSS just by expediting velocity.
If you don't use the GitHub tracker you lose out on a lot of useful tickets. However, if you use it, you are pretty much using the most unsophisticated tracker out there.
It's good because there's a low barrier to entry, but just having a template system, even a very basic one, would do wonders.
A final idea is that GitHub really should have a mailing list or discussion system. Google Groups sucks for moderation, and I THINK you could probably make something awesome. Think about how Trac and the Wiki were integrated, for instance, and how you could automatically hyperlink between threads and tickets. The reason I say this is often GitHub creates a "throw code at project" methodology, which is bound to upset both contributor and maintainer - when often a "how should I do this" discussion first saves work. Yet joining a Google Group is a lot of commitment for people, and they probably don't want the email. Something to think about, perhaps.
Also think about StackOverflow. It's kind of a wasteland of questions, but if there was a users-helping-users type area, it would reduce tickets that were not really bugs, but really requests for help. These take time to triage, and "please instead ask over here and join this list" causes people pain.
I love all the work to keep up site reliability, maybe I'd appreciate more/better analytics, but I totally say this wearing a GitHub octocat shirt at the moment.
+1 from the Kubernetes project
GitHub's main bread and butter comes from private and organizational projects, which do not have these issues.
The majority of accounts on GitHub are folks like the majority of HN readers (developers, coders, hackers) who do not have these issues.
So all these complaints are, in a sense, not applicable to the vast majority of GitHub's revenue-generating customers or the vast majority of GitHub users.
What I like about GitHub's issue tracking is that (compared with alternatives such as Redmine or Jira) it is free-form. It doesn't force users to fill in information such as steps to reproduce, and I don't think it should, because the needs of every project are slightly different. Consider how different the "steps to reproduce" are for a web user interface versus the usage of some library. Yes, it can be painful for an issue to not provide all the information required, but on the other hand GitHub does a better job than the alternatives at fostering conversations and keeping people in the loop. I've even seen projects use GitHub issues as some sort of mailing list.
On the second point, I do agree that GitHub needs a voting system for issues. Given that GitHub has long turned into some sort of social network, adding a voting system for issues is a no-brainer. But then a voting system doesn't address the problem of people getting frustrated about issues taking too long to get fixed. +1's are annoying, but sometimes that's a feature and I've been on both sides of the barricade.
* Issue templating.
It's one thing to prefill the entry box, it's quite another to add fields that everyone must fill out. I quite like that filling out something on Github is totally the opposite of filling out something on Jira.
* Issues and pull requests are often created without any adherence to the CONTRIBUTING.md contribution guidelines
This is a people problem that has plagued open source from day one. You cannot engineer your way around it in a manner that doesn't annoy your contributors.
There was a blurb in here about getting rid of the big green "new pull request" button, but that was when this link went to a google doc. Good - if someone doesn't want to take PR's, then they have almost no reason to be on Github in the first place. Put another way, it's the mark of someone that wants a repo as a signpost of sorts without actually interacting with its community.
So every time someone who knows a "Sam" uses @sam incorrectly in an issue I get notified, have to unsubscribe, ignore, and leave a polite message to let them know they're doing it wrong.
It's really lame that they've never fixed this.
Next, we'll see public complaints to Microsoft because MS Word doesn't properly support the way they want to maintain their project's documentation?
I mean, sure, feel free to complain all you like. But how is this not exactly what was to be expected from the beginning? And why do you expect them to care in the future, given that you seem to have just realized that they didn't care in the past (for obvious reasons), that their incentives haven't changed, and that there is no reason for them to change in the future?
GitHub needs to step it up. They got to the top first, but can they stay there?
As an example of how this would be used, we have a GitHub team within our organisation which is used for non-technical people to post bugs. These people have no reason to be able to see or push code to the repository; they only need to be able to create issues. This applies to every repository in the organisation. As far as I can see, and without manually adding every single repository to the team, there's no way of setting global permissions for a team. This seems like a major oversight to me.
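In the meantime the manual step can at least be scripted; a sketch against the real GitHub REST API endpoints for org repos and team-repo permissions, with the org, team, and token as placeholders:

```python
# Grant one team read ("pull") access on every repository in an organisation.
import requests

API = "https://api.github.com"
ORG, TEAM = "example-org", "bug-reporters"       # placeholders
HEADERS = {"Authorization": "token YOUR_TOKEN"}  # placeholder

def grant_team_read_everywhere():
    page = 1
    while True:
        repos = requests.get(f"{API}/orgs/{ORG}/repos",
                             params={"page": page, "per_page": 100},
                             headers=HEADERS).json()
        if not repos:
            break
        for repo in repos:
            # "pull" = read-only: enough to file issues, no push rights.
            requests.put(
                f"{API}/orgs/{ORG}/teams/{TEAM}/repos/{ORG}/{repo['name']}",
                headers=HEADERS, json={"permission": "pull"})
        page += 1
```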
Anyway, I found that http://feathub.com/ addressed my frustration about the absence of a voting system.
They make a facility available as a nicety, but if your project has legitimate global impact, you should be looking at (or bootstrapping) a counterpart.
Don't have the revenue for JIRA? Apply for the Free license.
Don't have the stomach for Bugzilla? Turn out a Node/Go alternative.
Don't have the business alignment with Clearquest or Rally? Lower your expectations to suit your Free (as in beer) SCM tool.
Yet, I 100% agree with them. I do not understand why Github issues are so basic. The only feature I feel was added in all of 2015 was making the logging of every metadata change extremely verbose (read: maybe too noisy now?!).
"Person assigned to the issue"
"Person added label"
"Person removed label"
We'd like issues to gain a first-class voting system, and for content-less comments like +1 or :+1: or "me too" to trigger a warning and instructions on how to use the voting mechanism.
Don't make it so easy to submit bad PRs
> Hopefully none of these are a surprise to you as we've told you them before. We've waited years now for progress on any of them. If GitHub were open source itself, we would be implementing these things ourselves as a community; we're very good at that!
LOL. I can't tell if this is "go-fuck-yourself"-level passive aggression, or mindless hopefulness that there might actually be a universe in which Github (or a company like it, with hundreds of millions of dollars of venture funding) could be open source. If I worked at Github, my first thought after reading this would be "mmmmm yeeeeaaaaaaa y'can g'fuck yr'self", while the second thought would be "yea, you're not wrong". Generally, passive aggression gets you nowhere when you're asking for something from someone/something who owes you nothing (I know, I know, they "owe" their customers everything).
The Node/React/JS community is hilariously entitled, petulant and childish. The tone of this whole letter is so god damned millennial, it's mind-boggling, because they're not wrong about anything they're asking for. But it's how they ask for it that leaves a dry, acid-y taste in your mouth.
That, and something for code review. Pull Requests are terrible for code review, and it wouldn't take that much to make them so much better.
There are tradeoffs, so pick services you like.
It's pretty neat as a general user, but at least you get the impression with Bitbucket that they prioritize productivity and project management. And the task system hasn't received any significant updates since its inception, which is a shame, because tasks are an awesome invention; they just happen to be awfully implemented on top of issues.
I also remember that we recently had to move the entire decision-making process to Slack, where I suggested we just use the emoji voting system to make our decisions.
What really gets to me is how adamantly GitHub has ignored all the people who've gone on about this forever. Last time they seemed to care marginally was when jacobian finally managed to twist their arm and get them to implement the Close Issue feature, because one repo issue was a radioactive pit of abuse and invective.
Anyway, implementing just voting won't be such a good idea in the age of Emoji Reactions!
PS, it was moved to https://github.com/dear-github/dear-github
What is being done in the JS community by those who lead it to make progress on this and who is leading that charge? If the answer is "Nobody", why is that true?
I've been fascinated and appalled in equal measure at the fanboy community, at the intolerance of criticism that sprang up very quickly, and how strong feelings ran (likely because of financial investment in the tech).
It's also been interesting watching it go from simple CPU mining, to multiple GPU rigs in dorm rooms, all the way through FPGA and then to massive installations of custom ASIC miners.
But I've always hoped it wouldn't go mainstream for two reasons - limited supply with weighting in favour of early adopters, and the massive electricity costs of the 'mining' and transaction validation process. Scalable, competitive proof of work systems for a widespread currency are an ecological disaster in the making, and deflationary currency with a handful of early users controlling a huge proportion of the total currency supply... these aren't "features".
I'll be very interested in what happens next, and for the reasons given I hope it's not just a BTC clone with better governance.
People disagree about the reality of global warming. Does that mean we throw out the entire system of laws of the United States and other world powers because it hasn't yet addressed this problem?
This is exactly why I never bought the concept of BitCoin as a 'libertarian' currency. There's always politics, there's always governance. It becomes political as soon as more than one person is involved. And as soon as it's political, institutions, processes, procedures, and laws become necessary - also known as "government."
I still believe in BitCoin, however. Ultimately, there's a way out of this tangle, and like with most political problems, it's a political solution. BitCoin will either adapt and scale up or stay the same and scale (way) down.
In the conclusion he states: "<i>Even if a new team was built to replace Bitcoin Core, the problem of mining power being concentrated behind the Great Firewall would remain.</i>"
Bitcoin's decentralized nature encourages power pool formation by promoting economies of scale. It is not surprising that like the production of electronics, clothing, toys, etc. the lowest cost center is in China.
Read the article; he was clearly laying the groundwork for this move back around Thanksgiving.
"The current Bitcoin system, I mean the system we actually use today with the block chain, isn't going to change the world at all due to the 1mb limit. So if I have a choice between helping the existing financial system build something better than what they have today that resembles Bitcoin, or helping the Bitcoin community build something worse than what they have today that resembles banking, then I may as well go where the users are and work with the banks."
People want to protect their investments. But because we are talking about money, don't confuse this for meaning that the investments are just about money.
Investments in code contributions, investments in all the articles read, investments in community, friends, social networking, investments in belief systems, investment in the justification for choosing one thing rather than another.
It's simply not consistent to say "oh you only have 20BTC, so you've nothing to lose" or "oh, you made no code contributions, so why are you complaining" as both ignore the potential for massive psychological and personal investments.
All these investments act as a barrier to change. It hurts, it hurts physically to lose big investments.
There is a cost-benefit analysis that humans perform internally: is the hurt of losing this investment now worse than the pain of keeping the investment later?
If we go back to the article, we see Mike repeatedly tell us that Bitcoin is an experiment. He is saying to us now "look, don't invest your time, effort and money into it" - and he is telling himself "I have made the change, I have accepted a loss by investing so much of my time and effort into this, and am moving on".
By the time independent implementations did begin to develop, it was too late to introduce diversity into the ecosystem.
The result is what we are now seeing.
A political entity (not necessarily a sovereign government, but perhaps a bank or financial institution) will offer a currency swap to existing blockchain holders to adopt its cryptocurrency. The inducement will be a limited time window to put in your claim, with all unclaimed but mined numbers going to the financial entity to reward its followers or stakeholders.
In the real world, this is called escheat and it is a power of the crown. Bitcoin is essentially a system for recording deeds to digital land. They aren't making more numbers, so the problem is the political resolution of competing claims to the same resource. This sort of claim comes, in the end, to a network consensus of who is the sovereign.
Bitcoin will be more interesting to me once the mining pool is exhausted. At that point, we'll see how much of Bitcoin's value is in use instead of speculation.
(https://www.blocktrail.com/BTC, scroll to "Pool Distribution"; today more than half the mining capacity is in two pools)
"The rank-size rule (or law), describes the remarkable regularity in many phenomena, including the distribution of city sizes, the sizes of businesses, the sizes of particles (such as sand), the lengths of rivers, the frequencies of word usage, and wealth among individuals. All are real-world observations that follow power laws"
Why would switching to a cryptocurrency that is better designed be a bad thing?
So we're at what, 0.9 Exahash?
Say you want to force the change. You'd need to add three times that, or 2.7 EHash/s.
Let's say you buy a ton of AntMiners to cover that, at 3.3 GHash/s/$.
So that's a paltry, what... $820 Million?
Less if you just buy the factory in Shenzhen.
Basically just one winning Powerball ticket though.
Caveat emptor: my ability to eyeball math in the peta-exa-yotta range is spotty at best. These results may be off by a factor of... any factor.
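The eyeball math does check out, for what it's worth; a quick sanity check using the parent's own figures:

```python
# Sanity-check the parent's numbers: add 3x the current network hash rate,
# so you control 75% of the new total, priced at 3.3 GH/s per dollar.
network = 0.9e18                   # 0.9 EH/s, in H/s
added = 3 * network                # 2.7 EH/s
share = added / (added + network)  # your fraction of the new total
cost = added / 3.3e9               # dollars, at 3.3 GH/s per dollar
print(f"share = {share:.0%}, cost = ${cost/1e9:.2f}B")
# share = 75%, cost = $0.82B -- about $820 million, as claimed
```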
There was no way in hell a normal, non-tech guy could ever understand it enough to use it everyday.
This was a case of tech folks missing the woods for the trees. Even this article will go way over the heads of 99% of people on the planet.
Why would you go through all that pain when cash is everywhere, easy to access, and easy to understand?
"Simple sabotage is more than maliciousmischlef,and it should always consist ,of acts whoseresults will be detrimental to the materials and man-power or the enemy"
It seems like the quotation "One of the great things about Bitcoin is its lack of democracy" is grossly out of context. In the original comment by the person @octskyward is talking about, it seems to refer to the fact that Bitcoin is not a majority-rules democracy.
What "feature" was recently added? This has always been a problem with BTC.
For events to have taken place as described in the article, several parties would have been required to work together with the common goal of keeping the blocksize restriction in place:
The Chinese miners who hold the majority of the hashing power, the developers of Bitcoin Core, the admins of bitcoin.org and the as of now unidentified operators of the DDOS attacks.
If you assume it's not a conspiracy, then each party must have reasons why such a decision would be desirable. But as the author describes it, there are no reasons. This goal wouldn't just push Bitcoin into a questionable direction, it would be downright suicidal: Over time, Bitcoin would become unusable for any kind of transaction. Not even the greedy miners could want a cryptocurrency that no one uses.
So I think there has to be some upside to the blocksize restriction. If anyone has more info on that, I'd be happy to know.
Hearn's post may be technically accurate in terms of the data he's collected. But the conclusions he draws are not correct. Usually in any entrepreneurial project, the fact that the service is over-subscribed and increasingly valuable is a sign of success. If one views bitcoin as an open-source project which should have some ideal technical implementation, then yes Hearn has failed to convince everyone to run his preferred implementation of bitcoin, or to agree on exclusively running a different protocol that is not bitcoin while calling it bitcoin.
There is plenty of room for Hearn to run a BitcoinXT altcoin. The only failure here is one of logic, forced by the concept that there can be only one successful network based on the Nakamoto consensus protocol, and that that network must either be bitcoin or a replacement which supplants the original.
In open source communities, impactful contributions yield influence. Here are the top 100 bitcoin contributors: https://github.com/bitcoin/bitcoin/graphs/contributors
Guys like Wladimir, Pieter, Gavin, Cory Fields, Gregory Maxwell and Luke Jr have a voice because they've contributed many thousands of lines of code. (Lines of code are only a proxy for impact.)
You may have noticed Mike Hearn isn't in the top 100 contributors list. He is the primary author of the Java implementation of a bitcoin library: https://github.com/bitcoinj/bitcoinj
He started it in 2011, definitely early. But a substantial amount of bitcoin core work had already set the path. There are also similar implementations in many different languages but they are not the primary reference implementation for full nodes.
According to Hearn's blog post: "I've talked about Bitcoin on Sky TV and BBC News. I have been repeatedly cited by the Economist as a Bitcoin expert and prominent developer. I have explained Bitcoin to the SEC, to bankers and to ordinary people I met at cafes."
Being cited by journalists is not the same as being a primary contributor.
The disagreement between Hearn and the other developers isn't about whether to increase capacity, it's about how. Many of the primary full-node contributors believe a hard blockchain fork is a risky approach. Lots of work is being done to explore better options, like segregated witness (http://gavinandresen.ninja/segregated-witness-is-cool).
Mike Hearn tried to (very aggressively) push the idea of increasing the block size with a hard fork. In fact the patch allows node operators to vote, and 75% adoption is needed. When it looked like that wasn't going to pan out, he created Bitcoin XT, where "Decisions are made through agreement between Mike and Gavin, with Mike making the final call if a serious dispute were to arise."
So the claim that Hearn not being able to take over decision-making power for the bitcoin community is evidence that bitcoin has failed shows something slightly different. It shows that the open source methodology of forking and adoption lets the best implementation win and prevents hostile takeovers.
Mike Hearn is not as impactful to bitcoin development as he or recent news would indicate. Mike leaving the bitcoin community has little impact on the future success or failure of bitcoin.
Also a great exhibit of some stereotypical programmer social problems: we don't get a lot of middle ground. Most programmers are either openly hostile and combative, or so deathly afraid of confrontation and responsibility that they give away their authority so they don't have to handle the pressure. Gavin should've kept control. Bitcoin is learning exactly why a strong central authority is so desirable in money exchange: it keeps the value of the currency stable by preventing panic and confusion over issues like this.
Many discussions of Bitcoin claim that this power is transferred to "the network" to make the final decision, which sounds very egalitarian and democratic, but Bitcoin failed to provide the controls that would prevent power hoarding and ensure that the people who depend on bitcoin were fairly represented. This is one reason why modern democracies are structured within the framework of a republic. This is probably one of bitcoin's hardest-to-solve problems, since the hardware needed to get respectable hashrates is unobtainable for quite literally everyone who doesn't have access to their own microfabrication facilities. Even if one of the specialty bitcoin hardware makers had a really good, cheap chip, why would they share it? They'd hoard all the hashpower for themselves. Litecoin attempts to address this by hashing with scrypt, under the belief that it's harder to hoard power with custom hardware if the algorithm uses a lot of processing power and memory instead of just a lot of processing power.
Mike failed to mention one incentive that exists to prevent increasing the block size: miners get the transaction fees attached to each block they mine. If the block size is large, there is little contention for space in the blocks, and ergo there is not much reason to incentivize miners to include your transaction in the next block. By keeping the artificial constraint on the block size, people who own a lot of hash power will be gaining a lot more bitcoin for themselves.
I don't think this crisis is insurmountable. So much money has been sunk into bitcoin that I can't believe people are just going to let this cabal take it out. BitcoinXT will gain notoriety through the mainstream press and the resultant sell off among casual investors will freak the big players out and force them into running XT nodes.
The author is complaining that Bitcoin is working exactly as designed.
From Satoshi's paper: "Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it."
If you want to raise the block size, out-vote the Chinese miners.
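A toy illustration of that rule; real nodes validate far more than this, but the fork choice itself is just "most cumulative work wins" (the block structure here is invented for the example):

```python
# "Longest chain" really means the chain with the greatest total work,
# not the most blocks; each block's work depends on its difficulty.
def chain_work(chain):
    return sum(block["work"] for block in chain)

def best_chain(chains):
    return max(chains, key=chain_work)

short_heavy = [{"work": 100}, {"work": 100}]             # 200 units of work
long_light = [{"work": 30}, {"work": 30}, {"work": 30}]  # 90 units of work
assert best_chain([short_heavy, long_light]) is short_heavy
```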
- Gotta say, even if bitcoin "fails" I don't feel it will be a "failure". The community and people have learnt so much, I mean, Bitcoin became...big and it's the first cryptocurrency to reach this level.
To have made a perfect system would be unrealistic really.
This "people problem," as Mike calls it, is undoubtedly a result of the mechanics of the blockchain. Slightly different rules may lead to dramatically different (and less insurmountable) people problems.
At least, my understanding is that he recently began working for a group that competes with Bitcoin in trying to connect traditional banks and (non-Bitcoin) blockchain technology. He has an incentive to scare people away from Bitcoin.
Not saying there are no scaling/social/whatever issues in Bitcoin. But the author seems to be conflicted.
Disclaimer: I own a few Bitcoins, and thus hold the opposite conflict of interest.
Typical template for building up a hyped linkbait case: start with a couple of puffed-up credentials, pose as a well-wishing visionary, and then shed a few crocodile tears of sadness over "it failed" statements, which of course convey an absolute truth.
Nothing helps to share links better than controversy.
Although, is it true that if there are two large networks which are separated (by the Great Firewall), the ledger could be split in two?
Simply have 5 servers running that as a network service and you won't need proof of work, nor will you need a blockchain. Homomorphically encrypt a basic ledger with an encrypted backend, and throw away the key. Done/done.
One might say encrypting 52 integers in 36 hours is somewhat less than acceptable performance, but how does it really compare to bitcoin in total effort? This is certainly good enough that anyone with a decent PC could run it. Hell, you might even reward them for running it, just like bitcoin does. And it ought to be a lot cheaper to run than bitcoin.
There are three types of people who are into Bitcoin:
1. People who are in it out of sheer curiosity.
2. People who are in it to get rich quick.
3. People who have been scammed into it.
The people who are in it out of curiosity are the people I don't take issue with. At its beginnings, I found Bitcoin to be a curious thing because it was a novel and new idea. However, as things progressed and I learnt more about how it all worked, I saw it as a cumbersome idea that wouldn't effectively replace anything and as a result now I'd rather make jokes about it than take anything about it seriously. I've never spent more than $20 CAD on Bitcoin and I have gotten it all back for that matter too.
People who get scammed into Bitcoin typically get scammed in one of two ways: they're either being coerced into using it because they've gotten something like malware on their machines (CryptoWall and its variants), or they see it as an investment alternative. The only times I've ever seen non-technical people experience Bitcoin is when I have to tell them that the malware on their computers will only release their unbacked-up data on payment in the cryptocurrency. And that is really what a non-technical person's experience with Bitcoin is going to be: it's a way to pay thieves.
As for the get rich quick people, they tend to fall into the third category or they themselves are scammers.
Right now there are two forces dominating the Bitcoin community: the miners and those who are holding out on whatever magical unicorn rainbows makes the coins have value. The miners don't want to see changes to the software because it'll hurt their bottom line and the people holding and exchanging it want to see these changes so they can benefit. So as a result, Bitcoin has entered a war of attrition and is starting to show its problems. Mike Hearn's leaving is definitely a consequence of this problem.
Earlier yesterday, I made a quip about how insulting it is to suggest that those who are "unbanked" as a result of living at no fixed address (i.e. "homeless") should eventually move on to Bitcoin as an alternative to mainstream financial institutions. It's really for the reasons Hearn gave: would you want to wait a random period of time, ranging from maybe a few minutes to a few hours, for your transaction to go through? It's already insulting enough that they're living at the bottom of society, so why would we want them to use a bottom-tier financial system? Why not instead suggest making it easier for them to participate in mainstream banking schemes?
I anticipate based on my last remarks that the responses to this post will consist of feckless anecdotes and pointless accusations that I and others have a "problem" with Bitcoin. I guess to a certain extent the statement of me having a problem is true, but at the end of the day Hearn is right.
Bitcoin is a failure and if you invested into it then you're getting what you deserve. If you think that it isn't a failure then you obviously didn't comprehend Hearn's writing.
 - https://news.ycombinator.com/item?id=10898408
Why would this be? Are they hoping for increasing transaction fees and therefore increasing mining profits?
In any case increasing the block size seems like a no brainer from a technical point of view, at least if your interest is in Bitcoin itself and its growth and future.
It has some of the shape of an unincorporated association, though. There's a lot of caselaw dealing with disputes arising in those, usually from reluctant courts that were dragged into particularly petty and poisonous disputes.
Given the amount of actual money involved, people might start asking courts to settle some of these questions before too long.
(But don't ask me to do it, I'm not a lawyer and this isn't legal advice).
Well, that and "Bitcoin has become garbage, avoid at all costs".
If you have a new account and want to comment here, you're welcome to email us at firstname.lastname@example.org.
One gem in here that has not been highlighted by other commenters in this thread, and that stands out for me because it's one I did figure out very early in life (and it has served me very well), is this one:
"As I've written before, one byproduct of technical progress is that things we like tend to become more addictive. Which means we will increasingly have to make a conscious effort to avoid addictionsto stand outside ourselves and ask "is this how I want to be spending my time?""
Please do ask yourself that question often, and if the answer is 'no' or 'maybe' then simply don't and save yourself a lot of grief and regret in the long run.
[Vonnegut tells his wife he's going out to buy an envelope] Oh, she says, well, you're not a poor man. You know, why don't you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know. The moral of the story is, is we're here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.
The next step was to remove things from my life that cause stress and are not worth the effort because of the BS they involve. Whether that's just life situations or clients it has been very refreshing.
The next step was to kill a lot of tv/movies, and most of my free time internet usage.
Finally, I started steadily filling in the new time gainings with things I really care about and the personal sense of well-being and accomplishment has improved drastically.
So I have no intent of doing anything other than continuing down this road. I've gotten in better physical shape, better health, enjoy life more, have learned a new language, visited many new places, and my stress level has dropped by at least 200%. It's been a very positive journey so far.
It was hard to stop playing with a bunch of different projects and make myself focus on one single project, but in the end it has been extremely satisfying to finish what I start. I wish my father was still around to see what I've done, but I might never have finished anything without the lesson of his passing.
From 1Q84, Haruki Murakami
When you realize how short time really is you ruthlessly cut bullshit.
I'm glad he pointed out this seemingly small detail. This took me a very long time to understand.
EDIT: It reminds me of another great post by Paul Buchheit. It's so important to have the 'heroes' of startup culture explicitly spell out these values:
> I worry that perhaps I'm communicating the wrong priorities. Investing money, creating new products, and all the other things we do are wonderful games and can be a lot of fun, but it's important to remember that it's all just a game. What's most important is that we are good to each other, and to ourselves. If we "win", but have failed to do that, then we have lost. Winning is nothing.
On the other hand, there's something about the following sidenote that is profoundly human but works quite the opposite of the painful shock implied in my first sentence:
> I chose this example deliberately as a note to self. I get attacked a lot online. People tell the craziest lies about me. And I have so far done a pretty mediocre job of suppressing the natural human inclination to say "Hey, that's not true!"
This is almost universally true. It is incredibly reassuring to know that even the greats struggle with this and antagonists pursue us through all walks of life. I'll admit, I've held back from publishing articles that all of my reviewers liked because I worried about the inevitable negative backlash that comes with standing for anything on the internet. Maybe one day I will publish. If so, this essay helped a great deal in getting me there.
With children, the days are long but the years are short.
These days, I find myself trying to find the "work/life balance", which is really just me managing the ebb-and-flow of time between work and family. What I've learned in that process is that while work provides some satisfaction that meets an internal need, it will never ever hug me back.
Take time, hug your kids, leave your work every now and then. The years won't seem so short that way.
I have four children of my own, and I'm sceptical of the idea that life is somehow best spent by maximizing time with them. Don't get me wrong: the best moments in life are with my children. Still, one's contributions during our brief passage in the form we like best (walking, free-thinking humans) surely should amount to far more than the self-gratifying (and possibly narcissistic) time spent with one's children.
In short, if you do have the luxury of choosing where you actually spend your time, make sure you're giving far more back to the rest of this race than to yourself.
Before the Wright brothers (and other flight enthusiasts around that time) took the initiative, academics, pundits, and "experts" had it settled that heavier-than-air flight was impossible.
It had to eventually happen because technology is inevitable, but we might have conquered flight in 1953 instead of 1903.
In the case of anti-aging, such a difference means you either die or barely make it past the last generation to die.
At this point in my life, there's nothing new that interests me that hasn't already been done. It's made life pretty boring at this juncture. I've lived out all my dreams and now it's all just like "Okay, now what?"
The state of all who are preoccupied is wretched, but the most wretched are those who are toiling not even at their own preoccupations, but must regulate their sleep by another's, their walk by another's pace, and obey orders in those freest of all things, loving and hating. If such people want to know how short their lives are, let them reflect how small a portion is their own.
I don't understand it when people talk about 'squeezing everything out of life', as though you could extract real lasting substance/meaning from it. I don't believe it's possible to "make the most" out of life: it all adds up to 0 in the end.
"Squeezing everything out of life" implies that you're literally taking the juice out of life and storing it somewhere safe/permanent - In reality, it is like squeezing an orange and then putting the juice back inside the orange.
I have known how short life is for a long time, but I encounter people on a daily basis, most far older than me, who don't seem to realise it, or if they do, are acting irrationally. When I see them wasting their time on things that are clearly not important, it doesn't bother me too much, because it is their time to choose what to do with. What does make me angry is when they try to involve me in the 'bullshit' too, to use the essay's term. At work this can range from petty disagreements to the colleague who creates busywork. I wonder how many people who start startups recognise life is too short, compared to those who do not; it would be interesting to find out.
That doesn't mean they were wrong when they were kids.
On the one hand, we want to believe the adults because they have perspective and experience. They were those kids. But on the other hand, we should also believe the kids because they are actually living it.
pg mentions this, but what he says afterwards is not even the best advice for it in this same essay. The real insight is here:
> The "flow" that imaginative people love so much has a darker cousin that prevents you from pausing to savor life amid the daily slurry of errands and alarms.
The way to slow time down is to break all your routines and never be in a flow. Have no typical days. Don't have a schedule. Don't have a favorite restaurant or a default outfit, and don't hang out with the same people more than once or twice before seeking new people. This is nearly impossible for most people to do, because doing these things SUCKS. And time is slowest when everything feels like it sucks.
I only know this because this is what my life was for two years when I was on the road as a digital nomad. It sucked, but it was also the most rewarding period of my life, because it truly was time slowed down. I learned and experienced a larger spectrum of things in that time frame than anyone I knew, including myself in any other time frame.
I don't endorse it as a long-term way to live life, but I highly recommend everyone spend at least a year of their (preferably younger, pre-family) adult life living thus to learn truly how much can be fit in a human life if you frame it right.
This also has a darker cousin for some parents: since spending time with one's children is always a viable and valuable option, spending time without them becomes difficult. People without children often notice that most of their parental friends disappear. This despite the prior protestations of many that "We'll still do things after we have kids."
Undoubtedly some parents work more efficiently than their childless selves (this is also motivated by a desire to earn money to support the kids). But can they socialize more efficiently too, in particular with people who don't have kids?
Contradiction means pointing to possible holes in our assumptions. So online discussions are a way to test our assumptions and learn.
Online discussions are a playground for training our decision-making skills.
Of course, we should maintain a healthy balance between learning in online discussions, other ways of learning and actual decision-making (work). But that healthy balance should probably include more than zero time in online discussions.
But not if you had 8 private jets, or 8 cars, or 8 houses, or even 8 telephones. This is a rather arbitrary statement.
For instance, it is kind of funny to say this or that company is more or less bullshitty, when the whole structure is such that it requires the masses to work for some company or else essentially be deprived of the resources required for their subsistence. So most people will have to earn their subsistence by participating in a scheme that allocates more to someone else's subsistence.
Maybe it's the best we can do, but on that scale, the bullshittiest company of all is only marginally more bullshitty than the least.
I would include Anger as a subcategory of bullshit. It promotes irrationality and the after effects hamper you. In the renowned book "Emotional Intelligence" the author says that the best thing to do when angry is to focus on controlling it. The more it grows, the harder it is to escape.
Or did it? I think it reflects the point of view: when we're involved in work, with all the details to take care of, we feel overwhelmed, busy; time isn't rushing by at all. But once it's history, the past, all of that suddenly doesn't exist; it has no reality, and it is packaged up in memory as though it were just a brief moment. Kind of like closing a menu: what's there is hidden, except we're not reopening it, at least not in the same way, ever again.
Time is relative, as Einstein said: it goes quickly sitting next to a pretty girl, but a boring lecture drags on forever. The epochs across the lifespan come and go, and I think we judge the duration of an experience by its currency, because involvement with events in real time gives the sense of time. The meaning of a "long" or "short" time is anchored in such reality.
Anyway I've been thinking for a while that what's important is not how much time we have left to live. After all that's not something we can actually ever know. What matters is what we do with the time we have. I'd surely agree we can't afford to waste it on irrelevancies, pipe-dreams, or bitterness. Far better to do what we can, when we can do it.
That part really struck me. I've long thought that the most insidious effect of the student loan phenomenon is that debt indentures you to working a conventional job in a country at a similar level of economic development to the one where you took on the debt. You can't just "drop out" and go live somewhere cheap in Southeast Asia where it doesn't take much income to live, because your debt payments are not adjusted for standard-of-living imbalances.
This is probably obvious to everyone, but I think it's worth noting that it is something holding a lot of people back from doing a lot of what is suggested in the article.
If you're stuck in traffic, you could've been reading a book. If you were reading a book, you could've been cleaning your room. If you were cleaning your room, you could've been working on a side project. If you were doing that, you could've been working on a better side project to get rich. But that would be less important than curing cancer, which is less important than curing old age. However, even curing old age pales in significance to the fact that entropy will dissipate all energy in the universe. How are you going to prevent that? And what if there's multi-verses that need to be fixed too somehow. You didn't fix the past either. Maybe you should've worked on a time machine instead of solving entropy problems. And what about all those people in poverty getting malaria because you were working on some b.s. problem?
It never ends. You could go crazy dwelling on this stuff too much.
The bullshit and the cherishable also seem to reverse roles frequently. Many things that seem like noise today may return to the foreground with profound meaning later.
I wonder if perhaps nothing is bullshit or meaningful after all. Experience simply plays this game of light and shadow to keep us entertained.
> Life is long enough, and it has been given in sufficiently generous measure to allow the accomplishment of the very greatest things if the whole of it is well invested. But when it is squandered in luxury and carelessness, when it is devoted to no good end, forced at last by the ultimate necessity we perceive that it has passed away before we were aware that it was passing. So it is: the life we receive is not short, but we make it so; nor do we have any lack of it, but are wasteful of it. Just as great and princely wealth is scattered in a moment when it comes into the hands of a bad owner, while wealth however limited, if it is entrusted to a good guardian, increases by use, so our life is amply long for him who orders it properly.
"Relentlessly prune bullshit, don't wait to do things that matter, and savor the time you have. That's what you do when life is short."
In way too many cases the "bullshit" exists because too many capable people are ignoring it.
That is unless "bullshit" is defined as all things that don't matter to anyone. In which case why would we assume anyone is focusing on those things anyways?
Of course the optimal lifespan will change over time. Today we aren't really facing so many physical survival challenges, but if we extend life longer then we may slow down our speed of innovation.
Just this past week I helped out a young company whose founders are working toward clinical translation of a method of clearance of senescent cells, one of the very first actual honest-to-goodness narrow-focus rejuvenation therapies to emerge from the labs. This is something that works to repair and reverse a form of tissue damage that contributes to nearly all age-related disease.
This is far from the only approach to human rejuvenation presently under development.
But, you know, life is short, so pay attention or not, up to you.
Make too many bad decisions (or have too much bad luck) and it turns out life is really, really long.
The sooner you realize life is short, the more you will make smart use of your time.
The same goes for faith: the sooner you realize there is no life after death, the more you will make smart use of your time. Your brain runs out of electricity and fluids, and poof you go.
A couple of years back I was working remotely in the UK for US clients - so my day started later. Which meant I woke up with the kids, fed them, played with them, walked them to school.
My abiding memories are cuddling a child in an arm each watching early morning TV before starting the day.
We should all be so lucky. Except it's not luck: it's consciously, as a society, designing work around community and family, not the other way round.
If anyone wants to use IoT data, personal search queries, etc., to build a recommendation engine that increases the probability of 'reading the right thing at just the right time in life', I'd sign up for it! For subtle/complex things, this seems like an overly intimidating task, but to get started on the project, someone querying illness, loss, etc., might benefit profoundly from this. You'd essentially be creating a 'skewed Google' that returns what the user _needs_ rather than what the user _wants_ at the moment. (That said, don't pursue such a project at the expense of spending time with family... :) It's a tough balance to strike, isn't it?)
A few examples:
"6. Resolved, to live with all my might, while I do live."
"9. Resolved, to think much on all occasions of my own dying, and of the common circumstances which attend death."
"52. I frequently hear persons in old age say how they would live, if they were to live their lives over again: resolved, that I will live just so as I can think I shall wish I had done, supposing I live to old age"
"This", meaning "current".
Add the belief that there is another (form of) life after this one and suddenly the equation changes:
This life is short. But then there's another one coming.
The scientists in us agree - of course there is no evidence of life after this one, despite the messages transmitted to us from our ancestors - in the form of stories, traditions, superstitions, beliefs, religions. Depending on who you ask - we either go to a place where we stay forever (heaven/hell/spirit world) or we come back to life as another being or life form.
But the body dies and rots away !
Technically the body has died many times during its lifetime - cells die and others are created - or rather - create themselves according to the instructions in the DNA.
The DNA is the one that moves forward through time; all the other pieces of our bodies rot away. But not the whole of it, just 50%. Half the DNA vanishes into the void.
But "I" will no longer "exist" !
That's a belief. And also quite vague, because - who/what is this "I" ? Is it my body, is it my brain, is it something which lives inside the body/brain, is it all imaginary ?
Well, think about anyone - someone who's not near you right now - who is he/she ?
Right now, he/she is a thought.
Isn't everyone, dead or alive, just that - a thought ? Isn't "I" a thought then ?
If so, what is life then ? A story ?
"Remember that man lives only in the present, in this fleeting instant; all the rest of his life is either past and gone, or not yet revealed. Short, therefore, is man's life, and narrow is the corner of the earth wherein he dwells." - Marcus Aurelius
"Think of your many years of procrastination; how the gods have repeatedly granted you further periods of grace, of which you have taken no advantage. It is time now to realize the nature of the universe to which you belong, and of that controlling Power whose offspring you are; and to understand that your time has a limit set to it. Use it, then, to advance your enlightenment; or it will be gone, and never in your power again." Marcus Aurelius (Meditations 2:4)
Worst. Advice. Ever.
If I have one thing to contribute, one iota of value to extract and pass on from my life, this may be it.
P.S. Substitute "extortion" for advice, in many circumstances, for a sense of how it really worked.
As a 40+ parent, I savor the time with my children, parents and friends and I prune the bullshit in work down to minimum.
The big problem: How do you choose what matters if nothing matters on the grand scale?
Life is so short that at 40+ you realize you will never get to do even 0.1% of your bucket list. So which rock do you push uphill?
In other words at 20-30 you can be adventurous and make mistakes. At 40+ you have promises to keep and miles to go before you sleep.
A similar thought scares the shit out of me. I can code for hours on end (in the "flow") toward even perhaps the most trivial ends established by my employer. On the one hand, you could say I'm doing what I love in life. On the other, darker hand, it seems like I'm squandering so many hours of my life playing this (effective) video game where I code for points (money).
1. Don't work for a start-up, since they don't impart salary-winning experience to you, they don't pay you or provide reasonable benefits, and they also don't allow you the freedom to work on big ideas that they usually promise. The lines used to sell naive engineers on working at start-ups are as pure an example of life's-too-short bullshit as anything can be.
2. Don't agree to work in Agile/Scrum-like one-size-fits-all software management environments. Almost every single aspect of these systems is bullshit and will waste your time and break down your morale while draining away your productivity in the best years of your life.
3. Don't work in open-plan offices or even offices that merely have cubicles. It's been settled for a long, long time that even in dense urban areas, providing private offices for individual knowledge workers is extremely cost-effective for businesses, as productivity, workplace cognitive health, job satisfaction, morale, etc., all go up substantially. Generally the only reasons for open-plan offices are (1) bullshit trendiness, in which an organization performs a shallow copy of some other organization; (2) a hyperbolic focus on short-term costs, which should tell you that upper management doesn't know what they are doing and are bullshitting you -- it's similar to seeing a company stop providing free coffee as a money-saving tactic. It's bullshit -- coffee is so cheap, and the productivity and goodwill it brings are so valuable, that it's virtually never a reasonable plan to cut it; and (3) environments where upper management get off on surveillance and cognitive manipulation, so it becomes a company cultural value to cram everyone into big rooms where you function more like a piece of office furniture than as a worker.
Personally I would also add that life's too short for enterprise C++ and Java (the languages themselves are quite fine, but anyone telling you that some legacy system couldn't have been maintained and incrementally brought into a better state by 2016 is, once again, bullshitting you and sees you as nothing but a glorified code janitor).
I think if I could give any advice to young developers, it would be that if they want management types to respect them throughout a prosperous career, they have to avoid the bullshit of the items above. If you let a manager or executive bullshit you by duping you into working for a start-up, by getting you to agree you are a child whose own creative thinking about problem solving can't be trusted and so Agile/Scrum cookbook management is needed and you must play your part, or by getting you to agree that your natural inclinations for privacy, clarity of thought, protection of productivity and time, should all be sublimated so you can be a "team player" by wearing headphones that cost more than your employer's 401k matches for the year so you can just barely function 10 feet from a foosball table, you've already lost, and it will take years to undo the damage.
What will be the best use of your time?
When someone asks for the best flavor of Linux, or program, or car, then depending on the forum you may get the answer "that depends on you", and you may read a lot of different opinions on why people think their version is the best "for them".
With that comes a small problem: deciding the best use of your time and planning for the rest of your life may be incredibly complex.
An alternative plan B could be built around "What I don't want in my life":
- I don't want to be in the middle of traffic, because it is less time with my family...
- I want to spend less time on the internet to go to the gym... / I want to stop being a gym rat to learn something on the internet
I've never got this sentiment. Life is the longest thing anyone has ever done. Life is long, very long. I think back ten years ago and it seems like an age ago. It was. I'm early 30s and I feel like I've lived a long life; seen a lot and done a lot and had my kicks. That I've maybe got another full 60 years if I play my cards right is amazing to me. It seems like eons.
The only funny thing about time I've noticed is that as you age (and if you read) the past gets closer and closer. When I was a kid finding out people were born in the 1940s was amazing. SO long ago! Now Napoleon's reign seems very relevant and modern to me.
There's a lot written on how to live a great life, but more and more I think that, in the end, you live great stages of life. At any stage, you optimize for it, with an eye toward being prepared for future stages.
My favorite: "One heuristic for distinguishing stuff that matters is to ask yourself whether you'll care about it in the future. Fake stuff that matters usually has a sharp peak of seeming to matter. That's how it tricks you. The area under the curve is small, but its shape jabs into your consciousness like a pin."
I believe you meant "ensure" not "insure" here:
"Indeed, the law of supply and demand insures that: the more rewarding some kind of work is, the cheaper people will do it."
While that seems contradictory, it is not. When we are in a hurry we make unneeded mistakes, we don't enjoy the process of what we are doing, and we don't do things that reflect our true selves.
As a father of young children, and a cancer survivor, these words resonated more with me than anything I've encountered on hn in a very long time -- maybe ever.
Other people's bullshit has never bothered me.
I regularly just turn my phone off and pick up the pieces when I feel like it.
Maybe that comes from my musician background. I don't know.
Or maybe I'm just the most inefficient person in the world because I don't give a shit about anything. I just do what I think is necessary when I think it's needed.
I think I'm a fairly productive person. I get things done. But I don't worry about it much.
I spend most of my thoughts and energy on my family and my girlfriend, not work.
Okay, that's not fair: I spend quite a lot of time reading books.
Is this a real problem? Or is it a straw man?
For me this is my bullshit filter.
Just do the best you can with your time. If you become unhappy with how you spent it you can use that to inform you on future decisions but you can't change the past.
The pain of having missed significant time with someone you care about is severe, but it is also a thing you can't change.
I am not saying pg is wrong, I am pointing out a problem.
Life may be too short to worry about how you are spending your time.
Life is short, so is history, and the impact we can make is enormous.
To see a World in a Grain of Sand
And a Heaven in a Wild Flower,
Hold Infinity in the palm of your hand
And Eternity in an hour.
- Poses the hypothesis that "Life is short"
- Proposes an 'objective' basis for this feeling: some of his most meaningful life events happen 8 or fewer times
- Argues that the shortness of life justifies avoiding "bullshit," while acknowledging that's a loaded term.
- Proposes examples of "bullshit": traffic jams, unnecessary meetings, bureaucracy, and arguing online.
- Suggests arguing online is an example of a habit that is addictive, yet bullshit.
- Defines bullshit as things that won't matter to you in the future upon reflection.
- Proposes ways to avoid bullshit
- Proposes a way to savor time
- I'm not sold on the metric of measuring something by how much we value it upon reflection.
- I don't think the premise "Life is short" needs to be established to justify "avoid bullshit."
- The argument is fairly loose in that 99.9% of our lives are bullshit by his definition. Is 99% of sex bullshit?
Interesting piece, smarter than your average bear.
The probability of this essay popping up right after PG started a fire with his economics essay is just too small for there to be no connection. It could be that this essay is in itself bullshit. People lie to themselves to hide truths that are painful but self-evident. I think this essay could be such a lie.
At 4 GB, I'd just as soon query this locally, but this looks like a fun exercise.
I notice that there were 10,729 distinct ASINs out of 15,583 Amazon links in 8,399,417 comments. Since I don't generally (ever?) post Amazon links, I'd be interested in expanding on this in two ways.
First, I'd reduce/eliminate the weight of repeated links to the same book by the same commenter.
Second, I'd search for references to the linked books that aren't Amazon links. Someone links to Code Complete? Add it to the list. In a second pass, increment its count every time you see "Code Complete," whether it's in a link or not.
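For fun, here's a minimal sketch of both passes over a hypothetical list of (commenter, text) pairs -- the regex and the sample data are illustrative, not the real dump schema:

    import re
    from collections import Counter

    ASIN_RE = re.compile(r'amazon\.com/(?:[^/\s]+/)?(?:dp|gp/product)/([A-Z0-9]{10})')

    comments = [
        ("alice", "Loved http://www.amazon.com/dp/0735619670 (Code Complete)"),
        ("alice", "Again: http://www.amazon.com/dp/0735619670"),
        ("bob",   "Code Complete changed how I write functions."),
    ]

    # First pass: count each (commenter, ASIN) pair only once.
    seen, counts = set(), Counter()
    for user, text in comments:
        for asin in ASIN_RE.findall(text):
            if (user, asin) not in seen:
                seen.add((user, asin))
                counts[asin] += 1

    # Second pass: credit plain-text title mentions in comments with no link.
    titles = {"0735619670": "Code Complete"}
    for user, text in comments:
        for asin, title in titles.items():
            if title.lower() in text.lower() and not ASIN_RE.search(text):
                counts[asin] += 1

    print(counts.most_common())  # [('0735619670', 2)]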
It is not the best when it comes to explaining things in an intuitive manner. It is a great reference book with lots of algorithms and proofs.
In recent years I have been drawn more towards Levitin's "Introduction to the Design and Analysis of Algorithms".
Anyone else have similar feelings about "Introduction to Algorithms"?
https://twitter.com/mattyglesias/status/689169613779808257
"The only book ranking that matters"
Is this a result of the author spamming his own work?
Edit: Looks like it; a short skim of "darwin's theorem site:news.ycombinator.com" shows that all links are from user tjradcliffe, who is the author. A case for manual curation of data.
It's the most polarized I've ever seen in my life.
- SICP: Structure and Interpretation of Computer Programs
- CTM: Concepts, Techniques, and Models of Computer Programming
- TAOP: The Art of Prolog
Here's one on understanding the mindset of your investors when raising startup capital - Startup Wealth - http://amzn.to/1Jej8El
I admire the effort. Calling it "Top Books" is slightly misleading, though. Perhaps you could call it "most-mentioned books".
The Four Steps to the Epiphany: Successful Strategies for Products that Win
Author: Steven Gary Blank
Publisher: Cafepress.com
Number of links: 45
Not where I live. What to do about it? Move. Find an employer willing to let you work remotely, and find your own quiet cost-conscious piece of paradise.
Can we get the top 100 books as well? (since many of those would have very similar mention numbers as the end of the top-30)
Thanks for the list though. Bought the psychology one.
Always interesting to read. But just as interesting is how quickly they pop to the top of the home page.
I wrote this curated site from HN several years ago. Got tired of people continuously asking for book recommendations. http://www.hn-books.com/
A couple points of note. The site is 1) an example of a static site, 2) a terrible UI, 3) a place with live searches of the comments on each book from all the major hacking sites, and 4) able to record a list of books that you can then share as a link, like so (which was my reason for making the site):
"My favorite programming books? Here they are: http://www.hn-books.com#B0=138&B1=15&B2=118&B3=20&B4=16&B5=1... "
I started writing reviews each month on the books, but because they were all awesome books, I got tired of so many superlatives!
Thanks for the site.
For VLC and all related VideoLAN projects, we're moving to our own instance of GitLab hosted on our infrastructure.
And to be honest, it's quite good, but a few things are ridiculously limited, to the point that some people in the community are resisting the change.
The first part is groups and subgroups: it seems incredibly difficult to give sub-groups access to repos (like a team for iOS, one for Android, one for libVLC... but all under the "videolan/" group). It seems there is a way with the EE, but not in the CE; and the current idea for the CE is to have sub-projects, which is not good, because it will make our URLs way more complex than needed.
The second part is the bug/issue tracker. We use trac for VLC, and we want to leave it for something better; but GitLab issues are way too limited, even when using the templates. In particular, it seems to be impossible to add custom searchable fields (like "platform", "priority" or "module"), which are very useful for queries. Also, there is no way to build custom queries and store them ("I want all the bugs for Windows that are related to the interface modules").
If I remember correctly, this second part was also a complaint in the open letter to github.
Finally, it's not really related, since it's more of a feature request, but we'd love to allow external people to fork our repos without being able to create completely new ones (or have new ones validated), because we don't want to host every project under the sun (there are github and gitlab for that). So far, you either allow both features or neither.
PS: can we have custom landing pages and custom logo in the CE version? :D :D
So now it even feels like they're doing Git hosting the right way, making the core software open source, and charging for enterprise features.
On the other hand, I would have probably never paid for GitHub if they followed this model. So I don't think GitHub would have been as successful.
What's great about GitLab, there's a release on the 22nd of each month, so you can depend on pretty much continual improvement. Even if you don't think GitLab is suitable for your Open Source project, talk to the team on their issue tracker, things get solved pretty quickly!
I've been a Gitlab user for a few years now, and personally I like it much more than Github. One of the reasons is that I fear Github holds too many projects and gains too much control over OSS; I also dislike their CoS.
Good luck Gitlab!
Custom templates: https://secure.phabricator.com/book/phabricator/article/form...
It's better in almost every aspect than GitLab and GitHub.
See https://en.wikipedia.org/wiki/Phabricator for an (incomplete) list of open source projects using it.
One issue that was raised several times was the ability to not create merge commits. In GitLab you can, as an alternative to the merge commits, use fast-forward merges or have merge requests be automatically rebased.
The main thing keeping me from actually doing it is the network effect... and this:
Right now GitLab.com is really slow and frequently down. This is because of fast growth in 2015.
GitLab still has a ways to go in terms of performance/reliability and polishing their product, but GitHub ought to be very nervous about them.
These are pretty essential.
But if I press "sign in", I am able to sign up, with no notice about it being just a limited (45 day) trial. So so far I'm assuming that this is a perpetually free account, though I'm not completely sure yet...
In general, I liked it, but it always irked me that its Ruby underpinnings made it hard to upgrade/migrate stuff (we basically just swapped LXC containers at one point, not sure how it was handled during the last upgrade). If anyone ever manages to do a credible alternative that does _not_ use Ruby in any way but keeps the overall GitHub-like workflow, a lot of operations folks will switch _instantly_.
(Like https://try.gogs.io/explore, for instance)
Also, like some commenters already pointed out, the CE edition was ridiculously limited in some regards - we mostly skipped the bits we didn't like and did product-level ticketing outside it (using Trac), with Gitlab issues used only for "techie" stuff, tracking fixes, etc.
But today I'd probably just sign us all up for GitHub and be done with it, or fire up a VM image from some marketplace - there's hardly any point in maintaining our own infrastructure or doing a lot of customization.
The only question left is if your servers are powerful enough to run gitlab. Maybe I'll sacrifice a goat for some new server hardware and 256GB of ram.
That being said, what both GitHub and GitLab are missing is actually becoming a "social network", or maybe more an active network. There are tons of interesting projects popping up every day that I would be interested in knowing about and contributing to, but there's basically no way to learn about them.
Kudos to the GitLab team for all its work :)
There is an opportunity for Gitlab here and I'm happy that they decided to make this announcement.
The community is the actual winner of this healthy competition.
What I really don't get is the argument that "we won't liberate feature XYZ from EE because it's only useful for companies with 100+ developers". I think it's quite impressive that you can know what every user of your free software needs, and that you'll protect them from code only suited for enterprise.
I'll still use GitLab (the fact there's a free software version is great), and I'll be the first to fork (or back someone else's fork of) CE as soon as you get acquired and your free software is no longer maintained by you (see: Oracle with Solaris, and every other acquihire ever).
Once their performance increases, maybe we'll see the momentum shift from Github.
I have never used GitLab myself, but some of the features mentioned in the article (like a true voting system) are things I've really longed for. I might have to reconsider and give GitLab a try.
If you want the talent you need, especially in the Bay Area, you have to pay more than what the average developer makes in Amsterdam. I want to like GitLab, but I just can't get that bad taste out of my mouth.
Github OTOH has an extremely usable mobile UI.
1. More than one level of subdivision for groups/projects (see below).
2. Groups of users (call it a department/team): because of 1 we have a lot of groups (all the small libs are in separate projects, so each project is itself a group, and of course we have several projects), so every time somebody joins the company we have to add them to every single project just so they can read the code. We also have subcontractors, and we would like a nice way to keep them separate from the others.
I've implemented two self-hosted Gitlab instances at work; for one of the instances on our private network, I'm still fighting with IT to allow us to use LDAP. Gitlab EE is still beyond our pockets, as management isn't too keen to pay for it, at least yet, but I hope that we'll get there.
Our self-hosted instance is also a bit slow, though not as slow as Gitlab.com, and if it were written in a language I'm familiar with, perhaps I and some of my team could contribute to making it faster. Pity I don't have enough time left in the day to learn Ruby. I've read up a bit on the work going on around Unicorn and workers, but maybe some of these things could be better written in other, more performant languages?
For personal projects I still use Bitbucket + JIRA. I got to the point where I decided to stop looking for freebies and pay. JIRA has been awesome, totally worth the price.
There seems to be a lot of hating on GitHub here, but I personally love GitHub (and we use GitLab at my current employer).
I think GitLab is doing a great thing, and I appreciate that their community edition is free and open source, but GitHub has been able to provide an invaluable service. They have a great community that facilitates open source projects and a vastly better UI than GitLab (though that isn't saying much with how awful GitLab's UI is).
I'm eager to see how GitHub evolves in the future with GitLab as a competitor, as GitLab has a lot of nice features (built-in CI, etc).
How does GitLab compare to Phabricator?
I love gitlab (even made a git tool to easily create repositories from the commandline, gitgitlab) but these small things make a real difference. I'll end up paying for a github organization account just to get this annoyance out of the way.
By the way, we're using self-hosted Gitlab at work and we love it. This isn't a knock against the actual product. In fact, I think Gitlab has improved tremendously in the last 18 months. I just wish they would be a little more up-front about their marketing efforts.
I think the spammer is trying to make a point! For starters, there seems to be no rate limit applied.
I use a command line for everything else in life; but with Git I'm hopeless.
Who gets to decide which features are enterprise-only? How are these enterprise-only features: "Hosting static pages straight from GitLab", "git-annex", "git hooks", etc.?
Get a crippled version that doesn't fit reasonable expectations, or pay for an enterprise edition that comes with a big bag of features I have no use for, just to get the missing pieces that I can have on github for free? (And I'm not a big fan of github.)
As such gitlab community is not very useful to me and does not seem to have a future because its chosen business model goes against its usefulness to people.
Is this a joke? I mean, for people looking for free private Git hosting, there is Bitbucket. This statement is like saying "free, but not really." The fact is that if I want hosted Git from Gitlab, I cannot reliably get it without paying at least $390 upfront for their EE plan. Too much smokescreen, too little actually on offer.
The example I always use, the occasion when it first occurred to me, was a couple years ago when, for some reason, I decided I wanted to make a foam for a cocktail. Within 5 minutes, I had found a video on Youtube illustrating how, not to mention a dozen other sites documenting various techniques.
I imagined being back in the 1980s or 90s and confronting the same wild impulse. How would I have figured this out? Asked a couple people perhaps. Contemplate a trip to my local library. Maybe make a mental note to chat with a bartender next time I found myself at a cocktail bar. Probably just give up on the idea and go back to watching the A-Team.
This is a rather trivial example. But then consider the ease and dramatically lowered TTK where programming knowledge (via StackOverflow) or general knowledge (Wikipedia) is concerned. The internet itself cut the lag. But it was first Google, then Wikipedia, that turned TT#$&!%&@ (Time To me cursing that I have access to all this potentially useful information that I can't quite seem to reach) to TTK, Time To (real meaningful well-organized) Knowledge.
Wikipedia is a proof of a utopian vision that infused the early web - ensuring public rights wins public contribution.
Humanity's collective knowledge is better distributed because of Wikipedia, a true wonder of the modern world.
IMHO, we must treasure wikipedia as it is not clear it could happen again and it embiggens us all.
Good on them! It's one of the best things humanity has ever created. Hopefully they'll find a funding strategy that doesn't make them constantly feel like panhandlers. They provide uncountably huge value, yet I suspect with their current marketing, even very heavy readers rarely donate.
EDIT: Sounds like they are working on it: https://15.wikipedia.org/endowment.html
For casual readers like myself it's also a real pleasure to occasionally just dive into a section of history, follow the links around, and learn about the world. Same goes for various other topics but that's the one that came to mind.
Here's to hoping Wikipedia sticks around for a long time to come.
* San Francisco (Saturday): https://en.wikipedia.org/wiki/Wikipedia:Meetup/San_Francisco...
* New York City (Saturday): https://en.wikipedia.org/wiki/Wikipedia:Meetup/NYC/Wikipedia...
* Boston (Saturday): https://meta.wikimedia.org/wiki/Wikipedia_15/Events/Boston
* Bangalore (Sunday): https://meta.wikimedia.org/wiki/Wikipedia_15/Events/Bangalor...
* London (Sunday): https://meta.wikimedia.org/wiki/Meetup/London/101
* Portland, Seattle, Vancouver (Saturday, meet Ward Cunningham!): https://meta.wikimedia.org/wiki/Wikipedia_15/Events/West_Coa...
New York will feature a talk about Wikidata, how to query it with SPARQL, and how we are integrating it with Wikipedia and pushing forward the Semantic Web. Other NYC talks include things like "Git-flow approach to collaborative editing", "Copyright and plot summaries", and "Automated prevention of spam, vandalism and abuse". We will be linking up with San Francisco and likely some other cities for a global teleconference at 4:00 - 5:00 PM ET (21:00 UTC).
If you're interested, sign up and stop by!
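For anyone who can't make it, a minimal taste of the kind of Wikidata querying the talk covers -- the SPARQL endpoint is real; the query (instances of house cat, Q146) is the standard tutorial example:

    import requests

    query = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    LIMIT 5
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "wikidata-demo/0.1"},  # the endpoint wants a UA
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["itemLabel"]["value"])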
I felt certain they would fail to achieve critical mass in order to become the large scale success that they have.
Glad to be proven wrong! And congrats.
I have contributed too. Here's hoping they solve the latest set of challenges with the insider community.
(I am being facetious; I bloody love Wikipedia and do donate, but you think they'd be more careful about this sort of thing)
It was quite a surprise when he turned up years later in an entirely different context as a founder of Wikipedia - though I'm not surprised he did something big. His charisma showed in his MDOP contributions and he always seemed destined for something big. Congrats Jimbo, and all the other people who have made Wikipedia, for this amazing asset to humanity.
Which is why I love they have shared their system open source so others can use it.
The real issue boils down to a filter-bubble problem, and Google isn't helping avoid it. It's that people use Wikipedia as a panacea and forgo actually following the sources far too often.
Shades of "trusting trust", but instead of compilers it's editors and censorship.
1. My Time Machine backup (primary backup)
2. BackBlaze (secondary, offsite backup)
3. Amazon Glacier (tertiary, Amazon Ireland region)
I only store stuff that I can't afford to lose on Glacier: photos, family videos and some important documents. Glacier isn't my backup, it's the backup of my backup of my backup: it's my end-of-the-world-scenario backup. Only when my physical harddrive fails AND my backblaze account is compromised for some reason will I need to retrieve files from Glacier. I chose the Ireland region so my most important files aren't even on the same physical continent.
When things get so dire that I need to retrieve stuff from Glacier, I'd be happy to pony up 150 dollars. For the rest of it, the 90 cents a month fee is just cheap insurance.
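(For the curious: a minimal boto3 sketch of what such an upload looks like. The vault name and file are made up; upload_archive is the real API call, and eu-west-1 is the Ireland region.)

    import boto3

    glacier = boto3.client("glacier", region_name="eu-west-1")  # Ireland

    with open("family-photos-2016.tar", "rb") as f:
        resp = glacier.upload_archive(
            vaultName="end-of-the-world-backup",
            archiveDescription="family photos, 2016",
            body=f,
        )

    # Keep this somewhere safe: you need the archiveId to ever get it back.
    print(resp["archiveId"])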
Google Nearline is a much better option IMO. Seconds of retrieval time and still the same low price, and much easier to calculate your costs when looking into large downloads.
First of all, I just woke up (it's morning here in Helsinki) and found a nice email from Amazon letting me know that they had refunded the retrieval cost to my account. They also acknowledged the need to clarify the charges on their product pages.
This obviously makes me happy, but I would caution against taking this as a signal that Amazon will bail you out in case you mess up like I did. It continues to be up to us to fully understand the products and associated liabilities we sign up for.
I didn't request a refund because I frankly didn't think I had a case. The only angle I considered pursuing was the boto bug. Even though it didn't increase my bill, it stopped me from getting my files quickly. And getting them quickly was what I was paying the huge premium for.
That said, here are some comments on specific issues raised in this thread:
- Using Arq or S3's lifecycle policies would have made a huge difference in my retrieval experience. Unfortunately for me, those options didn't exist when I first uploaded the archives, and switching to them would have involved the same sort of retrieval process I described in the post.
- During my investigation and even my visits to the AWS console, I saw plenty of tools and options for limiting retrieval rates and costs. The problem was that since my mental model had the maximum cost at less than a dollar, I didn't pay attention. I imagined that the tools were there for people with terabytes or petabytes of archives, not for me with just 60GB.
- I continue to believe that starting at $0.011 per gigabyte is not an honest way of describing the data retrieval costs of Glacier, especially when the actual cost is detailed, of all things, as an answer to a FAQ question. I hammer on this point because I don't think other AWS products have this problem. (A back-of-the-envelope sketch of the formula follows after this list.)
- I obviously don't think it's against the law here in Finland to migrate content off your legally bought CDs and then throw the CDs out. Selling the originals, or even giving them away to a friend, might have been a different story. But as pointed out in the thread, your mileage will vary.
- I am a very happy AWS customer, and my business will continue to spend tens of thousands a year on AWS services. That goes to something boulos said in the thread: "I think the reality is that most cloud customers are approximately consumers". You'd hope my due diligence is better on the business side of things, as a 185X mistake there would easily bankrupt the whole company. But the consumer me and the business owner me are, at the end, the same person.
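As promised above, a back-of-the-envelope sketch of the old retrieval formula as I read that FAQ: you are billed on your peak hourly retrieval rate, multiplied across all ~720 hours of the month. Numbers are illustrative only and ignore the free-tier allowance:

    HOURS_PER_MONTH = 720
    RATE_PER_GB = 0.011        # the headline per-GB figure

    gb_retrieved = 60          # the whole archive
    hours_taken = 4            # one impatient burst

    peak_hourly_gb = gb_retrieved / hours_taken            # 15 GB/hour
    naive_cost = gb_retrieved * RATE_PER_GB                # ~$0.66
    actual_cost = peak_hourly_gb * HOURS_PER_MONTH * RATE_PER_GB

    print(f"what you'd expect:      ${naive_cost:.2f}")
    print(f"what the formula gives: ${actual_cost:.2f}")   # ~$118.80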
It's even less suited to disaster recovery (unless you have insurance).
Think about it: for a primary backup, you need speed and ease of retrieval. Local media is best suited to that, unless you have an internet pipe big enough for your dataset (at a very minimum, 100 meg per terabyte).
A 4-8 hour recovery time is pretty poor for a small company, so you'll need something quicker for primary backup.
Then we get into the realm of disaster recovery. However, getting your data out is neither fast nor cheap: at ~$2000 per terabyte just for retrieval, plus the inherent lack of speed, it's really not compelling.
Previous $work had two tape robots: one was 2.5 PB, the other 7(ish). They cost about $200-400k each. Yes, they were reasonably slow at random access, but once you got the tapes you wanted (about 15 minutes for all 24 drives) you could stream data in or out at 2400 megabytes a second.
Yes, there is the cost of power and cooling, but tape runs fairly cold unless you are going at full tilt.
We had a reciprocal arrangement where we hosted another company's robot in exchange for them hosting ours. We then had DWDM fibre to get a 40-gig link between the two server rooms.
The idea would be that the data would either never be restored or you could compel someone else to foot the bill or using cost sharing as a negotiation lever. (Oh, you want all of our email for the last 10 years? Sure, you pick up the $X retrieval and processing costs)
Few if any individuals have any business using the service. Nerds should use standard object storage or something like rsync.net. Normal people should use Backblaze/etc and be done with it.
Yes, the docs are imperfect (and were likely worse back in the day). And it was compounded by the bug, apparently. But it's what everyone on HN has learned in one way or another... RTFM.
Was it mentioned in the article that the retrieval pricing is spread over four hours, and that you can request partial chunks of a file? Heck, you can always retrieve all your data from Glacier for free if you're willing to wait long enough.
And if it's a LOT of data, you can even pay and they'll ship it on a hardware storage device (Amazon Snowball).
Anyone can screw up; I'm sure we all have done it, goodness knows I have. But at the very least, pay attention to the pricing section, especially if it links to an FAQ.
You pay a lower per-kilowatt-hour rate, but your demand rate for the entire month is based on the highest 15-minute average in the entire month, then applied to the entire month.
You can easily double or triple your electric bill with only 15 minutes of full-power usage.
I once got a demand bill from the power company that indicated a load that was 3 times the capacity of my circuit (1800 amps on a 600 amp service). It took me several days to get through to a representative that understood why that was not possible.
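A toy illustration of the mechanics, with made-up rates:

    energy_rate = 0.06    # $/kWh, the "lower per-kilowatt-hour rate"
    demand_rate = 15.00   # $/kW, applied to the month's peak 15-minute average

    baseline_kw = 2.0     # normal draw
    spike_kw = 20.0       # fifteen minutes at full power
    kwh_month = baseline_kw * 720

    bill_no_spike = kwh_month * energy_rate + baseline_kw * demand_rate
    bill_with_spike = kwh_month * energy_rate + spike_kw * demand_rate

    print(f"without the spike: ${bill_no_spike:.2f}")    # $116.40
    print(f"with one spike:    ${bill_with_spike:.2f}")  # $386.40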
Has anyone tried this or know of a gotcha that would exclude this?
And I realize that for the OP's situation, it wouldn't have mattered since he thought he was going to get charged a fraction of this.
These days the infrequent access storage method is probably better for most people. It is about 50% more than Glacier (but still 40% of normal S3 cost) but is a lot closer in pricing structure to standard S3.
Only use glacier if you spend a lot of time working out your numbers and are really sure your use case won't change.
 - 5 cents per 1,000 requests adds up with a lot of little files.
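To put a number on that figure:

    files = 1_000_000              # e.g. a photo library stored file-by-file
    cost_per_1000 = 0.05
    print(f"one pass over {files:,} files: ${files / 1000 * cost_per_1000:.2f}")
    # one pass over 1,000,000 files: $50.00 -- so tar up small files first.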
That's something that generally keeps me from using AWS and many other cloud services in many cases: the inability to enforce cost limits. For private/side project use I can live with losing performance/uptime due to a cost breaker kicking in. I can't live with accidentally generating massive bills without knowingly raising a limit.
My only experience of using boto was not good. Between point versions they would move the API all over the place, and being amazon some requests take ages to complete.
After that I worked with the Google APIs, which were better, but still not what I'd describe as fantastic (hopefully things have gotten better over the last 2 years).
As noted by others here, if you treat glacier as a restore-of-absolute-last-resort, you'll have a happier time of it.
Perhaps I'm being churlish, but I railed at a few things in this article:
If you're concerned about music quality / longevity / (future) portability - why convert your audio collection to AAC?
Assuming ~650MB per CD, and the 150 CD's quoted, and ~50% reduction using FLAC, I get just shy of 50GB total storage requirements -- compared to the 63GB 'apple lossless' quoted. (Again, why the appeal of proprietary formats for long term storage and future re-encoding?)
I know 2012 was an awfully long time ago, but were external magnetic disks really that onerous back then, in terms of price and management of redundant copies? How was the OP's other critical data being stored (presumably not on Glacier)? E.g. my photo collection has been larger than 60GB since way before 2012.
Why not just keep the box of CD's in the garage / under the bed / in the attic? SPOF, understood. But world+dog is ditching their physical CD's, so replacements are now easy and inexpensive to re-acquire.
If you can't tell the difference between high-quality audio and originals now - why would you think your hearing is going to improve over the next decade such that you can discern a difference?
And if you're going to buy a service, why forego exploring and understanding the costs of using same?
I'm really doubting the need for a maintenance regimen on a drive which is almost entirely unused. Could have spent $50 on a magnetic-disk-drive and saved yourself hours worth of trouble.
I currently have 100GB of photos on Glacier. I am going to be finding another hosting provider now.
I ended up using some cheap VPSes, two of them located in two different countries. And it's still cheaper than, say, Dropbox.
Does s/he substantiate this claim in any way? AFAIK glacier's precise functioning is a trade secret and has never been publicly confirmed.
I'm surprised that this aspect has not been mentioned here in the comments yet:
> I was initiating the same 150 retrievals, over and over again, in the same order.
This was the actual problem that resulted in the large cost.
At my old job we would get a lot of complaints about overage charges based on usage to our paid API. It wasn't as complicated of pricing as a lot of AWS services, just x req / month and $0.0x per req after that, but every billing cycle someone would complain that we overcharged them. We would then look through our logs to confirm they had indeed made the requests and provide the client with these logs.
I'll answer any questions you might have about local sentiment and coverage.
EDIT: Having finally gotten to a copy of the article, I want to point out several factual statements in it that I find a little bit challenging to agree with.
1. Iceland let the banks fail.
Not really. It was a restructuring. In the restructure a holding company handed all the assets over to a new corporate entity. All debts and obligations were honored.
2. Iceland avoided austerity
Many social programs were cut. Great cutdowns in the healthcare system, so much so it is on the verge of collapse now.
3. Geothermal is clean energy
Ask anyone from Reykjavík. Silver can no longer be kept without wrapping it in cloth; otherwise it goes dark in a day due to the hydrogen sulfide pollution. It's also not renewable: you can farm the area for about fifty years before it's too cold to extract, and then it takes a millennium to recover.
4. Value of ISK/EUR
That may be the artificial value while capital controls are in effect. True value is way below that. I might cover the "snowhenge" problem later, it's a doozy.
5. Quoting the President on the banks.
You would have to know how much of a fluffer for the banks and the oligarchs the president was before the crash to know how stinky that sounds to an Icelander.
The Report of Althing is, in my opinion, the most comprehensive since the Amulree Report of Newfoundland in 1933, and one of the very few reports to investigate causes of the crisis.
As a matter of interest, I seem to recall that 83 million euros was given to Icelandic bankers for their part in the crisis. The proceeds unaccounted for numbered in the billions. Much of what the banks did was modelled on Enron-style deception, a practice which remains quite commonplace in today's financial marketplace.
I know I'd want to take my earnings and get the hell out, if my government started bowing to leftwing populism and came for the bankers like a medieval pitchfork-waving mob.
Capital controls mean that the "recovery" is completely artificial, and until they're fully lifted we won't know just how deep Iceland's problems really go.
From a game theory perspective persecuting bankers is correct. The incentive for people who are already wealthy to take massive risks gets reduced when they know they may go to prison when it all hits the fan. It has little to do with real justice, though.
Say you don't want your car stolen: go and get some insurance, and let insurance companies figure out how to prevent theft and find stolen cars. Let them figure out how to make stolen cars unusable. It's just an example, but I think it's doable.
If somebody offered me a billion dollars at a 3% interest rate or whatever it is right now, I would seriously consider accepting it knowing that I don't have the capacity to pay them back. There are a lot of things in the world that pay better than 3% interest and that are relatively safe.
Similarly, a bank that gave out loans whenever you asked (up to the 10-loan limit, maybe) would be a great opportunity even for people who couldn't pay it back out of income. You could buy an empty plot of land and put a small apartment complex on it. You could buy a 4-plex and apply any number of forced-appreciation techniques (put in a coin-operated laundry, redo the piping to reduce the amount spent on plumbers, plant some flowers outside to increase the attractiveness, etc.). Of course, just betting on general appreciation of single-family homes is bad money.
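The arithmetic behind the temptation, with the 3% from above and a made-up "relatively safe" return:

    principal = 1_000_000_000
    borrow_rate = 0.03
    earn_rate = 0.05    # hypothetical

    annual_spread = principal * (earn_rate - borrow_rate)
    print(f"net carry per year: ${annual_spread:,.0f}")  # $20,000,000
    # The catch: "relatively safe" assets yielding more than the borrowing
    # rate tend to be exactly where the risk hides.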
Basically, I was too young to really experience the effects of 2008, and I'm feeling envy rather than anger about these people's ability to cheaply obtain capital. So I don't empathize with these news reports about punishing bankers. Can someone explain why I should feel anger instead?
Longest term was six years.
Here in New Zealand, we have many native species of birds, insects, frogs, lizards and the like that thrived when our islands were cut off from the rest of the planet, but that have become extinct, or are in imminent danger of being so due to introduced predators such as rats, stoats, hedgehogs, ferrets, cats etc. etc.
It leads to the bizarre situation that conservation here is largely about killing things.
 http://www.radiolab.org/story/brink/
 https://en.wikipedia.org/wiki/Judas_goat
There is also a book out now on the insects.
1. It's a feature, not a product. And a simple one, conceptually. As much as I'd love to have competition in apps offering this functionality (like keyboards), "make my screen more red" isn't exactly rocket-science.
2. It's not well-designed. Their messaging mixes up two very different use-cases: matching the color of your room and aiding your sleep. That's ok - I use it for both - but there's no way to customize it. Even a super-simple option would let me communicate that I only want it on after 10:30 pm, a couple hours before I go to sleep, when I darken my room. Instead I need to deal with it automatically turning on every day at 4:30 pm, which makes something that should be simple very cumbersome (I have to manually turn it on and off every day in the winter).
I can only imagine what this post would have looked like had it been say, Google in question.
So now that Apple has released their own screen dimming app, is Apple's implementation any different than flux's? Or did Apple effectively just abuse their app policy so that they could proactively kill a competitor to their "new" feature?
After a while I just gave up and uninstalled f.lux. Instead, I created a 4500K white point copy of the default color profile and manually switched to it at night, which seemed to have the same effect. It also prevented those big flashes whenever switching to and from full-screen apps.
I'm grateful that f.lux has pushed this issue to the point of getting traction as a built-in feature from Apple (and I do hope Apple brings it to OS X at some point), however unless f.lux becomes open-source, I don't plan to reinstall it.
Ideally, I'd like to be able to watch TV/use a computer in a dark room and not have to worry about it being so bright it gives me a headache, but also have good color quality. Software color temperature apps handle the headaches pretty well, but they don't save power and they don't let me see true colors.
It would be really nice to see some more hardware effort put into low-intensity backlights, especially color reproduction at lower brightness settings.
> Apple announced this week that they've joined our fight to use technology to improve sleep.
Right from the opening sentence, this piece begins on a positive note. It isn't f.lux vs apple. It's flux and apple versus the overarching problem, and that's a much more effective statement than the bitter fight that all of us were probably expecting. I'm very impressed by the f.lux team's maturity.
If I were to teach a professional writing course, I would show this piece to my students as an outstanding example of how spin can affect the reader's perception.
I mean, look at the difference in abrasiveness of spectrum between these bulb technologies:
F.lux is great and helps, but what we really need is an ergonomic monitor. That would truly give us healthier eyes and an improved circadian rhythm.
Monitor tech is hard but this sounds like a great goal for a startup. :-)
My understanding is that an Android version would have to require a rooted phone to really do everything properly, which is a significant limitation, but rooting your phone is completely Google-approved and there are plenty of apps in the Android app store that openly require it. If (not unreasonably) f.lux is really concerned about reaching users who aren't savvy or interested enough to root their phones, then a root-only version would be an ideal test case to encourage Google to open up the API.
I switched to Process Explorer and found f.lux connected to three different addresses out there on the wild internet.
Fishy, to say the least. Do you need permanent internet connections to do what you're ostensibly doing? I replaced it with the open-source alternative "redshift".
What makes a private API private? Is it merely undocumented, but still usable in exactly the same way as a "public" API? I.e., in my code I invoke it like normal, and I just need to know the name?
Or do I have to fiddle with the compiled code of my app to get it to call the instruction location of the otherwise invisible function?
If they were meant to be private, why can't the app, which surely runs in some underprivileged mode, be blocked from calling a function that knows it itself is privileged?
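As far as I understand it, mostly the former: "private" usually just means undocumented, and the loader resolves the symbol by name exactly as it would a public one. A sketch of the mechanism with Python's ctypes, using a documented libc function as a stand-in for an undocumented symbol:

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Nothing distinguishes this lookup from calling a "public" function;
    # the loader only cares that the name exists in the symbol table.
    print(libc.getpid())

Which is presumably why Apple leans on App Store review (scanning binaries for private symbol references) rather than on a runtime privilege check.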
I know some people feel they only have one good idea in them, but I think that results from either a) aiming too high on subsequent rounds, or b) phoning it in. But if you love to work, just start small and you'll avoid both of those things.
"90 hours a week and loving it. Like the T-Shirt? I'm going to give it to my people. Some of them work even more than 90 hours a week." (Steve Jobs depiction)
Woz says it's the only accurate one about Jobs and company. So, whether those were his exact words or not, I can only assume Jobs worked his people to death to achieve Apple's success. Other things are consistent with that. Then I hear they're "joining" f.lux to help their mission of improving sleep or whatever. Haha...
If this is truly the world health issue they think it is, now that iOS is taken care of seems like they should focus on Android, TVs and Kindles instead of begging Apple to let them compete with a built-in feature.
After their last PR push and petition campaign (which landed them on various media outlets and the HN homepage twice in three days) it took them 7 weeks to land 5,000 signatures.
I appreciate their passion but talking about cancer, weight gain and acne -- while providing affiliate links to salt lamps and Swarovski crystals -- just feels weird.
Tinting the color using F.lux is helpful, but it doesn't supplant the boon that a proper ergonomic monitor would provide for eye and circadian-rhythm health.
I'm confused, why do they want this? Night Shift does what f.lux does. Apple opening up the APIs to allow f.lux to run on iOS seems rather pointless, since the OS is already doing the same thing.
I've met founders like this, recently. They're building some sort of social app that does something no one needs and no one would pay for. Maybe I'm being dense and they're actually geniuses, but I don't think so. Most startups fail, and I think a lot of that is because a lot of startups are really stupid. They're "solving" worthless non-problems because the founders aren't doing the hard work of finding real, valuable problems to solve.
Building a great product is really hard. And it can't all be done completely lean - at some point, you need to envision something so profound that people don't realize how much they need it. You don't just write Hello World, charge a buck, growth hack, and brag about how you got two bucks for it the next week. As Henry Ford said, if he'd asked customers what they wanted, they'd have said faster horses. Don't build a faster horse. You won't be great that way.
Whatever happened to growing slowly, proving the revenue stream before you throw millions of dollars at something? Why is that such a bad thing?
I understand from a VC's point of view why it's a bad thing, and that's the point of view which YC and other incubators are coming from. But why is it a bad thing for the entrepreneurs?
Taking time to consider what you're building is important. Taking time to grow is also important.
I have a warning for future, young developers: do NOT join a startup that hasn't found product/market fit and isn't trying, with all of its might, to find that product/market fit. And when I say "with all of its might", I mean avoid companies where product/market fit isn't the first and foremost goal of the startup.
This is really hard to determine from the outside.
For example, a friend of mine is working in a well-funded, post series A startup that just declared internally its biggest target was a billion dollars of products shipped by the end of the fiscal year. The team is smart, the founders are passionate, and they've been judicious with their funding.
Problem is that the CEO herself admitted they don't yet have product/market fit. Which is to say their "guiding light" metric is a lagging indicator, i.e. it measures the tail-end of their efforts, and certainly not how much customers love them (and are willing to pay). It's the equivalent of early Facebook using advertising revenue or page views as their growth target instead of monthly active users.
As Sam Altman points out, everything they do will be "hacking" or, as PG puts it, doing tricks that are not sustainable for a real, billion dollar company, in attempts to achieve that misleading number.
At least from what she told me, the most senior engineer said that this isn't bad: the engineers will focus on product/market fit and the sales/marketing teams will focus on growth. Of course, she didn't say that, in a conflict between the fake growth target and product/market fit, the fake growth target always wins. Even now, I'm trying to pull my friend out, before the mental gymnastics set in.
Don't be fooled. A startup needs a real, sometimes small, star in the sky to navigate churning waters. A ship that chases the moon is a ship that will soon be sinking.
This essay, while it discusses a really important idea, comes across as saying: "Foolish, fashion-focused founders! Clearly retention comes first! We would never imply that you should focus on growth before your product had adequate retention!"
But that's exactly what I've seen YC partners do. Recommend that companies focus on growth, because that's what YC's definition of a startup is: a company that grows very quickly. Having seen both sides of the curtain, the essay leaves me with a greasy, queasy feeling.
Am I missing something here?
Sam, are you changing your standing advice from "focus on growth" to something else? If that's what this post signals, please take ownership of the change and spell it out.
It's poor form to imply that misguided founders (and their devotion to a fad) are driving the growth zealot craze when you've had a hand on the wheel for years.
It all comes back to this. I was listening to Aaron Harris' interview with Digital Ocean's Mitch Wainer on Startup School Radio the other day (phenomenal podcast). Wainer mentioned how people actually love Digital Ocean, and how rare it is for that word in particular to be paired with the name of a company. I think it's said so often that it can easily be glossed over, but that word is not chosen lightly. Having a product users actually love is extremely difficult, and extremely rewarding.
Many larger companies use "Net Promoter Score" as a way of measuring this.
Of course, like Agile, there are consultants, a methodology, and books, but there is some truth in asking your customers whether they would promote you to another person.
You can get this from your data at sufficient scale. You can get it from surveys like NPS. You can get it from talking to customers.
One problem I have with the word "growth" is that it implies top level acquisition. A better definition would be hitting your goals. If you want to grow your retained userbase, then getting top level acquisition isn't actually hitting your goal.
But by then, the startup has already won. They already got the funding thanks to their sleazy marketing, and the social validation/connections that comes with it.
The essay doesn't contradict the "it is easier to ask for forgiveness than permission" idiom, unfortunately.
I've seen many VCs who are not data-literate enough to understand the difference between measuring product value and growth without substance.
I'm sure that whatever your business practices are, after boiling everything away, you should be doing this above all things. An article like this suggests that a sizable number of organizations are losing sight of the way.
The downside is that very few products are so loved that customers will pay for 12 months in advance.
If you are relying on your users to drive growth, an easy-to-measure proxy of "do any users love our product so much they spontaneously tell other people to use it?" is Net Promoter Score, or the difference between the % of users who would recommend you, and those who would not.
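For concreteness, here's a minimal sketch of that calculation in Python, assuming the standard 0-10 survey buckets (9-10 promoters, 0-6 detractors, 7-8 passives); the function name is mine:

    def nps(scores):
        """Net Promoter Score from 0-10 survey answers.
        Promoters are 9-10, detractors are 0-6; passives (7-8)
        count in the total but neither add nor subtract."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # nps([10, 9, 10, 8, 7, 6, 3]) -> 100 * (3 - 2) / 7 ~= 14.3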
There are other (perhaps non-SV startup) businesses that don't rely on users to drive growth, for example if you rely heavily on distribution arrangements or retail placement. In that case you might want to look at other metrics that are leading indicators of growth.
(I'm paraphrasing Buffett)
If you have to take away one thing, then yes, it will be growth, but this generalization is very dangerous because, as Sam's post states, you need a product before growth. The better advice is not to concentrate on one thing but to say bluntly that you need everything, and it will all come together. You need a great team, a great product, and great growth. If one is weak, you need to fix it.
It seems stupid to do something just because you think you're expected to do it rather than think for yourself and do what you want to do and think would be a good idea.
Find product-market fit before taking investment, because the only reason to take VC investment (for the type of startups we are talking about) is as a steroid injection towards growth.
Why is this the only time it makes sense? Because that's the only way your and your investors' incentives are aligned.
Now, whether you need to/should grow fast to survive in the long term is another question.
How much or little has changed?
I'm excited to now have an "official" source I can easily cite for it!
I think there are some businesses where this might be less true, but it's pretty rare.
Question - is this a valid metric when talking about B2B companies? I would imagine there are plenty of B2B solutions which became huge, but which don't exactly have companies rushing out to tell everyone to use them.
I tend to think sama's question is only relevant for B2C, but I'd love to be proven wrong.
I wonder how this would apply to, say, Uber and AirBnB. What kind of monopoly do they have?
You always have two constituencies to think about: (A) your users / customers, and (B) your not-yet-users/customers.
"Before growth", you should really pay very very little attention to constituency B. If you're doing anything interesting, there will always be a torrent of skeptics, doubters, and haters. (That sucks, but it's very human, and you just have to ignore it.)
Focus entirely on constituency A. There are three "R" metrics to look for: (1) Retention. (2) Revenue. (3) Referrals. (Note: these are the three "RRR" of the classic "AARRR" model.)
Initially, constituency A might just be your friends or extended network, whom you can convince to give what you've built a try. You may be able to get them to try your new product once or twice, but that's about it.
However, if they keep using it again and again voluntarily, that's a really good sign (Retention). If they voluntarily and happily pay for it, that's great (Revenue). And if they voluntarily tell other people they know about it (Referrals), that's amazing!
Until you have those nailed -- or at least retention and referrals if you're deferring monetization -- don't worry at all about the "B" group (i.e. don't worry about acquiring new leads at the top of your funnel), because you don't have product-market fit yet.
Once you do have a product that the constituency A people love, only then can you start thinking about constituency B and how to turn them into new happy users / customers. (While continuing to make your constituency A happy!)
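As a toy illustration of tracking those three R's, here's a sketch that computes them from a hypothetical event log; the tuple layout and week-based cohorting are my own assumptions, not anything from the essay:

    # Hypothetical event log: (user_id, week, kind) tuples, with kind in
    # {"use", "pay", "refer"}. All names here are illustrative.
    def rrr(events, week):
        used_before = {u for u, w, k in events if k == "use" and w < week}
        used_now = {u for u, w, k in events if k == "use" and w == week}
        paying = {u for u, w, k in events if k == "pay" and w == week}
        referring = {u for u, w, k in events if k == "refer" and w == week}
        # Retention: fraction of earlier users who came back this week.
        retention = len(used_before & used_now) / max(len(used_before), 1)
        return retention, len(paying), len(referring)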
I think Sam is going a step further to place extra emphasis specifically on Referrals out of all three of these metrics. This is a subtle but powerful insight. Whether your product is B2C or B2B, most purchasing decisions have a huge emotional component. If your product is so good that your users want to tell other people about it, that's a huge step above being merely satisfactory, and goes a huge way toward powering your growth.
I don't care if you're buying a burrito, a car, CDN bandwidth, or an analytics SaaS. You as a buyer want a reasonable degree of certainty that you're going to feel good about that purchase after you've made it. Hearing that referral from your peer, whether a friend who tried that food truck before, or another engineer who used that SaaS provider before, goes a huge way toward giving you that pre-purchase confidence.
If your users aren't excited enough about it to be talking about it with their peers, you may have to adjust your product or your segmentation of the market until you get there.
Does anyone know if the RPi GPIOs can be driven at around 80KHz? I've seen reports that this is possible, but that the USB or video driver tends to lock the CPU for long times, messing with timings - but hopefully running on bare metal would take care of that.
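On the 80 kHz question: I can't speak to bare metal, but under Linux one way to sidestep CPU/driver jitter from userspace is to let the PWM peripheral generate the clock itself, e.g. via pigpio. A rough sketch (GPIO 18 is one of the hardware-PWM-capable pins):

    import pigpio

    pi = pigpio.pi()  # talks to the pigpiod daemon, which must be running
    if not pi.connected:
        raise SystemExit("pigpiod is not running")

    # 80 kHz square wave at 50% duty on GPIO 18. The PWM peripheral clocks
    # itself, so USB/video activity stalling the CPU shouldn't disturb the
    # edge timing the way bit-banging from userspace would.
    pi.hardware_PWM(18, 80000, 500000)  # duty is in units of 1/1_000_000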
Don't get me wrong. Rust is an interesting language. The thing described in this post is well within its capability, i.e. IMO there isn't really anything worth bragging about. Such a trivial thing neither demonstrates the real potential of Rust, nor answers important questions from a real-world engineering perspective.
I'm all for having better tools to write low-level stuff. I have dabbled with Rust and the experience was eye-opening. I think Rust still has a lot of catching up to do, though.
Has that changed?
On the Neo900, the modem is connected via USB (bus; there is no physical connector), which means it doesn't have DMA. There is no feasible open-source baseband. OsmocomBB (http://bb.osmocom.org/trac/) is the closest thing to one, and it is relatively incomplete and works on a very limited range of mostly badly outdated hardware, none of which would really be reasonable to use in a phone to be manufactured today.
The systems security of modern phones is surprisingly complex. Google and Apple both care very deeply about these problems, and both have extremely capable engineers working on them. Without getting too far into the weeds: they haven't ignored the baseband.
There are lots of reasons GSM won't work, or is hard to make work. What are the options? As more and more carriers in the USA provide wifi dongles that connect over 3G, maybe it's better to just do that, and move off making calls directly from your phone completely?
For example, it might make sense to buy some phone, connect it to a device (or flash it with some software) that makes it essentially a portal for phone calls of sorts, and give it sandboxed access to your network. It's significantly harder for GSM backdoors to be effective if the entire device is sandboxed right? Maybe this way, as you roam around, you can somewhat securely communicate over IP to your call-making device, and make/receive calls?
[EDIT] - Thinking about it, the suggestion is moot, since all someone would have to do is write some software to replay messages, or leak messages or some other nefarious thing, and stick it on the baseband of the device -- even if it can't damage your network it's still quite insecure.
Maybe we should just give GSM up altogether, and start trying to move ourselves (and the world) to only communicating over IP (which we have a shot at securing, assuming modern crypto isn't completely broken)? What is the situation like with completely open-source wifi connectivity?
> It would, in my view, be abject insanity not to assume that half a dozen or more nation-states (or their associated contractors) have code execution exploits against popular basebands in stock.
Indeed. Such a design coupled with very obscure and closed baseband firmware is a security nightmare. One should ask who was pushing for such an approach.
According to the iOS security white paper, the baseband firmware is part of the secure boot chain, and has its own secure boot chain.
This allows me to assume it's very hard to inject or replace the firmware with malicious code. Whether or not the firmware itself has a backdoor I don't know, but at least one major phone manufacturer knows this firmware is very important for security.
If it could, anyone with a logic probe could grab un-DRM'd video data out of the RAM, which would make many people very unhappy.
Of course the same branded phone (e.g. "Galaxy S6") has many different models across the world, most using integrated Qualcomm chips. Honestly, looking at the list of variants and thinking about the conservatism of RF and telecom regulatory regimes, you'd have to be naive to think the whole ecosystem doesn't simply exist under the control of major intelligence agencies. Communications have always been regarded as dangerous.
An MPU has all of the same protection domains that an MMU does, except for a few major differences:
- The total number of protected regions is very small (12 or so), such that the hardware cost is somewhat smaller than a TLB cache.
- To offset the small number of regions, the size of each region can be almost any power of two.
- The MPU does not perform address translation, again reducing the hardware cost.
Thus, the kernel can configure the peripheral's DMA engine to only allow access to a page or few.
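A minimal sketch of those constraints (the ~12-region count is from the list above; the base-aligned-to-size rule is my assumption, based on classic ARM-style MPUs):

    def mpu_region_ok(base, size, regions_in_use, max_regions=12):
        """Check a candidate protected region against the constraints above:
        few regions, each a power of two in size, base aligned to size."""
        power_of_two = size > 0 and (size & (size - 1)) == 0
        return (power_of_two
                and base % size == 0
                and regions_in_use < max_regions)

    # e.g. restricting a DMA engine to a single 4 KiB page:
    # mpu_region_ok(0x20001000, 0x1000, 3) -> True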
> It can be safely assumed that this baseband is highly insecure. It is closed source and probably not audited at all. My understanding is that the genesis of modern baseband firmware is a development effort for GSM basebands dating back to the 1990s during which the importance of secure software development practices were not apparent. In other words, and my understanding is that this is borne out by research, this firmware tends to be extremely insecure and probably has numerous remote code execution vulnerabilities.
I've read in several places that basebands now widely use the OKL4 microvisor, based on the formally verified (fwiw) seL4 microkernel, and are much more secure than before. Does anyone know more about this?
However, I'm thinking a different approach can be taken: suppose we abstract the different ways a device has to communicate, treat them as sockets or layers, and then create an algorithm that distributes the communication across several channels (a rough sketch follows the list below).
For example, two cellphones could communicate via:
- one against the other, using the light of one screen against the camera of the other
- the vibrator motor captured by the microphone
- introducing a certain pattern of noise into the Bluetooth communication via the other radios
- communication through steganography
- sending huge amounts of information (the more information, the more power needed to discriminate and understand it)
- abusing how things are not supposed to work (instead of sending packets in the correct order, use ping as a way of sending data by crafting the requests)
- a custom network stack
- a customized version of encryption with extra-large keys (think 20480 bits instead of 2048)
- and last but not least, an algorithm that mutates the distribution logic based on some time-dependent algorithm (in the same fashion viruses mutate themselves), using, for example, your voice as the key
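Just to make that last idea concrete, here is a rough sketch of deriving a time-mutating distribution schedule from a shared key and striping the payload across channels. The SHA-256 schedule and all names are my own illustration, and note that this only spreads traffic around; it is no substitute for real encryption:

    import hashlib

    def channel_schedule(key, epoch, n_channels, length):
        # Deterministic, epoch-dependent channel index per byte; both ends
        # recompute it from the shared key to reassemble the stream.
        seed = hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()
        return [seed[i % len(seed)] % n_channels for i in range(length)]

    def stripe(message, key, epoch, n_channels):
        schedule = channel_schedule(key, epoch, n_channels, len(message))
        parts = [bytearray() for _ in range(n_channels)]
        for byte, ch in zip(message, schedule):
            parts[ch].append(byte)
        return [bytes(p) for p in parts]  # one blob per channel (screen, vibration, ...)

    # stripe(b"hello world", b"shared-secret", epoch=42, n_channels=3)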
In 2016 that's just the price you pay for using computers. You just have to live with it. Mitigate it the best you can, rely on the ol' "mossad or not mossad" strategy now and then, hope for the best, etc. If you have a strong need for increased security, well, god help you (spoilers: you will receive no help), you're going to pump a lot of effort into building something that will still have tons of vulnerabilities.
1. Software. The phones run complex, low-assurance software in unsafe languages on an inherently insecure architecture. A stream of attacks and leaks came out of these. The model for high assurance was either physical separation with a trusted chip mediating, or separation kernels + user-mode virtualization of Android, etc., so security-critical stuff ran outside that. There was strong mediation of inter-partition communications.
2. Firmware of any chip in the system, esp boot firmware. These were privileged, often thrown together even more, and might survive reinstall of other components.
3. Baseband standards. Security engineer Clive Robinson has detailed many times on Schneier's blog the long history between intelligence services (mainly British) and carriers, with the former wielding influence on standards. Some aspects of cellular stacks were straight-up designed to facilitate their activities. On top of that, the baseband would have to be certified against such requirements, and this allowed extra leverage, given the lost sales if there was no certification.
4. Baseband software. This is the one you hear about most. They hack baseband software, then hack your phone with it.
5. Baseband hardware. One can disguise a flaw here as debugging stuff left over or whatever. Additionally, baseband has RF capabilities that we predicted could be used in TEMPEST-style attacks on other chips. Not sure if that has happened yet.
6. Main SOC is complex without much security. It might be subverted or attacked. With subversion, it might just be a low-quality counterfeit. Additionally, MMU or IOMMU might fail due to errata. Old MULTICS evaluation showed sometimes one can just keep accessing stuff all day waiting for a logic or timing-related failure to allow access. They got in. More complex stuff might have similar weaknesses. I know Intel does and fights efforts to get specifics.
7. Mixed-signal design ends up in a lot of modern stuff, including mobile SOC's. Another hardware guru that taught me ASIC issues said he'd split his security functions (or trade secrets) between digital and analog so the analog effects were critical for operation. Slowed reverse engineering because their digital customers didn't even see the analog circuits with digital tools nor could understand them. He regularly encountered malicious or at least deceptive behavior in 3rd party I.P. that similarly used mixed-signal tricks. I've speculated before on putting a backdoor in the analog circuits modulating the power that enhances power analysis attacks. Lots of potential for mixed-signal attacks that are little explored.
8. Peripheral hardware is subverted, counterfeit, or has similar problems as above. Look at a smartphone breakdown sometime to be amazed at how many chips are in it. Analog circuitry and RF schemes as well.
9. EMSEC. The phone itself is often an antenna from my understanding. There's passive and active EMSEC attacks that can extract keys, etc. Now, you might say "Might as well record audio if they're that close." Nah, they get the master secret and they have everything in many designs. EMSEC issues here were serious in the past: old STU-III's were considered compromised (master leaked) if certain cellphones got within like 20 ft of them because cell signals forced secrets to leak. Can't know how much of this problem has gotten better or worse with modern designs.
10. Remote update. If your stack supports it, then this is an obvious attack vector if carrier is malicious or compelled to be.
11. Apps themselves if store review, permission model, and/or architecture is weak. Debatable how so except for architecture: definitely weak. Again, better designs in niche markets used separation kernels with apps split between untrusted stuff (incl GUI) in OS and security part outside OS. Would require extra infrastructure and tooling for mainstream stuff, though, plus adoption by providers. I'm not really seeing either in mainstream providers. ;)
That's just off the top of my head from prior work trying to secure mobile or in hardware. My mobile solution, developed quite some time ago, fit in a suitcase due to the physical separation and interface requirements. My last attempt to put it in a phone still needed a trusted keyboard & enough chips that I designed (not implemented) it based on Nokia 9000 Communicator. Something w/ modern functions, form-factor, and deals with above? Good luck...
All smartphones are insecure. Even the secure ones. I've seen good ideas and proposals, but no secure[ish] design is implemented outside maybe Type 1 stuff like the Sectera Edge. Even it cheats, as far as I can tell, with physical separation and robust firmware. It's also huge, thanks to EMSEC & milspec. A secure phone will look more like that or the Nokia. You see a slim little Blackphone, iPhone, or whatever offered to you? Point at a random stranger and suggest they might be the sucker the sales rep was looking for.
Don't trust any of them. Ditch your mobile or make sure the battery is removable. Don't have anything mobile-enabled in your PC. Just avoid wireless in general unless it's infrared. Even then it needs to be off by default.
Over the past year, I've noticed a HUGE uptick in the quantity of fake 5-star reviews. They are so blatant it's frightening, and they usually go unnoticed in Amazon's default "Most Helpful" sorting.
In particular, the Home Office Desk Chairs landscape is pretty insane: http://www.amazon.com/Home-Office-Desk-Chairs-Furniture/b?ie.... I was trying to find a chair back in September, and I was appalled by some of the reviews I was seeing. Top selling products, with several hundred reviews that averaged out to 4/4.5/5 stars.
This is a screenshot from back in September: http://imgur.com/qbCz0yE, and it only contains a small sample of the "Awesome, highly recommend" reviews spattered around. You'll notice this pattern on virtually every chair on Amazon, except the AmazonBasics chairs, which were launched sometime in late September / early October. Their reviews seem pretty good so far (i.e. real), but unfortunately for me I had purchased a chair from eBay before these launched.
These patterns are pretty frightening, especially considering that a lot of people are actually buying these things, and that I've experienced the same issues when shopping for other things.
Has anyone else had an experience like this? Or am I losing it?
I still have a stack of paperwork from them back when I was considering shipping some items by container from Japan.
Amazon getting into international freight logistics is big news.
Just from the title I had guessed that they would own ships as a hedge against rises in freight costs, just as airlines buy oil stocks etc.
Is anyone aware of any analysis of Amazon's recent moves in aircraft, drones, and so on as hedge vs integration vs disruption?
Amazon needs to move product to make money. If others will not physically move that product to the US, then they'll have to do it themselves. More expensive is better than none at all.
Amazon was changing up its rules on shipping from China a few months ago; now I can see why they did it.
It's unlikely that an importer or exporter is going to use Amazon for freight forwarding outside its network. I guess Amazon will only end up handling shipments of its own, or of network sellers who rely significantly on Amazon to sell their products, likely through LCL consolidation: putting a lot of smaller shipments into a single 40HC container. That's going to save them a lot of cost, as well as give them tighter control over shipment routing, transit time, etc.
Other regular importers & exporters, whether in China or the USA, are never going to use it, for the reason mentioned above:
An importer or exporter shares too much sensitive and critical information with his freight and customs agent. If I suddenly get involved in trading, or in other words become their competitor, it's obvious no one is going to share that information with me.
Amazon obviously has too much cash lying around.
So be good and stop using Amazon!
Curmudgeonly security issues aside, this undeniably feels like The Future and a big deal to watch out for. It's also one of those cases where a creator / maintainer makes a huge difference for long-term viability, in my opinion. Feross is crazy smart and has been working with all the related tech for a while now (via PeerCDN, Instant.io, etc, etc), and is just an all-around respectful, nice guy, which is important for the continued development / community aspect.
We're really at the mercy of open platform-minded engineers at Google, Apple and Microsoft though! I wonder what we can do to help support those folks.
Unfortunately, after a certain file size it'll just crash your browser. It'd be great if there were a way to work with large (2GB+) files.
Another question: how do I open the file once downloaded? (I use uBlock; should the file be displayed in the rectangular area next to the graph?)
Cease-and-desist letters come with a 200-1000 fee depending on the content, and now it's trivial to make someone download stuff illegally in the background.
Correct me if I'm wrong, but this poses a problem if you ever want to take WebRTC further (i.e. in a self-hosted mesh network).
My idea was a browser plugin for YouTube that would take the downloaded video and start seeding it. On the other side, if a video has been blocked by YT, it would automatically use the torrent version.
- No support even in modern browsers by default 
- Don't want to [maybe] get into legal troubles if it's wrongly used
PS, apparently the caniuse info was wrong, since now it appears in green
Funny, Fx44 does support WebRTC
You can tag or organize the data locally and cache it, or return it sorted to the nodes which serve it to others. People don't give a shit about webpages for search, they care about information. The web is a big rss feed, and our old feedreader "google" stopped doing that well, and also we pay a massive privacy tax for that now.
I see this happening in ~2 years for really techie people and being standard in 5.
edit: Elasticsearch, WebKit, real time, distributed file systems, Apache Spark, Google TensorFlow. These ingredients will be used to make the new browser, which browses information and returns that information, not the actual web pages.
- Where is the downloaded data being stored? With a traditional BitTorrent client the data is written to disk. Since JS doesn't make raw disk access available, I'm assuming it's being kept track of through some JS API that tells the browser to store this data. What API is it using?
- Even when I finish downloading the video, the player doesn't allow me to seek to random positions in the video. It displays a "this is how much is buffered" bar that is way smaller than the green bar at the top of the page indicating download progress. Why is this the case?
- As you can see in the screenshot, there's lots of nodes that are labeled with ip addresses that are not visible to my computer at all. Is this because the displayed ip addresses are self reported?
 - http://nacr.us/media/pics/screenshots/screenshot--17-46-37-2...
2. Looked at network traffic, and it seems to open a separate TLS session per transferred data packet; not the most optimal thing to do, might be an artefact of being hosted on HTTPS. Probably a CPU bottleneck right there.
3. Doesn't store it anywhere (local/session storage).
It complains it cannot play the file because I don't have Chrome with MediaSource. Why not serve an Ogg or WebM, for crying out loud?
Also, why auto-start the download?!
After the download is finished, where can I watch the video? There's no link for watching it anywhere.
If I refresh the page the download starts again.
I realise this is just an experiment and kudos for that, but the author could have made some better choices re above.
I think the problem is that they stopped caring about their user base and, therefore, became less ubiquitous and people saw that and were like "oh.. time to value them lower.."
My opinion is that the tech sector has greatly expanded since 2000, not just in amount of investment available, but also types of business tech companies are actually in.
So maybe there is a web/ad bubble and it might pop, but how much that affects individual company is more nuanced.
Unsophisticated investors might still lump Google/Twitter/Tesla/AMD into the same "tech sector", but they will find the signal they receive is very mixed and difficult to analyze.
Not surprised one bit. I echo the sentiment of other commenters who give other examples.
This is only surprising if you are also out of touch with reality.
Now they're trading for 2.x times sales for fiscal 2015. They have 1/3 of their market cap in cash. And they're accumulating cash from operations rather than burning it. The value proposition on the stock is dramatically higher than it was when the hype was on the moon.
Operating cash flow is a mere 2% of the current market cap. This defines shambles, unless it can really turn its fortunes around in '16-17.
Box: from $24 in Jan 2015 to $10
GoPro: from $87 in Oct 2014 to $11
Groupon: from $26 in Nov 2011 to $2.60
GrubHub: from $46 in Apr 2015 to $21
Twitter: from $70 in Jan 2014 to $18
Yelp: from $97 in Mar 2014 to $21
Zillow: from $121 in Feb 2015 to $22
Zynga: from $15 in Mar 2012 to $2
I think it's more correct to say that the extremely high valuations to start with are predicated on the idea that these companies can expand well beyond their initial niche. They're all hoping to be like Facebook, who went from college kids to the whole world. When it starts to become apparent that this won't happen, valuations come down to reflect a niche product.
You may say that Twitter has been beaten with a bat by Wall St, but they are still worth 12 billion dollars. Someone hit me with that bat.
I saw this happen at Skype, where I worked for a couple of years. The company succeeded because of P2P: we grew with little infrastructure to reach 200M+ people. P2P became our DNA, rooted deep within (almost) every core component.
Then came the new wave of mobile messaging apps. We reacted... with a P2P messaging solution. It was obvious this wasn't working - you sent a message to someone from Skype for iPhone, and they got it... sometime.
We knew that to have a chance against WhatsApp and the other messaging apps we needed server-based messaging, so we built it.
It took 3 years. Yes, it took this long to get rid of the P2P code from just the messaging components from the 20+ Skype products - we had 1,000+ engineers and 50+ internal teams by the end which significantly slowed things down. When we were done and popped the champagne - no one really cared.
And yes, the source code is still full of P2P references and workarounds to this date.
If anything, large companies often miss out on new trends and changes in business and technology, but it's not solely because building that one new layer "up the stack" is so technically hard or different.
It is important to instead concede that you don't know the needs of the consumers at the higher level, and if you think you do, it is because you are guessing. The only way to avoid the problem is to not attempt to move into the higher level, at least not intentionally and not through business priorities.
This is extremely counter-intuitive because there are generally fewer expenses and greater market frequency at each higher level, which means superior revenue potential. Businesses exist to make money and to ignore moving up to the higher level means denying this potential (vast) revenue source.
This doesn't mean you can't move into the higher level of the stack and be really good at it. It just means you cannot do so both intentionally and as a business objective.
The solution is to double down on where you already are, with what you are already good at, and focus on the product quality of your existing products. Continue to improve where you are already good. Improvements and enhancements to existing products can gradually yield the next level, as the improvements progressively open new potential in an evolutionary fashion. While getting to the next level this way is much slower, it is also risk-reduced, and continues to associate your brand with quality and consumer satisfaction.
This will only work, though, if the goal is improving the current product and not acquiring revenue in that desired higher level. Think evolution and not revolution. It has to be a gradual, almost accidental, increase of capability based on meeting current consumer needs.
Apple's networked services have often struggled. But are they really higher level than the things Apple succeeds at? Asking whether enormous distributed data stores are higher level than Mail.app just seems confused. It's different, and it brings new challenges, but are they part of the same stack? And is the data ingestion and sanitizing that Maps struggled with higher or lower level than the client that was basically ok? You can multiply these questions and I'm not sure you can get good answers.
I don't think that manufacturing semiconductors is comparable to building maps. Apple should have done a better job with maps, and even though they do complex manufacturing, you would have expected them to do worse at chip manufacturing than they did.
Iirc they brought in 3rd parties to help with the chip fab, and certainly spent more money building that core competency than maps.
I believe the author is correct that the issue is companies not fully understanding, and consequently underestimating, what it takes to be successful in a different arena outside their core competency.
Google sees people as rows in a database. They don't understand people at all, they don't understand design as it relates to people, and they didn't understand that nobody needed another social network.
They probably underinvested (initially) in G+ and it was not a great product. It didn't achieve critical mass quickly, and thus never had a chance of growing as a social platform.
However, Google is a lot more capable of creating something like this because they have all the core competencies down.
I guess my takeaway is that companies can in fact take these arenas, but they underestimate the challenge. So, to use a drug-dealing analogy, they try to start moving bricks and kilos instead of working their way up, learning the market pushing dimes and quarters.
They start too big, and when you fail big, you don't get the recovery of a smaller failure, which affords small relaunches and features.
Tl;dr: big companies try to enter at the top, can't recover from huge public failures, and either exit or buy in.
Current usage treats the database as a loose, ad-hoc, difficult-to-maintain, polling-based API between multiple applications.
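In other words, something like this sketch, where one application treats another application's tables as its integration API (table/column names, handler, and interval are all hypothetical):

    import sqlite3
    import time

    # The anti-pattern: application B polls a table that application A
    # writes to, treating A's private schema as an unversioned API.
    def poll_for_new_rows(db_path, handle, last_seen_id=0):
        conn = sqlite3.connect(db_path)
        while True:
            rows = conn.execute(
                "SELECT id, payload FROM orders WHERE id > ? ORDER BY id",
                (last_seen_id,),
            ).fetchall()
            for row_id, payload in rows:
                handle(payload)      # hypothetical downstream handler
                last_seen_id = row_id
            time.sleep(5)            # burn cycles waiting on someone else's schema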
The future perspective looks back on our time, shaking its head at the way people use databases for everything in the same way that we shake our heads at bloodletting.
Oracle's business model is (1) convincing people to use platforms they shouldn't be using and then (2) selling the victims ongoing hacks and services to work around the limitations of the model.
Amazon's software services won't be built on a database. They'll be built using a decentralised messaging platform.
Apple is a fantastically successful software and industrial design company. The vast majority of their production is outsourced. This is not vertical integration.
Additionally, actually, Apple has tremendous amounts of hugely successful and popular software.
Though I dig the underlying point of this article, that product management is hard, I think the examples are less than good.
Well for one thing we know that Intel spends several $billion to open a new semiconductor plant and has a dozen of them already. https://en.wikipedia.org/wiki/List_of_Intel_manufacturing_si...
Whereas SAP is, well, a lot of software. Which is something, but Intel needs to make a lot of software too, and chip designs are in some ways a specialized form of software.
So I think in some sense Intel is strictly more challenging to replicate than SAP. (But this is probably just my misunderestimation talking. :-)
Wasn't IBM a classic case of not trying to build the layer above them on the stack?
The Wikipedia page on IBM PC DOS even claims that their "radical break from company tradition of in-house development was one of the key decisions that made the IBM PC an industry standard".
"What the article is referring to as stack fallacy is the work of Physics Nobel Laureate Philip Anderson: https://web2.ph.utexas.edu/~wktse/Welcome_files/More_Is_Diff...
Let's give credit where it due please."
Because even the author references competency-based views of competitive advantage, but for some reason ignores resource-based views, and ignores the fact that companies might be aware of their competences. That is to say, I'm sure that large companies tend mostly to be aware of what their competences are, based on the resources and knowledge that they have. If they don't have marketing departments that have analyzed the ERP market, sales teams with ERP training, or tech departments with key HR and key knowledge, etc., then I'm certain they are very well aware of this.
Maybe some companies have had marketing missteps and made poor strategic and competitive decisions, but I really doubt that it's due to a lack of introspection or the simple analysis described.
Also, IBM didn't "think nothing much" of the software layer. They misunderstood the nature of power in the supply chain, and most importantly, didn't solidify their position within the supply chain while they were dominant.
THIS! +1000! I would even leave out "often", or at least replace it with "usually".
A related factor is that larger companies tend to be more specialized (formalized processes, specialists, focused teams/departments, and so on), meaning they can be prematurely optimized with respect to new goals and poorly equipped to conduct the necessary roaming.
And in what order.
His basic logic was that:
- Success depends on processes.
- Processes, even though they might be thought of as abstract, are in reality a function of the people at the top.
- A company gets successful because some bright guy is the rebel; he questions the status quo, persists, and succeeds.
- As time goes by, the rebellious ideas actually become conservative ideas. The rebel is now on top. As his ideas fade, he struggles to stay on top.
- He recruits people who see the world through him; he builds processes that enforce that vision.
- This makes it difficult for the truth to be visible to the top management.
- By the time failure is visible, it is hard to turn the ship around.
- IN SHORT: companies/nations fail because someone at the top did not know when to quit.
- In the end, that rebel turned conservative becomes bitter. He thinks the world owed him something for what he achieved.
He explained with USSR examples: how a genetic scientist got promoted because his fake research reinforced something Stalin had said long before, and his peers were scared to point out the fact because it might be perceived as anti-Stalin.
I observed BlackBerry very closely, and this resonated with me so much. The founders at one point blamed people for using the iPhone and not the BlackBerry.
The best companies in the world seem to be those where the top leaders quit at their peak to make way for their successors.
Here let me make an article... wait wait... ah... "Big Companies FAIL" that sounds like nice click bait. Now... hm, let's invent some stupid word to pad it out how about the 'Stack Fallacy'. Programmers will dig the 'stack' part. Yeah. Ship it!
Seriously, this article is content free.
People make products. Sometimes they work... sometimes they fail.
If you pretend you have some magical insight into why they fail or succeed, with gems of wisdom like:
> found it very difficult to succeed in what looks like a trivial-to-build app, social networks.
> The stack fallacy provides insights into why companies keep failing at the obvious things, things so close to their reach that they can surely build. The answer may be that the what is 100 times more important than the how.
Really? What you build is important?
Why is this the top of the list this morning?
As with the tearing down of the Berlin Wall, opening up countries to the world's economy and ideas is the first step towards democracy.
The implications of this are tremendous (not in order of importance):
1) Oil prices will continue to fall as Iran is able to supply the global markets. Many oil states rely on money from natural resources to preserve monarchies. Money for freedom only works so long as the money keeps flowing.
2) Our (US) reliance on Saudi Arabia will diminish as there are now two powers in the region to work with. Having strong relations with both Shiite and Sunni powers in the Middle East will likely reduce sectarian violence. We're light years from being out of the woods, but this is big step in the right direction.
3) The Iranian people will gain access to the world economy. From a human rights perspective, they are the biggest winners here. As with Sunni/Shiite relations, there's no doubt a long way to go (the Ayatollah is a tyrant), but you gotta celebrate the wins when you can.
4) De-escalation of our conflict with Iran. We saw it with Iraq, Vietnam, and Korea. Invasion + nation building is sexy, but highly ineffective. Having one less nuclear power that calls for our destruction is certainly a nice to have.
5) Shows Americans that diplomacy can work. Iranians don't hate Americans, they hate what America represents. To them, we represent a superpower that gives little to no thought to anyone else's sovereignty. We overthrew their democratically elected leader and backed the Shah, which got us into this mess. Diplomacy is far less sexy and easily criticized, but it was a huge part of getting this deal done.
Note: Many of these are over-simplified. Nonetheless, this is a pretty big deal and a cause for celebration.
- Iran gets access to global markets, and in time tourism (there's an incredible number of amazingly beautiful things in Iran for tourists to see, from ancient to modern ski resorts)
- Iranian oil will keep prices in the toilet. This is basically a way for the U.S. to punish Saudi Arabia for decades of support for various maleficent actors, except it doesn't involve an invasion, a takeover, or anything else beyond economic sabotage. The Saudis have also had decades to form a more diverse economy, and for various reasons haven't managed to do it; this has kept them vulnerable to this kind of action, and it helps free the major users of Saudi oil from "vendor lock-in".
- It demonstrates that cool, calm, collected diplomacy can actually work. However, many people will forget that the U.S. and Iran have been fighting a proxy war for decades. It hasn't been a hot war, but Stuxnet, various revolutionary movements, and so on have been parts of that war. This isn't just Iran throwing in the towel because the sanctions finally worked; it's because all of the other major leverage points Iran could muster were defeated.
- While the sanctions by themselves failed to work, they helped create a political climate inside of Iran that favored this outcome instead of having another go at saber rattling.
- This helps provide a mildly more palatable "friend" in the region than Pakistan
Economics:
1 - The price of oil has been declining since the IranDeal was signed. In the US alone, the annual savings as a result of cheaper gas and cheaper food (food production costs are strongly tied to gas prices) is about $500B/year (from $4/gallon to $2/gallon), basically providing an additional $500B in spending money in the US.
2 - Globally, the lower price of food and gas can potentially provide additional spending money.
3 - Iran has crumbling infrastructure and needs numerous foreign contractors to rebuild (Europeans and Chinese have already signed up). Sadly, US companies will not be able to participate.
4 - Iran has potential as a large consumer market.
Geopolitically:
1 - The US will finally have a second option (let's call it a second front) in the Middle East. We in the US have been turning a blind eye toward the Saudis and their indirect financing of ISIS and all types of jihadist fighters in the region, from Libya to Syria, Africa, Afghanistan, etc.
2 - Iran will be tapped to help stabilize Afghanistan, Iraq, and Syria (there are already talks of providing an exit for Assad).
3 - Iran's gas can provide a hedge (or at least the fear of an Iranian pipeline) against Russia. This probably won't happen, as Iran relies more on Russia than on Europe and will probably maintain that role, but it's a possibility.
4 - An open Iran is forced to integrate further globally. This has always been the fear of the hardliners, and it'll be resisted by some within, but there seems to be an understanding that stopping progress is a futile task.
5 - Iran has one of the highest rates of women in higher education (close to 70% of college students are women); it could eventually serve as a model for other Muslim countries in the region to emulate.
6 - It will force the Saudis to change. The Saudis are extremely worried about the IranDeal, but their biggest existential threat is not military, it is cultural: from 70% women among college students to a hybrid system of government, Iran's socioeconomic role will put internal pressure on the Saudi rulers and force them to make uncomfortable changes or face internal turmoil.
> Iran's support for terrorism, its imprisonment of dissidents and even some Americans, its meddling in Iraq and Syria and its arms trade.
Funny that Saudi Arabia is guilty of all of the above, if not 10X worse, but not a single word from those Republicans. This speaks volumes about the power of the Saudi lobby in the US political system and how their wealth can influence decisions and policies in the US.
I hope this trend continues and Iran comes back to the international scene. It's good for everyone. Iran is very similar to Israel: most people are normal, but there is a small percentage of extremists who have a lot of voice. Luckily for Israel, they have a better constitution and governance model.
Now I'll sit back and wait for the "but... but... Iran said they would wipe Israel off the map!!!1!" crowd.
IMO, it's a classic case of nut-jobs on both sides of a border causing pain for a wide range of people.
Really gives some perspective on an oft-misunderstood place.
It is time to embolden the moderates and reformers in Iran! When you read about the details of Iranian society, it is very clear that they have a huge amount of potential. Regular Iranians are the most positive toward the West in the region. Religion is in strong decline there. They have a lot of real industry: they are big car manufacturers, for example. They have more scientific output than the whole Arab world combined. Their strain of Islam is not as extremist as the one found among Gulf states like Qatar, Saudi Arabia, Yemen, etc.
We've got to give the Iranians reasons to believe that playing well with the West will give them a lot more benefits than antagonizing Israel.
I support Israel's existence, but I really wish they had a more moderate and constructive leader than Bibi. He really comes across as a deranged conspiracy theorist. To make real progress we really need to get Iran and Israel to make peace.
"A senior American official said Saturday that Iran will be able to access about $50 billion of a reported $100 billion in holdings abroad, although others have used higher estimates."
At the time I just dismissed this as some tinfoil hat developer adding some nonsensical warnings to the firmware, but in retrospect, after reading this article - this matches perfectly, chances are - phone was indeed detecting Stingrays. Still no idea how it managed to do it.
EDIT: I had no data/IP connection of any kind at and around the time of seeing this, so this is clearly unrelated to TLS interception.
That the Met (and other police forces) regularly use IMSI catchers is not new information - here's a ~5 year old Guardian article on the subject:
Given that these things are cheap, would any fellow Brits be interested in clubbing together, acquiring one, and installing it in or around Parliament? I'm sure there would be plenty of buyers for the call records of MPs.
Report in English: http://www.aftenposten.no/nyheter/iriks/New-report-Clear-sig...
The UK's level of surveillance is extremely unsettling to me, and quite frankly I think a lot of Americans have forgotten all the reasons why the UK might not be as good an ally as everyone thinks, going back to the '47 USUK agreement. The point being that I really hope our politicians don't start adopting that level of surveillance just because they do it too.
It seems we have quietly been in a surveillance arms race, which isn't good for the population at all.
Is this the same technology?
In the US, the constitution expressly prohibits it: that your property can be seized without due process is complete and utter garbage.
By no means am I a right-wing/vigilante militia supporter, but this type of behavior from the police makes me support having a heavily armed citizenry.
Whenever I see police, I have the same fight or flight response as if I'd see someone in a dark alley. The police have become dangerous, and none of my friends trust them. They would be the last people I'd call if there was something happening. Too much of a risk they would beat you, kill you, or rob you.
Who does that sound like?
However, it did happen to be money he had made selling drugs several years prior. They had identified him as a convicted felon with a drug related offense and connected the money to it.
It also raises the age-old question of "who polices the police?" The (federal) DOJ can only do so much, it seems. But maybe ordinary citizens can demand reform if injustice stares them in the face?
From what I've read in the press (especially the NYTimes), the USA justice system seems to fundamentally disadvantage poor people. The saddest part about the civil forfeiture business is that it probably affects the poorest people, who then have the least resources to challenge it.
On a separate note, I know the press is more likely to publish instances of injustice vs run-of-the-mill "just" justice. I honestly have no concept of if we live in a society with a tiny bit of corruption, or a lot more than I ever realized.
Was this guy just not familiar with air travel? Or is it less likely that money will be seized from a checked bag than from a carry-on? It's just absurd to me to put that much value into a checked bag, especially in the form of cash.
I'm sure there are some lawyers out there who might say otherwise. They will tell you that "this" is how the constitution is interpreted. But the constitution, particularly the Bill of Rights, was written so that you could understand it, regardless of what politicians and lawyers say.
Except municipalities explicitly write expected seizures into their annual budgets. They literally must seize property to make their planned budget.
Yes, there have long been news articles on this civil forfeiture scam. No doubt it happens. But I have to suspect that it doesn't happen very often to innocent people. Why suspect that? Because there would be more screaming, political debates, SCOTUS cases, etc.
In a sense, that there can be such a scam is not too surprising: That is, as we know well, generally, "The price of liberty is eternal vigilance". So, we can expect attacks on our Constitutional rights, and, to get our rights back or just maintain them, we have to fight, continually. That is, there are plenty of people who will take our rights unless we do fight back. So, right along, there needs to be fighting back.
Where is the ACLU in all of this? What about other groups interested in keeping government under control?
We can fight back by bringing law suits and by voting.
But, in particular, and in practical terms, in a local community, likely it can be enough to be known and respected in the community, active in politics, well known to the local politicians, and to have a little chat with them. A respected local citizen will likely not get pushed around by the police. It's a little like high school -- it helps to fit in at least a little.
Broadly, an immediate, expedient, practical solution is: in public, don't carry a lot of cash. If you have a lot of cash in your house, then, in case your house gets searched, have that cash well hidden. If you have a business that gets paid in cash, say some tens of thousands a month, maybe make a daily deposit to your business account. Then get well known at your bank as a successful local business person who does get revenue in cash; hopefully your bank then won't file papers saying that you are suspicious. E.g., generally in business, a banker wants good business customers, and a business person wants respect from their banker.
Commonly, in a small community, the police know a lot of the people. That can help: say you drive to the post office at 3 AM to mail a letter; the local police will just remember who you are and relax.
It might help to make a donation to some local police charity drive and, there, shake hands with the local chief of police. Maybe can get a window sticker for your car indicating that you are such a supporter.
Likely if there are enough legal cases where citizens bring suit against civil forfeiture, the practice will reach the SCOTUS and get struck down. Of course, legal cases are very expensive, but there are a lot of law school graduates without much to do; with enough forfeiture cases, some of those lawyers would take such cases and change the situation.
For long-distance travel, there is an old saying: "a stranger in a strange land". It has long been recognized that being such a stranger is out of the ordinary and has some dangers. E.g., don't carry much cash or anything very valuable. E.g., if you want to carry $15,000 to buy a used car, just go to a bank and get a certified check for that amount and hide it somewhere: maybe in a book, folded up between two credit cards in your wallet, or just mail it to yourself at your destination. Some of the police might say that anyone who didn't use such a technique is suspicious.
I think what's going on is availability bias. This is where dramatic or sensationalized dangers get overplayed in one's mind instead of paying attention to the actual probabilities. It's basically like shark bites, airplane crashes, or terrorist attacks. The media tends to sensationalize these risks because there is money in it, and people tend to over-fear them despite their actual statistical probability. Don't get me wrong, something should be done about crooked cops. But it's also important to distinguish real risk from the types of risks that sell newspapers.
the whole system needs disruption, that's what we do.
I've been to the UK many times, and being able to effectively walk up any escalator, thanks to the diligence of the people, has always impressed me. Coming from a country that doesn't have such respect for basic rules, the new rule feels just wrong to me, despite the gain in average efficiency.
* Outside peak hours: stand on the right part of the escalator and leave the left part for walking.
* At peak hours: stand in both sides of the escalator.
* At extreme peak hours: if most of the flux goes upwards, stop the downward escalator and walk up it (this is actually illegal).
I think this result is due to it only being tried on one of three up escalators. On the assumption that there are 6 lanes, devoting 1/3 to walkers and 2/3 to standers can lead to greater efficiency if that more closely matches the actual preference distribution. By maintaining choice, and matching the available options to those desired by passengers, one can optimize the results both for those who prefer speed and for those who prefer not expending energy.
I usually camp at the bottom, waiting for the crowd to disperse, then grab the powerup and use it efficiently.
Is this new? When I visited Hong Kong in 2011, you had to stand on the right and walk on the left. In fact, this was one of the things that caused a great deal of anger towards "mainlanders", tourists from mainland China who ignored such social conventions.
EDIT: I just googled it and apparently the "no walking on the escalators" rule in Hong Kong is only a few months old:
If the escalators are at capacity then the right thing is to build more. But that costs money, so instead we get "cheats" like this.
I wonder if they have tried to simply explain to commuters why it makes sense to stand on both sides. Treating people like intelligent and responsible adults often works much better than treating them like intellectually disabled children.
The British drive on the left side of the road, so the more natural tradition would be to stand on the left side of the escalator and walk on the right.
Never occurred to me that the occasional lane-blocker was an accidental optimization.
Is this some sort of British slang thing, or did somebody make enormous strides in hologram technology when I wasn't looking?
As a kid I was taught that walking up or down an escalator was rude, could cause injury, and defeats the purpose of the escalator. I think I was punished for doing it. When I went to DC, people were asking me to move out of the way so they could run up the escalators. I thought they were just being rude, but then I noticed lots of people doing this.
I don't remember there being any signs or anything anywhere explaining this. It just emerged as part of their culture. Very interesting. I will definitely try running up an escalator the next time I see one.
Must be almost touching the escalator's "Design Capacity"!
Reading a Guardian article on that feels, uh, really redundant? I guess we Russians have some experience in dealing with mobs.
So if you let people stand on the right and walk on the left, you're getting higher density and throughput, since you now have one person standing per stair PLUS people walking on the left. It's like turning a 1-lane into a 2-lane street.
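A toy model of that claim (all numbers below are made up for illustration). Whether the walking lane adds or costs capacity turns out to hinge on how densely walkers pack: with the gaps walkers typically leave, standing on both sides can come out ahead, which is the article's point; with walkers packed as tightly as standers, the mixed mode wins:

    STEP_DEPTH = 0.4   # metres per step (illustrative)
    BELT_SPEED = 0.75  # m/s, a typical escalator speed (illustrative)
    WALK_SPEED = 0.5   # m/s that walkers add on top of the belt
    WALK_GAP = 2       # steps walkers leave empty, on average (illustrative)

    def lane_flow(extra_speed=0.0, gap_steps=1):
        # people per second passing a fixed point in one lane
        return (BELT_SPEED + extra_speed) / (STEP_DEPTH * gap_steps)

    both_standing = 2 * lane_flow()                                  # ~3.75 people/s
    stand_plus_walk = lane_flow() + lane_flow(WALK_SPEED, WALK_GAP)  # ~3.44 people/s
    print(both_standing, stand_plus_walk)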
On long escalators, mark some of the stairs in a particular color at regularly spaced intervals. Those are the designated "rest stairs".
Someone walking up who changes their mind (temporarily or for the rest of the ride) just finds a rest stair and moves over to the standing spot to let those behind pass.
Rest stairs can be spaced reasonably sparsely so as not to cut into capacity too much or annoy people, and would only be featured on the long escalators where this is a problem.
Maybe some people give up on the idea of walking up the long escalators in rush hour because they don't want to hinder someone who is faster.
A treadmill running at 10 m/s with pairs of people spaced at 0.5 m has a capacity of 40 people/second, or 2400 people per minute, or one large train every 30 seconds. The average speed is higher and you no longer need to wait for a train. The only very small problem is the prohibitive cost with existing tech.
All this is comparable to people crossing the street against a red light and afterwards walking more slowly than me, who crossed on green.
Having another row of people on the left means the overall capacity increases and everyone moves faster but I always walk up the stairs and this won't benefit me one bit. Everyone else wins.
Standing when I'm in a rush can never be faster than walking up the escalator.
lol I can see that being taken out of context.
It's surprising to hear that Londoners are just keeping one side clear, with few "takers" to climb up. Are they unfit or something? Respectful, though.
London Transport might have attempted a campaign along those lines. Lowest common denominator can be tiresome at times.
You know what's faster than either walking up stairs or an escalator? Walking up an escalator.
However, as I was changing from the Central line to the Piccadilly line in the morning, like many others I walked through the much quieter 'No Exit' corridor (against the flow of traffic) to avoid/alleviate the congestion in the actual designated exit corridor. I'm not sure if I was counted in their stats as a +1 or -1?
Then they closed my local station anyway, so I switched to the overground. 5 mins more, but so much nicer!
When our driverless cars head underground, they will all be going at the same high speed.
THAT is a very British approach. Planners know that a problem is approaching, a problem created willingly by infrastructure improvements elsewhere. But rather than address that spillover issue with money/time/new bricks, yet another code of behavior is to be enforced. The people are to shoulder the burden yet again. Heaven help the tourist in a hurry who gets an ASBO for not maximizing the carrying capacity of tube escalators. I wait for the day the escalator stops and everyone stands motionless for fear of being ticketed.
See https://www.youtube.com/watch?v=DyL5mAqFJds where shoddy architecture is answered by suggesting that things will be ok so long as only lightweight people enter the building.
And I thought there was an obesity crisis? They've been telling us for years to keep moving, and now here is a government agency telling people to stand motionless? I say encourage people to burn calories by running the escalators in reverse!
Seems like they are just shifting the cost for finding out from themselves to the disrupted travellers. Even if over longer periods of time this proves more efficient, disrupting normal patterns for regular commuters will cause a lot of stress and disorientation; such stress may be a soft cost, but most commutes already suck. I guess when you're a government agency, it's hard to fail your customers in a way that matters to you.
The goal seems to be to reduce the spread of DRM. I'm cool with that. However, I'm not sure that these actions will do anything at all to reduce the usage of DRM. My reasoning is that those who want to use DRM are not going to accept any alternative that is not DRM'd. So for keeping DRM non-standardized to stop the spread of DRM, it would have to prevent others from adopting DRM.
So the ultimate question for me is: who's going to start using DRM that isn't already? I think this set is empty. Standardizing DRM won't close up any more content that wasn't closed before, IMHO.
However, by not standardizing, we lock all sorts of non-mainstream clients out of accessing content. Now that Flash is going to disappear entirely, that means no access to all sorts of content on Linux, unless DRM is standardized.
So I see something to gain, and nothing to lose by standardizing DRM. I'm making assumptions to arrive at that conclusion, but I believe that they're no worse than the ones that Cory Doctorow is making here. It's just weird to see myself diverging from the EFF on this, and on T-mobile, and other things.
Regardless of what W3C decides, Chrome won't drop Netflix support, and Netflix for now seems to be hell-bent on having total legal control over which devices are allowed to play their content.
I wonder how much of a deterrent that is. W3C needs Google/Microsoft/Apple more than they need W3C. I don't think the content producers are even members of W3C. I guess it would be companies that create the encryption plugins, like Adobe, that could theoretically sue people under the DMCA. I just don't see how the W3C could even function without the biggest players at the table.
Similarly, either you will accept backdoored encryption or you will be automatically considered a terrorist and singled out for LE scrutiny.
Also, I don't expect Mozilla to do anything useful, given they went along with it so easily. But then, I haven't expected anything much of Mozilla in a long while.
Those who control products and infrastructure shouldn't be allowed to set up such tools to grant themselves disproportionate power. However, individuals need to realize that such tools could help protect them as well. (Analogy: Camera phones and police misconduct.)
1. A Party may, in formulating or amending its laws and regulations, adopt measures necessary to protect public health and nutrition, and to promote the public interest in sectors of vital importance to their socio-economic and technological development, provided that such measures are consistent with the provisions of this Chapter.
In other words, the TPP overrides any domestic laws protecting public health and nutrition, or socio-economic development.
That's not at all how the TPP works. The treaty doesn't allow foreign governments to "override" local laws, but rather allows for damage claims against the governments themselves if they enact and enforce laws contrary to the agreements in the TPP itself.
I'd really like the TPP annotated by legal experts. Instead, it's annotated by the CTO of Fight For The Future. I'm not sure that's a win.
"... the TPP elevates investor rights over human rights and democracy, threatening an even broader array of public policy decisions than described above. This, unfortunately, is the all-too-predictable result of a secretive negotiating process in which hundreds of corporate advisors had privileged access to negotiating texts, while the public was barred from even reviewing what was being proposed in its name.
The TPP does not deserve your support. Had Fast Track not become law, Congress could work to remove the misguided and detrimental provisions of the TPP, strengthen weak ones and add new provisions designed to ensure that our most vulnerable families and communities do not bear the brunt of the TPP's many risks. Now that Fast Track authority is in place for it, Congress is left with no means of adequately amending the agreement without rejecting it entirely. We respectfully ask that you do just that."
Cloudflare's captchas are nearly impossible to solve, which means that Tor users are effectively blocked from seeing the site. Would you consider using something other than Cloudflare to host the site?
Nobody has time for that. It's nice that they have pared this down to 31 different sections, but my guess is that they are not showing the full agreement here.
It would be much nicer if someone just dumped it all into a single PDF and HTML file.
Edit: Care to leave a comment rationalizing your downmods?
By the same token, the ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world. It is impressive and surprising that these general-purpose, statistical models can learn meaningful relations from text alone, without any richer perception of the world, but this may speak much more about the unexpected ease of the task itself than it does about the capacity of the models. Just as checkers can be won through tree-search, so too can many semantic relations be learned from text statistics. Both produce impressive intelligent-seeming behaviour, but neither necessarily paves the way towards true machine intelligence."
So true, and this is why I don't listen when Elon Musk or Stephen Hawking spread fear about an impending AI disaster. They think that because a neural network can recognize an image like a human can, it's not a huge leap to say it will soon be able to think and act like a human, but in reality this is just not the case.
"This is all well justified, and I have no intention to belittle the current and future impact of deep learning; however, the optimism about the just what these models can achieve in terms of intelligence has been worryingly reminiscent of the 1960s."
From what I've read and seen, the leading people in the field (Yann LeCun, Hinton, etc.) seem to be very aware that the current methods are particularly good for problems dealing with perception but not necessarily reasoning. Likewise, I have not seen many popular news sources such as NYT make any crazy claims about the potential of the technology. I hope, at least, that the people who work in AI are too aware of the hype cycles of the past to get caught up in one again, and so there will not be a repeat of the 60's.
It's not hard to see why NNs are becoming the prime candidate for AGI: their architecture is inspired by biological neurons. We are the only known AGI, so presumably something similar to the brain will be what produces an AGI. NNs at least mimic the massively parallel property of biological neurons. And if we're optimistic, the fact that NNs mimic how vision works in our brain might mean that we are at some point on the continuum of the evolution of brains, and that it's a matter of time until we discover the other ways brains evolved intelligence.
What keeps me optimistic is evolution. At some point brains were stupid, and then they evolved general intelligence. The question is how this happened, and whether there is a shortcut, like inventing the wheel for transportation instead of arms and legs.
I feel like the gist of what current neural nets can do is "pattern recognition". If that's fair, I also suspect that most people underestimate how many problems can be solved by them (e.g. planning and experiment design can be posed as pattern recognition - the difficulty is obtaining enough training data).
It's true that we're most likely a very long way away from general AI - but I'm willing to bet most of us will still be surprised within the next 2 years by just how well some deep-learning based solutions work.
We shouldn't forget that the mind/body split is a wholly artificial construct that has no basis in reality. The brain is not contained in the head. The nerves running down your spine and out to your toes and all over your body are neurons. Exactly the same neurons, and directly connected to the neurons, that make up what we think of as the separate organ 'the brain'. They're stretched out very long, from head to toe, sure, but they are single cells, with the exact same behavior and DNA, and there is no reason to presume that they must have some especially insignificant role in our overall intelligence.
Then there is the fact that it is probably reasonable to presume that a machine which has human-level intelligence will not appear overnight. It would almost necessarily go through long periods of development. During that development, when the machine begins to behave in ways the designers are not able to understand, what will be their reaction? Will they suppose that maybe the machine had intentions they were unaware of, and that it is acting of its own volition? Or will they think the system must be flawed, and seek to eliminate the behavior they didn't expect or understand?
I have a hard time imagining that an AI system will be trained on image classification and one day suddenly say "I am alive" to its authors or users. If it instead performs poorly on the image classification because it is pondering the beauty of a flower in one of the images, what are the chances that nascent quasi-consciousness would be protected and developed? I think none. We only have vague ideas about intelligence and consciousness and our ideas about partial intelligence are utterly theoretical. Has there ever been a person who was 1% intelligent? Is mastering checkers, or learning NLP to exclusion of even proprioception 1% of human intelligence? You optimize for what you measure... and we don't know how to measure the things we're looking for.
Here's the important difference about NNs: they are incredibly general. The same algorithms that can do object recognition can also do language tasks, learn to play chess or Go, control a robot, etc., with only slight modifications to the architecture and otherwise no domain information.
That's a hugely different thing from brute-force game-playing programs. Not only could those not learn the rules of a game from no knowledge, they couldn't even play games with large search spaces like Go. They couldn't do anything other than play games with well-defined rules. They are not general at all.
Current neural networks have limits. But there is no reason to believe that those limits can't be broken as more progress is made.
For example, the author references that neural networks overfit. They can't make predictions when they have little data. They need huge amounts of data to do well.
But this is a problem that has already been solved to some extent. There has been a great deal of work on Bayesian neural networks, which avoid overfitting entirely, including some recent papers on new methods to do them efficiently. And there's the invention of dropout, which is believed to approximate Bayesian methods and is very good at avoiding overfitting.
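For readers who haven't seen it, here is a minimal sketch of (inverted) dropout in plain numpy; the layer shape and drop probability are arbitrary choices for illustration:

```python
import numpy as np

def dropout(activations, p_drop=0.5, training=True):
    # Inverted dropout: during training, randomly zero each unit with
    # probability p_drop and rescale the survivors by 1/(1 - p_drop), so the
    # expected activation is unchanged and test time needs no rescaling.
    if not training:
        return activations
    mask = (np.random.rand(*activations.shape) >= p_drop) / (1.0 - p_drop)
    return activations * mask

h = np.random.randn(4, 8)            # a hidden layer: batch of 4, 8 units
h_train = dropout(h)                 # ~half the units zeroed, the rest doubled
h_test = dropout(h, training=False)  # identity at test time
```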
There are some tasks that neural networks can't do, like episodic memory and reasoning. And there has been recent work exploring these tasks. We are starting to see neural networks with external memory systems attached to them, or ways of learning to store memories. Neuroscientists have claimed to have made accurate models of the hippocampus, and DeepMind said that was their next step.
Reasoning is more complicated, and no one knows exactly what is meant by it. But we are starting to see RNNs that can learn to do more complicated "thinking" tasks: attention models, Neural Turing Machines, and RNNs that are taught to model programming languages and code.
On the other hand there are reasons to be optimistic. Human brains are built from networks of neurons and the artificial neural networks are starting to have quite similar characteristics to components of the brain - things like image recognition (https://news.ycombinator.com/item?id=9584325) and Deep Mind playing Atari (http://www.wired.co.uk/news/archive/2015-02/25/google-deepmi...)
The next step may be to wire the things together in a similar structure to the human brain, which is kind of what Deep Mind are working on - they are trying to do the hippocampus at the moment. (https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...)
Also we are approaching the point where reasonably priced hardware can match the brain, roughly the 2020s (http://www.transhumanist.com/volume1/moravec.htm)
It'll be interesting to see how it goes.
Many people got disillusioned with classical AI because mathematical logic (inference engines) would not scale to 'strong' AI.
Hofstadter says that most concepts handled by humans do not fit one-to-one into clear-cut ontologies. Instead, each higher-order concept is created by finding analogies between objects or simpler concepts, and by grouping these similar concepts into more complex entities.
I have a summary of the book here http://mosermichael.github.io/cstuff/all/blogg/2013/10/15/po...
"Deep learning has produced amazing discriminative models, generative models and feature extractors, but common to all of these is the use of a very large training dataset. Its place in the world is as a powerful tool for general-purpose pattern recognition... Very possibly it is the best tool for working in this paradigm. This is a very good fit for one particular class of problems that the brain solves: finding good representations to describe the constant and enormous flood of sensory data it receives."
I had to make a copy to my Google account to keep the slides.
Overview (5 slides)
General Concepts (9 slides)
K nearest Neighbor (6 slides)
Decision trees (6 slides)
K means (4 slides)
Gradient descent (2 slides)
Linear regression (9 slides)
Perceptron (6 slides)
Principal component analysis (6 slides)
Support vector machine (6 slides)
Bias and variance (4 slides)
Neural networks (6 slides)
Deep learning (15 slides)
I especially like the nonlinear SVM example on slides 57 and 58. It provides a visual of projecting data into a higher dimensional space.
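To see the projection idea in miniature, here is a toy numpy example (my own, not the one from the slides): two concentric rings of points are not linearly separable in 2-D, but adding a third coordinate z = x1^2 + x2^2 lifts them onto a paraboloid where a flat plane separates them perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes: noisy rings of radius ~1 and ~3.
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([np.full(100, 1.0), np.full(100, 3.0)])
radii += rng.normal(0, 0.1, 200)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Third coordinate: squared distance from the origin.
z = (X ** 2).sum(axis=1)          # inner ring -> z ~ 1, outer ring -> z ~ 9

# In the lifted space the plane z = 4 is a perfect linear separator,
# which is the kind of boundary a kernel SVM finds implicitly.
predictions = (z > 4.0).astype(float)
print("accuracy of the plane z = 4:", (predictions == y).mean())
```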
Does anyone have any other resources?
What I do frequently see with GitHub is that they've managed to work their way into almost being beyond reproach. This letter feels like an example of that... almost like GitHub needs someone to stand up for it in light of some meanies picking on it.
It's a good product. We should give credit where credit is due, just don't forget it's a product. A (by all indications), very profitable product that wants to make money off you. That is its goal and purpose in life, and OSS furthers it. For the record, I think this is a good and healthy relationship, but we shouldn't pretend it's some FOSS group or non-profit out struggling to provide us with Git hosting.
Then, a handful of guys took the challenge to build an awesome platform and as a consequence of their hard work, their platform earned its hegemony.
Two things stand out in this "thank you Github" open letter:
1. While the situation has improved tremendously in certain areas, the way to participate in Open Source is still very much fragmented. Most of the major open source projects (like Linux, Mozilla, Apache and nginx, to name a few) still have their own workflows, patches are still circulated in emails, and issues are still reported in a myriad of ways. Despite the big visibility GitHub has among new open source projects, we are very far from not being fragmented.
2. Before 2007 we had, for instance, SourceForge, which back then had also earned its hegemony and, for a series of reasons (one of them being too slow to answer the community's wants and needs), lost its way, its hegemony and its user base.
There is a time for praise and a time for hard work, and, IMO, the "Dear GitHub" open letter is a constructive way to call attention to the perceived problems, while the 'Dear "Dear GitHub"' letter and this gratitude letter are, respectively, dismissive of those concerns and mostly empty praise and adulation.
This to me, itself, is wrong.
The GitHub issue tracker does need to change. While it's great for OSS that projects can get a leg up SOONER, GitHub does introduce its own problems by having some watered-down tooling in some areas.
I'm STILL at odds with how it has shifted the equation from discussing the problem to throwing code at it, which generates extra code review and, often, angry committers when their patches are not immediately merged, are unwanted, or have to be reworked.
GitHub has done some GREAT things because it has built up critical mass, but because it has gotten critical mass and become a de facto standard, it does have some obligation to keep up with demand.
This seems passive aggressive to me.
It's as if they were talking to GitHub, the thankless FOSS maintainer. Quit mirroring, guys. It's a for-profit enterprise that would do well to listen to the concerns of its userbase.
The dilemma is about the sum of the parts.
The serious projects that do care about free software don't use GitHub.
: See Mike Gerwitz's GitHub Does Not Value Software Freedom: https://mikegerwitz.com/about/githubbub
Um... ever hear of SourceForge? Yeah, before 2007 there was another OSS hegemony. It failed to meet its users' needs. It was replaced.
So it goes.
Don't you still have to figure out every project's rules? Being on Github does not impose coding guidelines, testing requirements, documentation requirements, contributor license agreement policies, project management and governance system, code review process, dispute resolution process, and so on.
> Nowadays doing Open Source is infinitely easier thanks to you, GitHub. You've provided the tools and the social conventions to make those days a thing of the past.
Nearly every time over the past 30+ years that I've wanted to fix a bug or add a feature to some open source thing I've been using, and been thwarted, it was never figuring out the workflow, or patch procedure, or issue reporting that did me in, or figuring out the project's rules.
The big problem has usually been one or both of (1) the project has a bazillion files and it is not at all clear from the meager documentation and haphazard directory organization which are for the thing itself and which are for ancillary tools, and (2) it gets build errors that I can't resolve.
Shouldn't we (the OSS community) have an open source, roll-your-own version of something like GitHub? Like, the repo-management equivalent to a phpBB or a Wiki or a Wordpress.
We do have the separate components, though maybe the hard part is gluing them together. But still, it is something that would be worth the time and effort, wouldn't it?
>Before 2007, the way to participate in Open Source was fragmented. Each project had their own workflow, patches circulated in emails, issues were reported in a myriad ways, and if anyone wanted to contribute they had to figure out every project's rules.
And now we have a monoculture. Monoculture is bad, folks.
This letter paints pre-2007 as something bad because everyone used their own infrastructure for their projects, but this is actually a really great thing. It meant that more projects had autonomy over the infrastructure that they rely on. So, rather than needing to beg a for-profit corporation for features that they want, they could actually change the software they used to work for them. Monoculture is more convenient for the masses, but trading freedom for convenience is a bad deal in the long-term.
The web is becoming more centralized every day, to the detriment of all Internet users whether they know it or not, and when SaaS apologists thank GitHub for helping it makes me upset. A federated, free software source code hosting tool could solve the barrier to entry problem without relinquishing control to a company who ultimately does not care about you.
And how about GitHub's ToS? Has anyone read it? Probably not. I didn't when I signed up. Did you know that changes to the ToS can happen any time and without notice? Even if you did read the terms, by agreeing to them, you agree that they can completely change them. Who would reasonably agree to that if it were not buried in legalese? You also surrender your rights to a fair trial by defending and indemnifying GitHub. For further reading, see "Why I don't support or contribute to GitHub repositories"  or read the ToS for yourself.
Now, on a technical note: GitHub encourages bad development practices by hooking people on its web interface. The Pull Request interface is the biggest offender. It encourages unclean commit history because it's scary to rewrite the patch set of a pull request. If you rebase fixup commits, you have to force-push the changes. You cannot even take the safer route of deleting the remote branch and pushing a new one, because GitHub will automatically close the pull request with no way to re-open it. So most people just pile on fixup commits that never get squashed into decent patches.
And that's not all! The Pull Request interface makes it difficult to comment on individual patches, encouraging reviewers to look only at the aggregate diff of all patches. This lowers patch quality: a bunch of terrible patches that look okay squashed together enter the Git repository. When your patch history sucks, it reduces the utility of blaming and bisecting to find issues or otherwise learn about the code.
Reviewing patch sets on a mailing list is, despite being "low tech", a much better experience for me. I'm not forced to use a web interface, I can use the email client of my choosing, and Git already knows how to do an email-based workflow. There's a reason why a huge project like Linux still does patch review via email.
In conclusion, GitHub is a company that receives almost nothing but praise. Most criticism is dismissed because they have a nice UX for a certain group of users (not me). I think GitHub has harmed the free and open source software community ethically, legally, and technically. I no longer use GitHub for hosting my personal projects. I write all of this in the hope that more people will recognize this and work on real replacements for GitHub.
Overall, GitHub is a cultural place where anyone can improve their personal skills, especially in computer programming, thanks to the huge amount of code present on it. I have a romantic vision of GitHub: for instance, people from poor parts of the world can study great code on this site.
Yes, it is a company with investors, and it has probably made some wrong decisions, and if we want we can choose other services. But today, sorry for the repetition, GitHub represents an open and huge cultural hub.
1) Work for one large client and essentially become an employee (consider this: a lot of startups pay good money for remote employees)
2) Work for multiple clients
Focusing on #2 here
Core rule: You want to be paid premium for quality and service.
Avoid marketplaces - it's very hard to compete on quality here.
Niche - the more focused you are on a (profitable) niche, the better you can charge a premium for domain competence.
As thibaut_barrere mentioned - build a brand - I would even go further: create an agency-like brand. At the point I stopped saying "I" and started saying "we", I was able to charge more.
Don't charge by the hour but by the value - most developers charge for their time; you want to charge for the value you provide to the client. Read up on "willingness to pay".
Most important: Deliver as promised and always try to over-deliver in service, quality, etc. E.g., try to understand why the client asks for features, not only what features she/he asks for - you might be able to come up with better solutions or anticipate future requests. Any successful project should usually lead to improved reputation and more projects and clients.
I came into Syracuse knowing nobody and nothing.
I had never done any app making as of January 2015. I had done some wordpress stuff, but just the basics.
And I had (and have) no CS degree.
I now make a living on contract work. I did it by going to local meetups and introducing myself as a freelance web developer. Never mind that I hadn't done freelance web development ever. I kept going to meetups for months and still attend a monthly hacker meetup. I participated in hackathons without really knowing how to program.
But all along the way I met people more experienced than I am, and I picked up two clients. I think one thing I do differently from most is that I charge a high rate (I always quote $150/hr). I am willing to negotiate lower than that, but it's a starting point. I have been paid that in the past for less complicated work, like hiring developers and being a project manager.
What am I saying? Your questions is what sites to use? Just one: meetup.com
Remote OK - https://remoteok.io/
Stack Overflow - https://careers.stackoverflow.com/jobs?allowsremote=True
LiquidTalent - http://www.liquidtalent.com/
Working Not Working - http://workingnotworking.com
Hired - https://hired.com/contract-jobs
Gigster - https://gigster.com/
Mirror - http://mirrorplacement.com/
Metova - http://metova.com/
Mokriya - http://mokriya.com/
HappyFunCorp - http://happyfuncorp.com
Savvy Apps - http://savvyapps.com/
Clevertech - http://www.clevertech.biz/
Workstate - http://www.workstate.com/
AngelList - https://angel.co/jobs
I know you're just asking for sites and not approaches to finding contract work, but getting in with a very promising early stage company through contract-to-hire [that allows remote] is probably the most sustainable way to go.
Doing one contract project after another at an hourly rate just doesn't scale well financially and finding a next decent client can be like pulling teeth.
Sites /can/ work (I know people who make a good living off certain sites), but nothing will beat self-managed marketing in the long run.
Feel free to email me (see profile) if you have specific questions.
In short, to answer your question, I never used any sites to find contract work. I got all my leads through face-to-face interaction with real humans in the real world, and a good deal of it came from word-of-mouth because of exceeding my clients' expectations.
Contracting sites marginalize developers and the type of clients who troll them are typically the kind who will try to squeeze as much work out of developers for as little money as they can. On top of that, developers are generally a pretty introverted crowd, so the number of introverted and talented developers who troll those sites looking for work is far greater than the number of outgoing, personable developers in your local area. Which group do you want to compete against?
These are informal "Can I take you out to coffee?" talks with people in your industry to see what they are working on, what is happening with them, and what is going on in the industry. Every job I have ever gotten came through informal meetings with people I met through my network (whether it's your old job, your friends, parents, relatives, or others).
At the end of every one I ask: "Is there anyone else you think I should talk to?" and "Do you currently have any opportunities at your company for me?". Rinse, repeat. I guarantee that after investing in 30 informational interviews you will find work.
Instead, I browse job boards and when I find an interesting role I contact the company. If they are interested in my background and the fit is right, I sell them on setting up a contract relationship instead of full-time employee. Sometimes it works, other times it doesn't. The important part is being honest that you are looking to work as a contractor, not an employee.
Job boards to consider: AngelList, WeWorkRemotely, etc. If you're looking for a list of job boards, http://nodesk.co has lots, and so does this article by Teleport: http://teleport.org/2015/03/best-sites-for-remote-jobs/
I suspect the secret to contract work success lies in having really good networking skills and a Rolodex of contacts from having worked in a given industry and having a reputation as someone who delivers. If you don't have that then you would probably have better luck finding reasonable work by going to meetups or similar industry events to build a network of professional contacts. The only way I know of to do this online is to become a notable contributor to prominent open source projects and then use that to leverage paid work.
For Germany, Gulp (www.gulp.de) is a very good site where you can actually find clients that are willing to pay a reasonable hourly rate (they even have a rate calculator on their site).
If you're in the UK...
I've been contracting about 3 years now and started it the simple (and probably dumb) way - stick a resume up on jobsite.co.uk, wait for agents to call. Lots will. Be nice to them on the phone but be firm about what rates and locations you're willing to work. You'll get lots of useless ones who haven't even bothered to read it, but no matter, you'll learn to filter them out pretty quickly. Remember the good ones. Rinse, repeat.
I've had two contracts now through reputation, which is quite nice, but getting contracts from previous workmates isn't a panacea. One of them was the most boring thing I've ever done in my life (worse than shelf-stacking in a warehouse) and I quit after three weeks because I was literally unable to complete the work it was so dull. I told the client that I was poor value for money and a recent graduate would be a better choice. The other one was good though!
Also, make sure you're prepared for some time off between contracts, it's pretty much going to happen.
Depending on your living situation and time available, I'd recommend trying to establish your own identity so you don't have to go through a marketplace for contract work. Instead you'll have the contract work come to you, not filtered through a middleman that takes a cut of your work. I would never recommend someone go through Fiverr, Upwork or these other marketplaces unless they were just moonlighting.
I've been consulting over a year (US-based, near NYC) and I've found plenty of very good clients (small and large) through freelancing websites.
A few loose guidelines I've used when applying to gigs:
1) Evaluate if you think the person understands the value of the work, and only reply if you can somewhat-confidently answer "yes."
2) Reply to gigs that say "$5" or some other crazy low number, as long as they seem competent at explaining their project.
3) ALWAYS follow up with your past clients! Ask them for new work regularly.
I don't bid on low-quality jobs, and once I finish a job I offer the client a maintenance contract outside Upwork.
If I'm looking for more cutting edge, interesting work I'll go out and find either a company, industry or project I'm interested in and try and insert myself into it somehow. Usually through meetups, over coffee or in one case just showing up (probably wouldn't recommend that, depends on the people - in my case it was 4.30PM on a Friday and I brought beer).
Usually I'll either do it gratis (if it's non-profit or public domain) or cut my rates if I'm learning on-the-job.
When I started pretty much all of my job offers and contracts came by word of mouth. I only had to kick down doors a few times before I had developed a reputation as a good worker. This involved cold-emailing, calling and meeting people at various industry events.
some side projects I have done:
I've done more complex stuff, but it requires the user to log in.
Edit: I mean on HN, similar to the first-of-the-month feature, not a separate site (I know those are out there, obviously).
update: I post my pitch in the freelancer thread and potential clients contact me, for example https://news.ycombinator.com/item?id=9998249
The gist of it is, as many here are saying: Don't use marketplace sites. Instead show off your knowledge in a way that gets attention of potential customers, then they'll come to you.
I would recommend http://AngJobs.com
disclaimer: I run AngJobs, https://github.com/victorantos/AngJobs
Here's my list of resources that I would be looking at if I needed to start looking for a contract immediately:
- Authentic Jobs: http://www.authenticjobs.com/
- StackOverflow Careers: http://careers.stackoverflow.com/jobs?type=contract&allowsre...
- We Work Remotely: https://weworkremotely.com/jobs/search?term=contract
- AngelList: https://angel.co/jobs
- Github Jobs: https://jobs.github.com/
- Hired: https://hired.com/contract-jobs
- Toptal: https://www.toptal.com/ (I'm a member of Toptal's network)
- Gigster: https://www.trygigster.com/ (haven't used it yet)
- Crew: https://crew.co/ (haven't used it yet)
- Approach companies at Meetups
- Meetups, meetups, meetups
- Pitch on forums
- Work with contract agencies
- Become a subcontractor
It also helps to work on branding yourself, blogging, and integrating into communities (like HN!). Generally, just becoming an authority on a topic and allowing people to get to know you before they work with you helps a lot. Kind of like patio11 has done for himself around here. Then people start coming to you instead of the other way around.
I would also highly recommend looking at DevChat TV's Freelance podcasts for ideas, they're really great: https://devchat.tv/freelancers
I send a LinkedIn message to some of the contacts I'd like to work with, telling them it's been a while and that I'd like to get in touch, and offering to grab a cup of coffee with them this week.
During the meeting, tell them about your freelance status and that you're looking for work.
Let's stay with video games for a bit. What if we look at joy as 'seeing the world change', graded by the degree of indirection from our inputs (the longer it cascades, the more joy it gets). Maybe let it have preference for certain color tones and sounds, because that's also how games give us hints about whether what we do is good or not. Boredom is what sets us on a timer - too many repetitions of the same thing and the AI gets bored. Fear and disgust is something that comes out of evolutionary processes, so it might be best to add a GA in there that couples success with some fear like emotion. Anger, well, maybe wait with that ;-).
Edit: Oh, and for the love of god, please airgap the thing at all times...
Videos games are also essential for AI pedagogy. Creating Pac-Man agents in Stanford's AI class is a great example. Most players can barely get a "strawberry" but to see a trained agent mimicking human expert level play is eye-opening.
Quick reminder: Global Game Jam 2016 starts Jan. 29 and NYU is hosting its annual jam!
Video games are explicitly designed to test and fit within our bounds of conscious control and processing; particularly the retro games, but essentially all games in general have a very limited input control space (a couple keys or joysticks) and usually very rigorously defined action values. Moreover, these were designed by humans with very explicit successes, losses and easily distinguishable outcomes.
None of these descriptions fit the kind of control that an 'intelligent' system needs to handle. Biological systems have no predefined goal values, have very incomplete sensory information, and, most importantly, have control spaces that are absolutely enormous compared to anything considered in a video game. At any point in time the human body has ~40 degrees of freedom it is actively controlling - compared to ~5 in a serious video game.
I do not doubt that pattern recognition and machine learning techniques can be improved through these kind of competitions. But the problem is in conflating better pattern recognition with general intelligence; implying or assuming any sort of cost, value or goal function in the controlling algorithm hides much of our ignorance about our 'intelligent' behavior.
NLP to understand dialogue and the actions that need to be taken based on what NPCs/quests/item descriptions say, strategies for several different enemies with different strengths and weaknesses, exploring the open world in a logical order.
When you think about the difficulties of such a loosely defined problem, it's hard to buy into the real-world fears of AI.
Language is quite complex and can't easily be beaten by hard coded algorithms or simple statistics. You can do some tasks with those things, but others they will fail entirely. The closer you get to passing a true turing test, the harder the problem becomes. It certainly requires human intelligence, and most of our intelligence is deeply rooted in language.
He mentioned games like Skyrim and Civilization as being end goals. But even a human that doesn't speak English wouldn't be able to play those games. Let alone an alien that knew nothing about our world, or even our universe.
On boot, all surrounding data will be taken in, this step would give everything context. All new data coming in would be processed (referenced to original data to determine what is happening and actions to take), then clustered, and then updated to the original data set, dropping data from the original set determined to be irrelevant, and updating the context to give more relevant perspective of the new data coming in. (And loop)
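The loop described above can be made concrete with a toy, runnable sketch. Everything here is hypothetical and my own framing: observations are 2-D vectors, the "context" is a handful of cluster centroids, and "irrelevant" means a centroid that hasn't matched anything recently:

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest(centroids, x):
    # "Referenced to original data": find the closest stored cluster.
    return int(np.argmin([np.linalg.norm(c - x) for c in centroids]))

# "On boot": seed the context from initial surroundings (here, random).
context = [rng.normal(size=2) for _ in range(3)]
last_used = [0, 0, 0]

for step in range(1, 200):                 # "(And loop)"
    observation = rng.normal(size=2)       # new data coming in
    k = nearest(context, observation)      # interpret against the context
    context[k] = 0.9 * context[k] + 0.1 * observation   # cluster & update
    last_used[k] = step
    # Drop data determined to be irrelevant: clusters unused for too long.
    keep = [i for i, t in enumerate(last_used) if step - t < 50]
    context = [context[i] for i in keep]
    last_used = [last_used[i] for i in keep]
```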
"Made up minds:a constructivist approach to artificial intelligence" by Gary Drescher presents a small scale virtual world with a robot embedded in it that figures out the laws of its world by interacting with it, much like what a child does. Need more people thinking like this.
Aside from using them as benchmarks, the way games are capable of simulating a world will probably be key in creating a true AGI. In the comment section of the article, we're already seeing some theories that involve video games not just as tests, but as a primary component of the intelligence architecture. Very exciting times!
At least motorcycle drivers who care are better drivers.
Maybe the article makes some valid scientific points, but I simply cannot go past this unscientific opening claim to a purportedly scientific article. Not just me, no peer-review journal will accept such frivolity. Passing on the article and hoping for better scientific writing in the future!
And here's why they aren't: First-person Shooters.
Why give AI something that's a goal that involves killing things that look like humans or animals for points? That's a recipe for disaster.
Breakout's not much better either. How often do you need to break a wall to smithereens with a ball? Never.
Perhaps this is just anecdotal validation, but I was coincidentally having this conversation the other day with a relatively well-known computer vision researcher, about why the idea of neural nets seemed to flounder for decades and is suddenly the hot topic, with massively improved results.
His answers, summarized, are that:
1- Big data is making possible the kind of training we could never do before.
2- Having big data & big compute has made some training breakthroughs that allowed the depth to increase dramatically. The number of layers was implicitly limited until recently because anything deep couldn't be practically trained.
3- The activation function has very commonly in the past been an S-curve, and some of the newer, better results use a linear function that is clamped at zero on the low end but not clamped on top (see the sketch below).
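That clamped-linear activation is what's now called a rectified linear unit (ReLU). A minimal comparison in numpy, with made-up inputs:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # classic S-curve; saturates at both ends

def relu(x):
    return np.maximum(0.0, x)         # zero below 0, unbounded linear above

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(sigmoid(x))   # squashed into (0, 1); gradients vanish at the extremes
print(relu(x))      # [0. 0. 0. 1. 4.]; gradient stays 1 for all positive x
```

The non-saturating gradient is one common explanation for why ReLUs made much deeper networks trainable.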
All really interesting to me. This is making me want to implement and play with neural nets!
Of course, now the big question: if we have a neural net big enough, and it works, can we simulate a human brain? (Apparently, according to my AI researcher friend, we're not there yet with the foundational building blocks. He mentioned researchers have tried simulating a life-form known to have a small number of neurons, like a thousand, and they can't get it to work yet.)
Thanks for linking to the old NYT article on Frank Rosenblatt's work. One can see how researchers of the time were irked by delirious press releases when the credit-assignment problem for multilayer nets had not been addressed.
(We managed to mostly address the credit-assignment problem for multilayer nets...but the delirious press release problem remains unsolved.)
Incidentally, it's "Seymour Papert", not "Paper" (appears twice).
Contains a collection of papers by NN luminaries including Rumelhart, Hinton, etc. Very highly recommended.
The problem, however, is that a nation's currency is arguably the primary source of its sovereignty. Whoever controls the currency primarily used in a nation controls the nation. So it's safe to say that no nation is going to let bitcoin or any other currency take hold for widespread commercial use in lieu of its sovereign currency - that's a hallmark of a failed state.
In fact, that's why it was such a big deal when regulators declared bitcoin a commodity. The US doesn't really care, because we haven't had commodity money since the 1970s, but if enough people decided that they wanted commodity money, they would arguably have a medium with which to do it that wasn't tied to any state.
So as with every product, unless a massive number of people decide to stop using their home currency and start using bitcoin, then bitcoin will "fail."
Whether it pivots to some kind of credit system or not is largely immaterial because of the potential it had.
It's like if we created Artificial General Intelligence and it decided to just write movie reviews forever.
Those invested financially have very good reasons to believe in solutions that will retain their investment, and dismiss solutions that put them at risk.
Bitcoin's first killer app was drugs. Then Silk Road I and Silk Road II were taken down. That put a dent in the price.
Currently, Bitcoin is a way to get yuan out of China and convert it to dollars or euros, despite China's exchange controls. Most of the mining and the exchange volume are in China. Buying Bitcoin with yuan and selling it outside China is technically prohibited by the People's Bank of China, but they haven't cracked down hard on it. Yet. Mining is also a way to convert yuan to dollars. Miners in China can also qualify for the loans and subsidies the government of China gives businesses.
Bitcoin as a general currency for transactions just isn't happening. The real transaction costs are too high. There's volatility risk. There are exchange costs getting in, and exchange costs getting out. (At the retail level, those are high; Robocoin ATMs have a buy/sell spread of about 15%) There's also the substantial risk that the exchange will fail or steal your money. (This got better in 2015, but until then, more than half of Bitcoin exchanges went bust. It wasn't just Mt. Gox.) Paying 1% - 3% to Visa looks good compared to that, especially since buyers get protection against merchant fraud.
Bitcoin as a speculation looks good some years, and bad in others. It's like a penny stock, except that Bitcoin is zero-sum. There's no intrinsic value, and no fundamentals. It's all greater-fool speculation.
The impressive thing about Bitcoin is that the system is well behaved in the presence of a very high level of criminality. Few, if any, other distributed computer based systems can make that claim. It would be nice if DNS or BGP or SS7 worked that well.
1. a 32 MB block, when filled with simple P2PKH transactions, can hold approximately 167,000 transactions, which, assuming a block is mined every 10 minutes, translates to approximately 270 tps
2. a single machine acting as a full node takes approximately 10 minutes to verify and process a 32 MB block, meaning that a 32 MB block size is near the maximum one could expect to handle with 1 machine acting as a full node
3. a CPU profile of the time spent processing a 32 MB block by a full node is dominated by ECDSA signature verification, meaning that with the current infrastructure and computer hardware, scaling above 300 tps would require a clustered full node where ECDSA signature checking is load balanced across multiple machines.
For context, a meager 300 tps is less than 10% of what VISA does - hence this solves no long-standing problems in Bitcoin, yet it condemns all the nodes to run in compute clusters in datacenters. Naturally, "small blockists", as we're called, point out that this isn't how Bitcoin works today at all. Forcing nodes into compute clusters in remote datacenters is a major, sweeping departure from nodes running on home networks, with consequences both foreseeable and unforeseeable.
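The parent's figures are easy to reproduce. Assuming an average P2PKH transaction of roughly 192 bytes (an assumption chosen to be consistent with the numbers above; real averages vary with the number of inputs and outputs):

```python
BLOCK_SIZE = 32 * 1000 * 1000   # bytes; the 32 MB block being discussed
AVG_TX_SIZE = 192               # bytes; assumed average P2PKH transaction
BLOCK_INTERVAL = 600            # seconds; one block every ~10 minutes

txs_per_block = BLOCK_SIZE // AVG_TX_SIZE
tps = txs_per_block / BLOCK_INTERVAL
print(txs_per_block, round(tps))   # ~166,666 transactions, ~278 tps
```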
2. Nor has anyone presented me a compelling reason to get bitcoin (other than the future). Off the top of my head, if a company offered me an account that would take care of my online transactions in a secure way using Bitcoin, I might be interested. Maybe I'd put a few hundred in it each month and use it for random expenditures, and hopefully get access to some cool micro-services.
But I have zero awareness anything like that exists.
3. If it's to succeed as capital, it will take a daring bank that creates their own bitcoin equivalent... followed by one of the Nordic countries making their own national bitcoin. Then the floodgates will open.
Increasing the block size will increase throughput, but is it the only way? Definitely not. The very recent improvement by Dr. Pieter Wuille to segregate the witness (the cryptography used to validate transactions) from the data (the transactions) increases capacity by around 66%, even if the block's max size is unchanged. This is already in the testing phase. There is a whole bitcoin scaling roadmap and a lot of work going on.
Practically, the death scenario painted by XT hasn't materialised, and transactions are still going through normally.
The bitcoin developer community comprises very capable hackers, many of whom not only hold PhDs but have also invented new cryptographic techniques and write robust, security-conscious code.
Is it really true that the people who object to increasing the block size see Bitcoin turning into a reserve "currency" to hold wealth, instead of a liquid currency used to make real-time transactions?
Is that really the debate? Because I didn't get that from Mike Hearn's essay at all...
If you choose to remain with the fixed block size, then you're betting the system will reach another equilibrium (which may be total collapse). This equilibrium will involve a natural evolution in the pricing of transactions.
Either way, at some other point in the future another source of discontinuity will be the circulation limit. Again, which may or may not kill the system.
Bitcoin is simply evolving, as it necessarily needs to.
It's definitely interesting to watch as an outsider.
I don't see any reason to believe that his position on Bitcoin should be read in any other terms than his investment in Bitcoin companies is not appreciably worse or more risky than other companies in his fund. That his personal asset portfolio holds less Bitcoin than wine illustrates the distinction he's making.
His message is for people invested in Bitcoin companies. It is don't panic, these companies were always risky.
Can 100% understand why some people may hate that future for Bitcoin, but I think the parallels are serious and it's probably what will happen.
No, among the developers actually working on Bitcoin that is not what the debate is about at all.
Bitcoin is a decentralized ledger, and indeed it can be argued that this is the only property about Bitcoin which is interesting/useful. Why? Because all properties we care about (availability, uncensorability, unseizability, etc.) derive from decentralization. And at the end of the day we can do everything Bitcoin does faster, better, and cheaper on some alternative consensus system (see: Stellar, Open-Transactions, Liquid) that does not have this decentralization property. Decentralization is expensive. It requires a dynamic membership, multi-party block signing algorithm, which at the moment means proof of work. And proof of work costs hundreds of millions of dollars per year to maintain, and throttles the available bandwidth due to the adversarial assumption and the existence of selfish mining.
The question is not whether Bitcoin should be a store of value or a medium of exchange. That implies we have some choice in the matter. The question is what level of on-chain utility does Bitcoin actually support under untrusted, adversarial conditions, without losing all properties derived from decentralization. This is an empirical question. The available bandwidth is something that can be determined from the performance of the code in the real world extrapolated to various adversarial simulations.
We had two Scaling Bitcoin workshops last year that gave us a data-driven answer: 3-4MB per block, tops. There are potentially ways that this number can be improved (see: weak blocks), and those are being worked on but are still some time from showing results. There are also some assumptions underlying this number, e.g. that we change the validation cost metric, which none of the existing proposals do in a smart way. But the scientific process is telling us right now that with the tools available to us we can increase the worst-case block size to 3-4MB with a better metric without the decentralization story becoming unacceptably worse off.
That is the plan of Bitcoin Core. The deployment of segregated witness will allow up to 2MB blocks under typical conditions, and 3-4MB under worst-case adversarial conditions. It will exhaust the available capacity for growth in the Bitcoin network at this time. Meanwhile, work progresses on IBLT, weak blocks, Bitcoin-NG, fraud proofs and probabilistic validation, and other related technologies that might provide an answer for the next increase a year or two later. I'm hopeful we may even be able to get an order of magnitude improvement from that one, but we'll see.
No one I'm aware of is pushing for smaller blocks because Bitcoin should be a store of value and a settlement layer. If I had magic pixie dust I'd want 1GB blocks and everything on-chain too. But we live in the real world and are stuck in a situation where Bitcoin loses all of its unique properties if we scale much further than where we are at now. And so we must ask the question: what will Bitcoin become, since it can't scale on-chain? How can we live with that outcome? The idea of a settlement layer and off-chain but trustless payment networks like Lightning naturally arise from that thinking. The Lightning Network is a way that we can have our cake and eat it too: Bitcoin remains small and decentralized, but everyone still has access to bitcoin payments. Lightning can potentially scale to global usage with a small-block chain as the settlement layer.
This should continue to work until block reward becomes negligible in 2035.
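For reference on that 2035 figure: the block subsidy halves every 210,000 blocks, roughly every four years. A quick sketch (the calendar years are rough estimates; actual halving dates drift):

```python
# Bitcoin's block subsidy halves every 210,000 blocks (~4 years from 2009).
for halving in range(8):
    year = 2009 + 4 * halving        # rough calendar estimate
    subsidy = 50.0 / 2 ** halving    # BTC per newly mined block
    print(year, subsidy)
# By the early-to-mid 2030s the subsidy is under 1 BTC per block, so
# transaction fees have to carry miner incentives from then on.
```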
because some blockchain investments might be in danger