Apologies if you find this to be in poor taste, but GCS directly supports the S3 XML API (including v4), and has easy-to-use multi-regional support at a fraction of what it would cost on AWS. I point my NAS box at home directly at GCS instead of S3 (sadly having to modify its little PHP client code to point it at storage.googleapis.com), and it works like a charm. Resumable uploads work differently between the two services, but honestly, since we let you do up to 5TB per object, I haven't needed to bother yet.
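If you'd rather not patch client code at all, s3cmd (which shows up elsewhere in this thread) can be repointed with a couple of config lines. A sketch, assuming you've generated HMAC interoperability keys in the GCS console and dropped them into access_key/secret_key:

```ini
; ~/.s3cfg, pointing s3cmd at GCS's S3-compatible XML API
host_base = storage.googleapis.com
host_bucket = %(bucket)s.storage.googleapis.com
access_key = GOOG...   ; HMAC interoperability key, not an AWS key
secret_key = ...       ; matching HMAC secret
```

After that, `s3cmd ls` and friends talk to GCS instead of S3.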
Again, Disclosure: I work on Google Cloud (and we've had our own outages!).
The timeline, as observed by Tarsnap:
First InternalError response from S3: 17:37:29
Last successful request: 17:37:32
S3 switches from 100% InternalError responses to 503 responses: 17:37:56
S3 switches from 503 responses back to InternalError responses: 20:34:36
First successful request: 20:35:50
Most GET requests succeeding: ~21:03
Most PUT requests succeeding: ~21:52
For legacy customers, it's hard to move regions, but in general, if you have the chance to choose a region other than us-east-1, do that. I had the chance to transition to us-west-2 about 18 months ago and in that time, there have been at least three us-east-1 outages that haven't affected me, counting today's S3 outage.
EDIT: ha, joke's on me. I'm starting to see S3 failures as they affect our CDN. Lovely :/
A pyrrhic victory... ;)
 - http://status.hrpartner.io
EDIT UPDATE: Well, I spoke too soon - even our status page is down now, but not sure if that is linked to the AWS issues, or simply the HN "hug of death" from this post! :)
EDIT UPDATE 2: Aaaaand, back up again. I think it just got a little hammered from HN traffic.
The dashboard not changing color is related to S3 issue. See the banner at the top of the dashboard for updates.
(Yes it sucks and yes we're working on fixing it. We hate slow software too!)
I've been fuzzing S3 parameters for the last couple of hours...
And now it's down.
"We are investigating increased error rates for Amazon S3" translates to "We are trying to figure out why our mission critical system for half the internet is completely down for most (including some of our biggest) customers."
Somewhere a sysadmin is having to explain to a mildly technical manager that AWS services are down and affecting business critical services. That manager will be chewing out the tech because the status site shows everything is green. Dishonest metrics are worse than bad metrics for this exact reason.
Any sysadmin who wasn't born yesterday knows that service metrics are gamed relentlessly by providers. Bluntly, there aren't many of us, and we talk. Message to all providers: sysadmins losing confidence in your outage reporting has a larger impact than you think, because we will be the ones called on the carpet to explain why <services> are down when <provider> is lying about being up.
CloudFront is currently experiencing problems with requesting objects from Amazon S3.
edit: Since posting my comment they added a banner of
"Increased Error Rates
We are investigating increased error rates for Amazon S3 requests in the US-EAST-1 Region."
However S3 still shows green and "Service is operating normally"
* Slack file sharing no longer works, hangs forever (no way to hide the permanently rolling progress bar except quitting)
* Github.com file uploads (e.g. dropping files into a Github issue) don't work.
* Imgur.com is completely down.
* Docker Hub seems to be unavailable. Can't pull/push images.
But if you go to your personal health dashboard (https://phd.aws.amazon.com/phd/home#/dashboard/open-issues) they report an S3 operational issue event there.
Edit: Mine is reporting region us-east-1
Edit 2: And now the event disappeared from my personal health dashboard too. But we are still experiencing issues. WTH.
they just now put up a box at the top saying "We are investigating increased error rates for Amazon S3 requests in the US-EAST-1 Region."
increased error rates? really?
Amazon, everything is on fire. you are not fooling anyone
edit: in the future, please subscribe to @MyFootballNow for timely AWS service status updates https://pbs.twimg.com/media/C5xdm9_WMAAY7y_.jpg:large
Through some dumb luck (and desire to procrastinate a bit), I opened HN and, subsequently, the AWS status page and actually read the US-EAST-1 notification.
HN saves the day.
"Increased API Error Rates - 9:52 AM PST We are investigating increased error rates in the US-EAST-1""S3 operational issue - us-east-1"
What else should I add?
Many AWS SDK libs don't remove the trailing \n for you.
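A hypothetical illustration of that gotcha (the key value below is AWS's documented example key, not a real credential): a credentials file almost always ends in a newline, and if you pass the line straight through, the \n ends up in the signature.

```python
# A credentials file almost always ends with a newline, and readline()
# hands it to you verbatim (key value is AWS's documented example key).
raw = "AKIAIOSFODNN7EXAMPLE\n"

# If the SDK signs with the string as-is, the \n corrupts the signature
# and you get baffling auth failures. Strip it yourself:
access_key = raw.strip()

print(repr(raw), "->", repr(access_key))
```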
(I hope it wasn't me who broke it lol)
I'm curious how much $ this will lose today for the economy. :)
(I think the AM means PM)
"Believe" is not inspiring.
The only services my team uses directly are EC2 and RDS, and I'm thinking of moving RDS over to EC2 instances.
We are entirely portable. We can move my entire team's infrastructure to a different cloud host really quickly. Our only dependency is a Debian box.
I flipped the switch today and cloned our prod environment, including VPN and security rules, over to a commodity hosting provider.
Change the DNS entry for the services, and we were good to go. We didn't need to do anything because everyone was freaking out about everything else being down. But our internal services were close to unaffected.
At least for my team.
Obviously, we aren't Trello or some of the other big people affected. And we don't have the same needs they do. But setting up the DevOps stuff for my team in the way that I think was correct to begin with (no dependencies other than a Debian box) really shined today. Having a clear and correct deployment strategy on any available hardware platform really worked for us.
Or at least it would have if people weren't so upset about all our other external services being down that they paid no attention to internal services.
Lock-in is bad, mmkay?
If your company is the right size, and it makes sense, do the extra work. It's not that hard to write agnostic scripts that deploy your software, create your database, and build your data from a backup. This can be a big deal when some providers are flipping out.
All-your-junk-in-one-place is really overrated, in my opinion. Be able to rebuild your code and your data at any given point in time. If you don't have that, I don't really know what you have.
You would have to host your own software which can also fail, but then at least you could do something about it. For example, you could avoid changing things during critical times of your own business (e.g. a tradeshow), which is something no standard provider could do. You could also dial down consistency for the sake of availability, e.g. keep a lot of copies around even if some of them are often stale - more often than not this would work well enough for images.
Well good thing I have my backups on [some service that happens to also use S3 as a backend].
AMZN stock down $3.45 (0.41%).
"500 The server encountered an error processing your request." message
It appears to be impacting gotomeeting, I get this error when trying to start a 12pm meeting here:
CloudFront is currently experiencing problems with requesting objects from Amazon S3.
Edit: ironically, my missed 12pm meeting was an Azure training session.
As someone who's really only a yellow belt (assuming you're all black belts!), just so I understand ('cos I'm cacking myself!) ...
I'm seeing the same issue. Does this mean there's a problem with Amazon? I can't access either of my S3 accounts even if I change the region, and I'm concerned it may be something I've done wrong, and deleted the whole lot. It was working yesterday!!!
Would be massively grateful for a heads up. Thanks in advance.
From http://status.aws.amazon.com/ Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue.
[edit- looks like they do have a pretty heavy reliance on S3, per https://github.com/WhisperSystems/Signal-Server/blob/master/... and various other sources.]
Hearing reports of EBS down as well.
Increased Error Rates Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue.
There is something to be said for not being located in the region where everything gets launched first and where most of the customers are [imo you get all the benefits of the product, processes, and people, but with less risk].
Good luck to everyone impacted by this...crappy day.
S3 promises four nines of availability (and 11 nines of durability), so today we burned through about 3-4 years' worth of the downtime budget in one fell swoop. Oops.
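A back-of-envelope sketch of that figure, using the Tarsnap timestamps quoted earlier in the thread and taking "most GETs succeeding" (~21:03) as the recovery point, which is one defensible choice among several:

```python
# Back-of-envelope check of the "3-4 years of downtime budget" claim,
# using the Tarsnap timestamps quoted earlier (all times same day).
from datetime import datetime

last_ok = datetime(2017, 2, 28, 17, 37, 32)   # last successful request
gets_ok = datetime(2017, 2, 28, 21, 3, 0)     # ~21:03, most GETs succeeding

outage_min = (gets_ok - last_ok).total_seconds() / 60

# Four nines of availability leaves 0.01% of the year as downtime budget.
budget_min_per_year = 365.25 * 24 * 60 * 0.0001   # ~52.6 minutes

print(f"outage ~{outage_min:.0f} min; "
      f"~{outage_min / budget_min_per_year:.1f} years of budget")
```

Counting to full PUT recovery (~21:52) instead pushes it closer to five years of budget.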
Is there a part of this hosted on S3? I cannot open Atom anymore; it keeps crashing on the check-for-updates screen...
When I go to my orders I get "There's a problem displaying some of your orders right now. If you don't see the order you're looking for, try refreshing this page, or click "View order details" for that order."
It seems that Amazon is eating its own dog food.
Amazon Elastic Compute Cloud (N. Virginia) - Increased Error Rates
11:38 AM PST We can confirm increased error rates for the EC2 and EBS APIs and failures for launches of new EC2 instances in the US-EAST-1 Region. We are also experiencing degraded performance of some EBS Volumes in the Region.
Amazon Elastic Load Balancing (N. Virginia) - Increased Error Rates
Amazon Relational Database Service (N. Virginia) - Increased Error Rates
Amazon Simple Storage Service (US Standard) - Increased Error Rates
Auto Scaling (N. Virginia) - Increased Error Rates
AWS Lambda (N. Virginia) - Increased Error Rates
In the meantime, EC2, ELB, RDS, Lambda, and autoscaling have all been confirmed to be experiencing issues.
In the last couple of minutes that forum post has gone from not existing to 175 views and 9 posts.
"Update at 11:35 AM PST: We have now repaired the ability to update the service health dashboard. The service updates are below. We continue to experience high error rates with S3 in US-EAST-1, which is impacting various AWS services. We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue."
For S3, we believe we understand root cause and are working hard at repairing. Future updates across all services will be on dashboard.
As part of the release they wanted to make sure everybody gets a chance to see "red" metrics.
Technology leads to technology (and wealth) monopolies, in other words: more centralization. Which has always been bad.
Just like with Cloudflare leaking highly sensitive data all over the Internet, a couple of days ago.
It shows up in the event log now too.
Amazon Web Services (@awscloud), 8 minutes ago: "The dashboard not changing color is related to S3 issue. See the banner at the top of the dashboard for updates."
At least now we can see all the network failures in full RGB.
I'd rather my app load but appear broken so I can show my own status rather than just shutting down every single app...
Increased API Error Rates
09:52 AM PST We are investigating increased error rates in the US-EAST-1 Region.
Event data:
Event: S3 operational issue
Status: Open
Region/AZ: us-east-1
Start time: February 28, 2017 at 6:51:57 PM UTC+1
End time: -
Event category: Issue
$ s3cmd ls
WARNING: Retrying failed request: / ([Errno 60] Operation timed out)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: / ([Errno 60] Operation timed out)
WARNING: Waiting 6 sec...
Half the internet is down; the data center in Virginia, the one with the cloud, is apparently totally dead. Enjoy the cloud bullshit :)
It seems their status page is hosted ... as an S3 static website.
"The dashboard not changing color is related to S3 issue. See the banner at the top of the dashboard for updates."
Well that explains all the green checkmarks /s
Upon the fields of barley
You'll forget the sun in his jealous sky
As we walk in fields of green
This is bullshit if you're using an S3 origin in your distribution.
Oh wait. The site sits on S3. Never mind.
2. People push updates as fast as possible to fix security
3. No tests, so everything blows up
Slack image uploads are hanging.
Is it related to S3??
Interesting tweet from last month.
Seems cloud computing still has a lot to learn.
edit: four nines allows only 52.57 minutes of downtime over a whole year
slack file services down too
They are consistent for me.
After two hours, they have finally updated their dashboard.
Sia is immune to situations like this because data is stored redundantly across dozens of servers around the world, all running different, unique configurations. Furthermore, there's no single central point of control on the Sia network.
Sia is still under heavy development, but its future feature set and specifications should be able to fully replace the S3 service (including CDN capabilities).
I see this statement all the time, but it doesn't make sense. Look at any programming language out there: they all have a growing number of community members asking about and showing interest in targeting WebAssembly with their language of choice. It's not just C/C++. Go, Rust, Ruby, Python, Crystal, Nim, D, and many more. Now you might get the reaction "meh, why would anyone write web apps with Rust?", but that's an irrelevant question. Companies are going to see this as an opportunity to save resources and become more efficient, especially since wasm has so much better performance than JITed JS, and going isomorphic becomes a real possibility (back-end and front-end written in Ruby, for example, derived from the exact same codebase and sharing code).
Now I'm not saying "WebAssembly will take over JS!", what I'm saying is that it perhaps, possibly, maybe will. It will depend on these languages and how they add support for it, what abstractions and integrations they provide with their current ecosystems. And of course, how WebAssembly will improve over the coming years.
However, a weird, highly-annotated strict subset of JS is not the ideal representation of what is basically portable assembly language. WebAssembly's big strength over asm.js is that it has smaller executables, which can be rapidly decoded and verified in binary IR form rather than having to shove megabytes of ungzipped bracket-fest through a JS parser.
I support anything that improves performance and efficiency. But the best of both worlds is always great. I'm wondering if it would be possible to implement reference counting (and maybe automatic reference counting) similar to Objective-C, and if so, would that simply be a matter of the particular language and WebAssembly transpiler you're using supporting it? And are there disadvantages to reference counting that make it a bad idea? I enjoyed using it doing earlier iPhone programming.
Furthermore, JS has view source. WebAssembly has a text format, but it's really assembly. Hopefully there will be source maps.
From what I had heard the plan was not to do that until the final standard was settled on, but I wasn't able to find any corresponding announcement.
Does this mean instead of a normal "native app" I'm going to start getting C applications compiled with wasm and distributed in electron? What does this possibly gain the end user?
"The Rust Evangelism Strikeforce" is particularly brilliant.
"the Rust Evangelism Strikeforce stages a sortie, but meets resistance."
It would be so meta if you summarized this post.
It reminds me a bit of genetic algorithms. GA is the 'last resort' when you truly know nothing about how to model your problem.
What is the sweet spot for RL?
Make sure to check out the paper on arxiv as well.
It's amazing that it all boils down to 1s and 0s and some boolean logic.
cat response.html | nc -lp 80
2. allow the apps to be used without a login - with the default view showing 'what is on now'. almost every member of my family has attempted to use twitter at some point and just been confused.
3. reformat all the explore pages into ordinary twitter streams
4. acquire nuzzel. their view of 'whats on now' is better than twitter's view
5. drop the video passion-project nonsense. you don't need to own content to use twitter alongside it. strike deals with the content providers instead where tweets are shown alongside (this is already being done) and become a partner to content owners and distributors rather than a competitor
6. improve the core product for users. group messaging, longer tweets, only show replies from people who are authenticated or two degrees away from you by default, etc. etc. (and pro accounts, if you wish)
7. let people pay to get a checkmark, and then let users pay to flair tweets they like
8. better tools for businesses who provide support on twitter. let them pay to use it as a platform and properly authenticate their customers on twitter
9. ditto above but for marketing
It's a greenfield space no one else is really jumping on yet. Focus may have turned to on-demand TV, but people still want to watch sports live, and Twitter has already acquired some of those deals as the sport franchises get more comfortable with online distribution. Trump's tweets, the presidential debates broadcast on Twitter, and the fact that people turn to Twitter during breaking news make it a logical extension to move into news, and possibly finance too.
Twitter's modern-day utility seems very low outside of news/sports/politics and the average joe has moved their engagement to more visual platforms like Instagram and Snapchat where it is much easier to create and consume more personal content and updates.
Twitter would also be able to focus their monetization and advertising efforts around a much tighter content and audience niche. Plus consumers are used to paying for some of this premium content, making monetization of a freemium model even easier.
I would create a system where subscription to News on Twitter helps to automate payment for individual articles.
1. The lede or quote gets pulled into the tweet.
2. http://t.co becomes a payment-debiting gateway (402 Payment Required).
Almost everybody would benefit from this arrangement:
- Users would no longer need to buy multiple newspaper subscriptions.
- Journalists would be better positioned to ask for revenue share.
- Publishers could gain a larger paying market without needing to coax users through the account-creation and subscription-signup hoops.
Twitter's payroll (to say nothing of its stock-based compensation expense) is bloated. Slashing staff isn't a popular play. This is a textbook private equity deal.
Twitter's habit of ringing in the year with $500MM losses could be single-handedly cured with a two-thirds staffing reduction (staffing costs a lot in payroll plus $800MM in stock-based compensation expense). How much of Twitter's $2bn in revenue would evaporate post-cuts? Over half? That still leaves $750MM of pre-tax income before R&D ($800MM in FYE 2015). Cut that R&D budget in half, say you lose a further 25% of revenues, and you still have $160MM before taxes, yielding $100MM of net income. That's worth $1bn to $2.5bn.
If you can grow that to $500MM over 4 or 5 years, you could sell it for ~20x. Discount back at 10% or 20% and you have an optimistic valuation of $4 to $7bn.
Twitter's trading at just under $12bn. I suppose I'd bid $3.50 per share and be willing to entertain someone talking just under $10 a share.
- Have more options for blocking, including "block this person and everyone who follows them or followed them within last N days"
- Fix trending topic spam. Seriously, how is this so bad? Free advice: for every trending topic beyond the first that a single tweet mentions, the probability that it's spam asymptotically approaches 1.
- Allow an unambiguous, never "played with", chronological timeline. Have a separate view that's your ML playground. The "In case you missed it" and "tweets you might like" features are good but I don't want them randomly appearing in my timeline.
- Allow alternate clients, even if you have to charge a fee.
- Similarly, create a separate free developer-focused API but clearly identify all tweets posted via that as "bot" and allow people to never see tweets posted by a bot, or tweets posted by a bot @ them. Tweets posted from the "alternative client" paid API would not be subject to this marking.
- Identify "sleeper cell" bots -- accounts inactive for a long time that suddenly become active, usually around a single topic, concurrent with many similar bots, and aggressively ban them.
- Do more and better things with Lists. Don't just show me 3 people to follow (usually clearly just based on the last person I looked at). Show me algorithmically curated suggested lists, popular lists, allow me to sort those by # of members, easily find lists that user X belongs to, etc., mark lists as low quality/harassment vehicles. Surface good content shared by my interest lists somewhere other than the timeline.
- My personal #1: give me the likestream of the people I follow. This is easily more interesting than their actual tweets, at least to me. Something like a quarter of my usage these days is visiting individual accounts "Likes" pages. At least use this data in the aforementioned algorithmic curation of Lists/suggested follows.
1) Trolls love twitter. The legion of racist eggs sowing destruction for no other reason than their own nihilistic enjoyment is an existential threat to the business and must be culled. The company and the trolls cannot live together in peace. One or the other will die. It feels like twitter hasn't figured out it's them or you. There can be no 1st amendment compromise here. These guys are ruining you for fun. They gotta go.
2) Public figures. It's a good platform for them. Cull the trolls and they'll stay, bringing an audience of
3) Regular people, who need a nice feedback loop of people interacting with their tiny little voices. Twitter is pretty shitty at this right now-- if you don't have an audience, you're shouting into the void and are eventually going to figure out you're wasting your time and quit. This is shitty for engagement and it's sinking twitter. Facebook figured this out already. Just copy them.
4) The last group is "brands" and for-profit companies who are your actual customers, but who would like to free ride on the platform, soaking up the attention of the regular people for free. If they want access, they gotta pay. No free riding for non-people. Facebook also figured this one out. If you're not a human, and not a public figure, and you want the attention of humans, pay up. Twitter is also slowly figuring this out.
There's a virtuous cycle of engagement here, and Twitter is slowly getting it straight, but they gotta cull a lot of trolls, spammers, and free riders, and that's going to hurt their monthly usage numbers. The management of that haircut is probably over my pay grade, but it seems like they're slowly getting it together with the algorithmic timeline. Had to be done. Livelock is a real thing for people who don't tweet professionally.
I want twitter to be a feed of thoughts an opinions from people I respect, or important updates from companies I'm interested in.
I see a secondary value from twitter by people contributing to a conversation around an event, be that a sports game, a site outage, a traffic jam or an unfolding natural disaster.
Filtering out / systemically discouraging a lot of the countless low-value, self-promotional posts, alongside a better hashtag (channel) view, would be a great start.
Twitter is really an unpleasant site to use for following discussions of any sort. When I click a thread-view for a post, I want to see a clear tree-view of all of the posts and replies like any other sane website, not the current flat-layout bullshit wherein you have no clue what the chronology of anything is, or who is responding to what.
There are a lot of good ideas in this thread for how twitter can refocus and monetize itself, but I think before all that you need to make it a site that more people enjoy using beyond its original use case of "waiting at the airport -- hmu".
"Imagine a Twitter app that, instead of a generic Moment that is little more than Twitters version of a thousand re-blogs, let you replay your Twitter stream from any particular moment in time. Miss the Oscars gaffe? Not only can you watch the video, you can read the reactions as they happen, from the people you actually care enough to follow. Or maybe see the reactions through someone elses eyes: choose any other user on Twitter, and see what they saw as the gaffe happened.
What is so powerful about this seemingly simple feature is that it would commoditize live in a way that is only possibly digitally, and that would uniquely benefit the company: now the experience of live (except for the shock value) would be available at any time, from any perspective, and only on Twitter. That such a feature does not exist indeed, that the companys stated goal is to become more like old media, instead of uniquely leveraging digital is as good an explanation for why the company has foundered as any."
2. Experiment and find the right point between monetizing users and those that get the most value out of Twitter. Right now users' eyeballs are being bled dry, and getting their experience ruined with tons of ads, and timeline shuffling. It feels like those with tons of followers are getting a free ride at the expense of everyone else.
3. Introduce meaningful timeline features such as:
3a. Ability to follow #hashtags/topics instead of just people and companies. Curated "Moments" are a weak substitute.
3b. Follow geographical areas of interest (e.g. top tweets in Oakland, SOMA, etc.)
3c. Ability to explore Twitter geographically. Again, I feel this is huge and untapped. Heard something crazy happen over your neighborhood? Pull up a map and explore what people are saying around there.
4. Actually do something about trolls (Perhaps a reputation system?)
5. Clamp down on bots. Why is it even possible to follow 300k or a few million people?
6. Slim down the workforce, by a lot, unfortunately. I don't think a sustainable Twitter can ever be a large as it is today.
7. Bigger focus on live TV + discussion
8. Fix search: it's awful and nearly useless unless you put a ton of effort into "advanced search". Top results are often just the same retweets and news articles over and over again.
I could keep going...
I mostly just see replies to other conversations and I don't understand the context. Scrolling through the timeline I can't parse structure, it just seems chaotic.
Barely anyone I know uses twitter. It just seems to be a way to follow celebrities and politicians, I don't really care what they have to say.
I'm probably missing something here.
2. Reduce salaries and lay some people off. Having seen how many people work for Twitter and how big their office is on Market street, some people need to go. Realise this wouldn't be popular, but Twitter is spending way too much cash.
3. Reduce the size of the ridiculous buffet they have for lunches, make it a fixed menu with 3 or 4 meals to prevent food waste. Get rid of the free alcohol and soft drink they seem to offer.
4. Actually, embrace developers, make the API limits more generous and allow developers to build cool things like the early days of Twitter.
5. Raise the character limit (even make this a premium feature, double it to 280 characters).
6. Get rid of Jack as CEO, it's not working. Twitter is losing money, they're not innovating and they keep focusing on things like video which most don't care for.
7. Focus on the core product and get rid of the Google-like dream products.
The way I see it, Twitter isn't a complicated idea. It's somewhat predictably sized text strings being shown in a feed. Twitter is the kind of app you clone when you're learning a framework like Ruby on Rails; it's not a complicated idea from a technical standpoint. There is no reason to be spending $2 billion per year.
This solves the problem of timeline being unreadable once you subscribe to enough people. Ain't nobody got time to read all that crap. Once everyone is rate-limited, everyone can easily digest their timeline. Without length limit, tweets become more thoughtful.
4. Fix the UI. Make it easy to view replies. Make it easy to view embedded images. Make it lean and fast. That would give Twitter advantage over similarly bloated services.
5. Anti-trolling measures. This one is really obvious! There should be no indication that you're blocked by another person, they just don't see you anymore. If the blocked person doesn't know they're blocked, they don't get the satisfaction of being blocked, and they don't know when they need to create another account to annoy you. This should be the basic rule when you implement a blocking feature.
6. Open up API. This one is obvious.
* Let Evan return as CEO (merge with Medium)
... this will restore Twitter management to the situation around 2010, then ...
* Reform or cancel the Trust & Safety council
* Restore open API access and app ecosystem
* Remove site-wide censorship tools, add self-censorship tools (a la Gab)
* Reverse the timeline changes
* Stop pandering to far left ideologues
Something like that?
Assuming 5% YoY growth in revenue, which is about 2% growth in users combined with a 2% better yield, both of which are eminently doable, the current $2.5B revenue grows to about $3B.
According to their 2016 financial statement, http://files.shareholder.com/downloads/AMDA-2F526X/398748660..., Twitter spent $2.668B in 2016. So that means Twitter needs to cut costs by 20% by 2020 to hit my goal of $1bn profit.
Twitter spent roughly $800 million on each of their three big areas, which they list as "Cost of revenue", "Research and development", and "Sales and marketing". If you can shave 40% off each of "Research and development" and "Sales and marketing", costs hit $2 billion give or take, and the goal is achieved.
None of that is silicon valley swing for the moon sexy, and it seems pretty unremarkable in a world of hype and excess. But $3B in revenue and $2bn in costs seems achievable by 2020.
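Those numbers roughly check out. A quick sketch using only the figures quoted in the comment (all amounts in $bn):

```python
# Quick check of the back-of-envelope above (all amounts in $bn).
revenue_2016 = 2.5
revenue_2020 = revenue_2016 * 1.05 ** 4    # 5% YoY growth, 2016 -> 2020

costs_2016 = 2.668                         # per the 2016 financial statement
savings = 2 * (0.8 * 0.40)                 # 40% off each of R&D and S&M (~$0.8bn each)
costs_2020 = costs_2016 - savings

profit_2020 = revenue_2020 - costs_2020
print(f"revenue {revenue_2020:.2f}, costs {costs_2020:.2f}, profit {profit_2020:.2f}")
```

Revenue lands just over $3bn, costs just over $2bn, so profit comes in right around the $1bn target.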
Right now you can turn on notifications for a user's tweets, but that gives you push notifications for all of their tweets which is super annoying. Also, 99% of users don't know that exists.
Their recent move to make trending topics and search more visible in the iPhone app is a step in the right direction, but they're a long way off.
FOMO and live is how they're different from Facebook. I can always go back to Facebook at any time and they'll show me what I missed and I can still engage with it. With Twitter, the discussion has come and gone and I'm left out if I don't know it's happening.
You need limitless thumb scrolling energy to find tweets from friends.
I think it should just die. I never had a Twitter account and never thought that I'm missing something.
The only utility that twitter was providing was already solved by RSS feeds.
Charge for reach. Ask accounts with more than 50K followers to either pay for all followers to receive tweets, or limit distribution to 50K followers, randomly chosen.
It's the accounts with many followers that get the most benefit from the platform.
And, a twitter crack could, in present circumstances, cause global political instability. The accounts with > 50K followers, if compromised, are the accounts that could cause this sort of problem. Why shouldn't the users of those accounts shoulder at least some of the cost of securing the service and making it fast?
Another possibility: the Bloomberg Terminal biz model. Charge consumers of Twitter for timeliness, and delay messages to unpaid consumers and general feeds. Allow originators of messages to purchase timeliness for their own messages, even to unpaid consumers.
- Twitter is a platform, open it up to allow any clients first class access.
- Stop political censorship immediately. It's fine to prevent scams and bot-nets, but do not stifle political speech.
- Lower burn rate. Cancel all of the product-oriented projects that are expensive, simply focus on building the infrastructure to make Twitter's platform as inexpensive as possible to maintain. I'd estimate 10% of Twitter's employees are actually needed.
- Be very cautious about ads. Do not compare yourself to Facebook for ad revenue generation. This is a long-term decision that will require adequate funding to undertake.
What Twitter really needs is to be bought up by some Wall Street type who can look at their books and do just that, and not much more.
1) Creating a twitter profile (with tweeting privileges) costs $5. Profiles without tweeting privileges are free.
2) Once you have 10,000 followers, you need to pay an additional fee per year. This fee increases exponentially as you gain more followers, e.g. famous people pay a lot. Unless this fee is paid, the follower count is capped and the follow button disappears from the profile.
3) Stop considering the number of active user profiles as a metric entirely.
4) Regular non-famous people can create profiles (without the option for others to follow them), but can follow famous paying users for free.
5) If a normal non-famous person wants to chime into the conversation, they pay a one-time fee of $5 to become a paid user. Now they can tweet and have followers. If they ever get too famous, they might have to pay again to unlock the ability to have 10,000+ followers.
This way you try to charge the users who actually have the money to spend. Let's admit, people with high follower counts like politicians do gain a lot from twitter, and would probably pay for un-mediated access to the population.
This also fixes the problem of junk / troll accounts.
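The "exponentially increasing fee" above could take many shapes; here is one minimal sketch, where the yearly fee doubles for every 10x growth in followers beyond the free cap. All constants (the $50 base fee, the doubling rate) are invented for illustration:

```python
import math

FREE_CAP = 10_000          # followers allowed before any fee kicks in
BASE_ANNUAL_FEE = 50.0     # assumed fee at the cap (illustrative)

def annual_fee(followers: int) -> float:
    """Yearly fee owed for a given follower count."""
    if followers <= FREE_CAP:
        return 0.0
    # Fee doubles for each order of magnitude above the free cap.
    magnitudes = math.log10(followers / FREE_CAP)
    return BASE_ANNUAL_FEE * (2 ** magnitudes)

print(annual_fee(10_000))      # at the cap: free
print(annual_fee(100_000))     # 10x over the cap
print(annual_fee(10_000_000))  # 1000x over the cap
```

A politician with ten million followers would pay 8x the base fee per year under this schedule, while ordinary users pay nothing.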
USERS: we love twitter but it has problems
TWITTER: great we'll fix them
USERS: do you want to know what they are
TWITTER: absolutely not
(18k likes, 14k RTs)
USERS: could you at least look at addressing the pervasive harassment of women
TWITTER [twirling like Maria von Trapp]: M O M E N T S
USERS: you're alienating the people who actually use your product
TWITTER: likes are now florps
TWITTER: timeline goes sideways
Revert it back to pure text, which can include any type of url
Make the tweets always load chronologically
open up the API
basically, turn it back into #OldTwitter from 2010
If you think this sort of thing doesn't happen, read this: http://blog.dilbert.com/post/157826468646/nothing-to-see-her... , or http://blog.dilbert.com/post/157201503761/freedom-of-speech-... . He's had problems with this for months, because of his political blogging, and this is just one example. If it can happen to the guy who made Dilbert, it can happen to anyone.
Maybe Twitter could use the hashtag system to accomplish this?
2. Make replies work better. Relax the character limit for replies to several hundred characters, make replies threaded, make low-quality replies go away, and high-quality replies float to the top (just like HN). Remove the line noise by having @ and # symbols not show up in the feed. Hashtags and mentions also shouldn't use up any characters. If all this happens, it will become reasonable to have actual conversations on Twitter.
3. Stop showing me duplicate tweets. Once I've seen a tweet, I shouldn't see it again if it's reposted (something many media outlets tend to do frequently).
Actions like this will make Twitter a better experience for regular users, and should help to kickstart growth.
4. Charge whales (those with the most followers, who disproportionately benefit from using Twitter) actual monthly fees.
- support multiple accounts on the website
- Twitter ads do not seem as valuable as other ad services, there is not enough reason to buy them.
- remove the bots
- Twitter need a way to lessen noise from talkative people. Something between muting and following. See point 1.
- order the tweets by most recent, like it was previously. Ordering by popularity ensures your tweets are dwarfed by the popular GIF of the day.
Yes, it is a bold move. It has a great platform ecosystem, but the amount of automation you can do is what removes the value from the platform. For example, followers mean nothing anymore, and auto-DMs from people I recently followed are an Ah-NO moment.
Instagram and LinkedIn have kept POSTing out of their API for the most part. One reason (of the many) they are thriving is because people know it's all handmade engagement.
- Pay video creators out the ass to get them to dual-publish from YouTube, and create auto-sync features that let them publish in both locations. Build in live-streaming functionality to compete with Twitch.
- "Async realtime". When watching a show, make it possible to replay Homeland tweets from the time you start. If you watch an Apple Keynote later, make the realtime tweets replay, and make it possible to add your own.
- Allow different engagement models. If someone has a whiff of abuse in their feed, make it trivial for them to see only verified + low-risk users. The moment someone sends an @message containing a single abusive word to someone they've never conversed with, crank the risk on them. If someone wants to engage with the firehose, make that the default.
- Make it easy to "import" feeds. I've had at least 3 friends ask me who to follow, and then we spend 15 minutes scrolling through my follow list, they manually look them up. When a new user registers on Twitter, I should be able to pick 3 people I'm most interested in following, and it should then recommend the people they like the most.
- It should be one-click to "super follow" someone, and get all their follows into my feed. Make it trivial to get an awesome, active feed. And trivial to reduce noise when I'm not interested in something.
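The risk-scoring idea above could be prototyped very simply. This is a minimal sketch; the word list, score increments, and threshold are all invented for illustration:

```python
# Crank up a sender's risk when they @-message a stranger with
# abusive language; let recipients opt into seeing only
# verified or low-risk senders.
ABUSIVE_WORDS = {"idiot", "trash"}   # stand-in word list
RISK_THRESHOLD = 5

risk = {}  # user -> accumulated risk score

def record_mention(sender, recipient, text, previously_conversed):
    """Raise sender's risk for abusive @messages to strangers."""
    words = set(text.lower().split())
    if not previously_conversed and words & ABUSIVE_WORDS:
        risk[sender] = risk.get(sender, 0) + 3

def visible_to(recipient_filters, sender, sender_verified):
    """Firehose by default; verified + low-risk only when filtering."""
    if not recipient_filters:
        return True
    return sender_verified or risk.get(sender, 0) < RISK_THRESHOLD

record_mention("troll", "victim", "you idiot", previously_conversed=False)
record_mention("troll", "victim", "total trash", previously_conversed=False)
print(visible_to(True, "troll", False))  # → False, risk crossed the threshold
```

The key design point is that the default stays open (the firehose), and only recipients who ask for filtering get the stricter view.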
- No character penalty for URLs
- Let people play with the data and metadata, exposing fake accounts is good for all
- Encourage bots to be bots
- Stump the chumps. Make this type of charade harder to pull off: @rea1DonaldTrump vs. @realDonaldTrump
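Catching look-alike handles like @rea1DonaldTrump is mostly mechanical. A minimal sketch: normalize commonly substituted characters and compare the results. A real system would use the Unicode confusables data (UTS #39); this only covers a few ASCII swaps, and the character map here is illustrative:

```python
# Map common ASCII look-alike substitutions back to the letters
# they imitate, then compare normalized "skeletons" of handles.
CONFUSABLES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "7": "t"})

def skeleton(handle: str) -> str:
    return handle.lower().translate(CONFUSABLES)

def looks_like(candidate: str, protected: str) -> bool:
    """True if candidate is a plausible spoof of a protected handle."""
    return skeleton(candidate) == skeleton(protected)

print(looks_like("rea1DonaldTrump", "realDonaldTrump"))  # → True
print(looks_like("SomebodyElse", "realDonaldTrump"))     # → False
```

Registration of a handle whose skeleton collides with a verified account's could then be blocked or flagged for review.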
1. Pay as much cash as you need to (even if it means laying off a few people) and hire one or two of FB's monetization and growth leaders, preferably away from a team like Instagram. Give them the freedom and resources to grow the team they want.
2. Based on what Twitter has done with the product over the past few years (i.e. not a lot), the product management team is too risk-averse. I would fire them and acquire a few startups to build a more aggressive PM team that knows when to listen to users/metrics and when to ignore them.
3. Partnerships team seems to be great. I would incentivize them to stay.
4. Twitter should invest in experienced engineering management to refocus the team. They have open-sourced four (four!) separate message brokers, and I heard that they had five internally recommended JS frameworks at one point. They should standardize on one boring stack for all new development and move all new development to the cloud.
I'm guessing Twitter has about as many users as it can ever hope to have, which means it's no longer about growth, it's about profits. That means it's time to cut costs, largely in engineering. You need a far smaller team to run a service and make incremental improvements than you need to grow a service aggressively.
Imagine Twitter would start offering users the possibility to create group chats where only the invited users can write in but that others could follow in real time.
So let's say there is an Apple keynote and @BenedictEvans, @asymco and @gruber start a group chat where they comment on the event. I get a notification that this group chat started and I can go and follow it (the same way I could follow a live video). I can't comment myself, but I can follow their conversation (and maybe hit the like button from time to time). Because it's a chat, the participants will write much more than they would if they were limited to tweeting.
It's crazy that in 2017 we can see celebrities interacting with each other via video, tweets, snaps, Facebook comments and so on, but there is no option to follow a real-time chat between them.
Allow people to publish more content natively, such as long-form writing and long-form video.
Occasionally do "announce" tweets that explain "well-known" twitter features to help people be able to set it up better. Twitter could learn from Slack or Discord in this regard.
Others have suggested cleaning up the engineering org, that seems like a good organizational idea, but is going to be tricky to pull off.
Implement a feature or set of features to discourage bandwagon-hating. Twitter has been instrumental in destroying a non-trivial amount of careers, sometimes over things that were mere bad taste. Perhaps a limit on the number of responses a tweet can have, or rate limiting on how fast a negative tweet loads, or something. I'm not sure, but having them put some serious UX research into this problem would be a great reputation boost, if we know they are doing it.
As a backend service, it would be nice if they focused on making your Twitter account into a sort of internet driver's license to identify you anywhere and everywhere online through a sort of trust chain. I never want to have to sign up for an account again unless it pertains to my finances. I so desperately wish I could manage all my various subscriptions and accounts for random services from a central place that is highly secure and easy to access. I should be able to sign on seamlessly, unsubscribe effortlessly, and never have to remember a username or password. This would also allow a central place for me to set privacy preferences, so we can dictate exactly what the downstream services should and shouldn't be able to see. If Twitter can just let me two-factor authorize with a token+pin and have this let me into just about any account online (aside from maybe my main email and financial accounts, in the interest of not having all the eggs in one basket), that's a service people would find indispensable. (So much so that maybe ICANN should just work on something like it as a public utility?)
On the more user-facing end, Twitter's niche has always been people who are keen on promoting themselves and making announcements (new paper published, new product announced, press releases, etc.), so maybe they should just fill in what Facebook was before it became a News Feed. They could give you an About Me page and a status bar. This basically is what Twitter is now, but they lack the focus to design the service around that stuff as its central purpose. They focus more on the status bar than the About Me; this would really just be a difference in design language and emphasis. Make it into an RSS feed for people.
Or, as a third option. . . they could just make Twitter into an RSS reader. Maybe even add Wordpress/Medium style pages for long-form writing and feed that all through the same feed paradigm.
Essentially making Twitter 'too big to fail'.
Tell shareholders that they're in for the long haul and that they can write off any chances for quick bucks.
Most probably - unfortunately - cut deeply into the employee base because there is no way Twitter could sustain the company size they are at today based on the product that they have.
Some of our friends are good at recommending a movie or a restaurant, and their opinions matter at a micro level the same way celebrity endorsements matter at a macro level. They would do more of it if they got paid like celebrities do, but at a proportional rate. Take the burden out of complicated algorithms to match ad content to users; instead, let users do it in between their conversations and pay for the content of their tweets. Make the normal user the spokesperson for the product/idea. Twitter is a very good medium for this.
This is native advertising at a whole new level. Financially rewarding users who are influential in their small circle might be difficult to implement. Take the ad money from businesses, local and global, and share it with users who say nice and constructive things about their campaign to their followers. It could be as simple as rewarding a user for retweeting a well-designed ad. Many users will get creative and make endorsements from their daily lives if it means their followers liking it. It's like design crowd mixed with advertising. Brands would pay for this model because it's driven by results.
Understanding user tweets, matching them to a business campaign, and fairly allocating the reward to top/all contributors might be a much harder computer science problem, but I think this model of advertising has the potential to work well for Twitter. There is so much for users to tweet about: local restaurants, movies, their brand-loyal purchases, etc. In exchange, their tweets get a financial reward based on how many people read and engage with them.
On the usability side, there's lots of room for improvement in terms of fostering meaningful discussion, which in turn would lead to stronger social ties between users. Addressing that issue would probably have to start with an effort to improve discoverability of accounts that engage thoughtfully with other users. So people who reply to tweets that earn hearts might show up in suggestions more often, etc.
I'd also work to discourage endless ICYMI repostings of big multimedia tweets and go back to a chronological timeline. If there's too much noise in a chronological timeline, that means too much clickbait/link spam is being posted, and that's the real issue.
From a revenue perspective, there are a bunch of options worth looking at: a Patreon model to encourage people with great insight to tweet more; more accessible paid analytics, baked right into the app that could help non-business users improve the quality of what they send out; an in-app store for subscribing to third-party add-ons.
Basically, at some point it's worth realizing that plenty of mobile users will spend some money for an improved experience. The constant focus on ad-based revenue makes money, but ultimately incentivizes the company to do things that make the overall product experience worse.
1. Focus on making the company profitable by cutting down on staff and resources. Seriously, Twitter doesn't need thousands of employees, a large HQ and all that other fancy stuff. I think a team of about 30 people could probably run it fine.
2. Get developers on board again. Open up API access, stop shutting down/blocking projects, etc. Make people feel like they could start a business on Twitter's platform, without the rug being pulled out down the line.
3. Get rid of the Trust and Safety Council. It's currently a bunch of left wingers who don't care much for freedom of speech, with groups like the ACLU suspiciously absent.
4. Improve moderation. Kick out terrorists and nutcases on the 'left', stop looking for every excuse to ban right wing users and generally treat everyone with respect all around.
5. Try and make the Android app more usable. Because at the moment, it's really awkward to use and gets rather slow at times.
6. Stop using verification and unverification as a punishment. Really, it's like Twitter is being as confusing as possible here.
7. Have the timeline set to how it used to be. Remove the 'show best tweets first' crap from any accounts unfortunate enough to still have it enabled.
8. Make things like URLs not count towards the character limit. I think Gab already does this, and it's very useful.
1) Add a golden heart.
2) Sell golden hearts to users.
3) Reward some golden hearts daily to users, perhaps based on tiered ranking.
4) Allow advertisers to gift golden hearts to users.
5) 'Promote' tweets with golden hearts and display them in Moments.
In short, allow peer promotion. Red hearts are currently being wasted as weak social signals and nods. This change blurs the line between ads and peer-promotion.
2) All those Twitter developer/publisher services which Twitter recently sold were IMO the real value at Twitter, Inc. Unfortunately, Twitter has burned developers too many times to be trusted. I would have made them independent rather than selling, though.
3) Rather than randomly banning users, focus on better filtering tools, and tools to coalesce spam/multiple replies/etc. If you make a popular tweet, or are the target of an attack, there should be a single "click more" link, rather than hundreds of separate notifications.
Add a feature that allows users to censor their feeds / remove @replies from "trolls".
Decrease engineering staff, increase outbound sales people.
Establish syndication rights with NFL.
That's the draw of Reddit (and even Hacker News). You immediately see what's really popular on the internet right now.
Twitter is easily the most negative place on the internet, and that's including madness like the Something Awful forums or 4chan. A downvote option would hopefully push the constant arguments out of sight too.
The long version - ads are great, but they cause a misalignment between the service users are happy with and the services necessary to monetize. In addition (not instead), I'd bring payments into the platform so goods can be discovered and purchased directly through the feed without the user having to leave the platform.
This would require quite a few changes throughout, but when they all come together I believe it'll bring the platform much closer to a Facebook-like status, where users spend more time on the platform as opposed to it being a "starting point" to finding interesting links.
Granted, I am answering a different question: from the CEO's perspective, they _need_ to do something because they aren't growing fast enough.
But as a user, Twitter is more enjoyable when it's niche. I have a circle of developers that share their projects and thoughts on software dev, and it's delightful.
Someone else said that it's a great resource for the medical community, I know that the hiphop scene is big on twitter, there's whatever the hell Weird Twitter is...
Twitter makes more sense as a series of specialized clusters based around specific communities, not as a Facebook where it's everything for everyone.
You have it in your power to truly differentiate your platform and make the world a better place by implementing controversial topic and filter bubble detection (per the paper we looked at yesterday), together with letting users see their polarity score (per today's paper) and making controversy-reducing / filter-busting follower recommendations (also per today's paper). This would be something new and unique in the world of mass media consumption, and could help to make Twitter great again.
How about it?
Provide additional analytics as a paid service for marketing. Charge for add-on services (like delayed/periodic publishing etc, running polls, etc).
I love Twitter, but it becomes less of a platform for personal expression and more of a machine-operated tool for propagandists and spam garbage when you just widely allow botnets. For instance, do a little digging into some of the accounts that constantly retweet Trump (Dems are no better). Maybe they tie back to alt-right blog-nets - not humans - which also managed to hijack the search engines to some extent. That ain't personal expression.
If they can't generate some new excitement, the BUMMER is, messaging is the future. I'll argue FB and everyone else will be known as messaging platforms - not a face book or social news feed.
Trolls, harassers and other bad actors all show up as _positives_ in Twitter's stats. Most of the UI features you hate probably cause upticks as well.
In practice, making a better Twitter might be a worse business plan than continuing to flame out, so this is unlikely to happen.
I'd expect it to be something between Facebook and Twitter itself. Nope, never Google+.
It needs a fresh look! By fresh I mean the design: the aesthetics of web Facebook Messenger, a modern, minimal, fresh look that Facebook lacks.
I'd want it to be a bit less minimal, but not as bloated as Facebook, hence my earlier suggestion of something between Facebook and Twitter itself.
It's stalled and boring, and at this point it looks like a driverless train that could hit a dead end pretty soon.
Does CNN pay Twitter every time they read someone's tweet on the air? I'm not talking about a "newsworthy" tweet (for example one from a politician's account), but CNN occasionally says, "Let's see what a random person on the internet thinks about this development." Then they prominently focus on a couple tweets. I think CNN (or whoever) should pay for that content.
1. Consider twitter a user's portfolio of interest channels and let us tab between our chosen channels immediately (multi-select box at the top where I can pick VCs, medicine, Design, Oscars, whatever - and it blends my feed for me)
2. Encourage floods of content and monetise curators filtering for quality - I can pay for a subscription to a feed of world news from WSJ, NYT, and other paid sources, and my subscription fee is distributed to them based on consumption. The best content wins and the quality editorials get rewarded for earning loyalty, not writing clickbait.
3. Enable paid advertising-free feeds.
4. Enable premium, niche feed advertising that is hyper relevant (If I have a spine medicine feed, an ad from Stryder would be very appropriate, but one from herbal remedies providers would be irrelevant). Building the curation mechanisms would draw top engineering talent in machine learning too.
5. Allow co-watching experiences during media events.
6. Allow me to filter out topics I want to avoid (and by doing that, you get more engagement and better ad targeting capabilities)
7. Open your developer ecosystem again and this time pay attention to what works and provide guarantees that you won't kill developer efforts. Those developers build bots for Facebook now and help their user engagement instead of yours.
The gist of it is: make your revenue model reward and improve quality. The moment you let advertisers lead you by the nose and dictate for obstructive anti-user product decisions, you will permanently lose your market to Facebook and others. I lead a hyper-niche collaboration network so happy to do a longer brainstorming session with Twitter people.
2. Fix the mess of UI. I still don't understand how to engage in conversations to this day. Convoluted modal boxes, overlaying other screens, that then expand out to more replies, and so on and so forth. It's WAY too confusing.
 https://www.w3.org/blog/news/archives/6156 https://news.ycombinator.com/item?id=13729525
2. Improve their ad product.
That's it really. It doesn't do anything now that it didn't in 2014, and the workforce is significantly larger. It'd be a decent, profitable company.
Or, make it a non-profit. It's in the public interest.
1. Move the chronological feed to the background, the feed should be sorted by relevancy not time. (If you're a power user you can click to the raw chronological feed.)
2. Right now you can only follow users and not interests. This makes it extremely hard for new users to get a sensible feed of content. If a mainstream user signs up for Twitter they are only going to spend a minute or so to set things up. Twitter needs to immediately add value for those users.
3. Use a machine learning approach to learn what a user is interested in based on email clicks. (Quora does a great job at that.)
4. Redesign all apps and simplify. A good example is their settings screen. Another is the crazy behaviour that you have to put a . in front of your tweet. Get rid of all those power-user features and settings.
5. Remove abusive bots and clearly mark bots as bots. Twitter is spending millions to facilitate people engaging in follow spam and other forms of spam.
6. Build up a dedicated team to make sure Twitter works for high-profile users. (I.e., do notifications and messages work if I have >10M followers?) They need a team on top of that to keep those users happy.
7. Some general tips: https://getstream.io/blog/13-tips-for-a-highly-engaging-news...
With that focus, I believe Twitter can return to growth in its user base. There is more that could be done to make the experience more engaging, for example, without interfering with the core experience. By mixing some suggested tweets into my feed using machine learning, Twitter could increase engagement. The new user experience would flow better with good use of machine learning.
In terms of monetization, it's about the data. Twitter APIs should be recognized as best-in-class, and access should be sold on a subscription basis on a graduated scale based on frequency of access.
There is a natural scale to core Twitter, and it might not be much bigger than it is right now. Sometimes we have to be content with what we've got -- which in Twitter's case is nothing to sneeze at. They shouldn't be going all "New Coke" getting into video and media in my opinion.
2. Focus app and website around two concepts: 'Now' and 'Here' -- temporally local, and spatially local.
'Now' would surface what's happening in the world now: major entertainment events, major political events, intermingling global, culturally-similar, and local. Show a stylized zoomable map to show what's happening around the world, so one can narrow or widen the locality of the world's pulse.
'Here' would invert this, showing everything that happened hyperlocal, surfacing recent popular and random tweets from where you are now.
Bonus for some visual eye candy that shows, perhaps as a Venn diagram, when 'Here' and 'Now' get closer and closer together to where if you're at a sports event, they're one and the same.
3. Keep everything chronological. For a network like this, 'Fear of Missing Out' is a feature, not a bug -- the anxiety should be palpable. For 'Now', sell ad slots for exact rotating times, like TV. This will drive demand for high-quality, high-cost brand advertising, instead of low-value mundane stuff. For 'Here', sell the ad slot to local businesses.
4. Open the API but charge a fee for access.
5. Use ML, identity, hashtags, and context to classify tweets into a limited number of categories/tags: breaking news, humor, insight, commentary, chatter, feedback. Expose these as a user-controllable filter on top of any existing view.
6. Disable most notifications. Make users want to return to the app without being nagged.
7. Only allow replies and DMs from people you follow and verified accounts.
Re-ordering the timeline promises the solution - but it doesn't work yet.
I would think you need "auto-group", which FB, Google, and others have tried and failed at.
But in any case - twitter is the place I feel like I have to go, but don't want to go, and I think I'm not the only one.
Implement smart/personalized lists by interest and suggest them to users, suggest new people to add to existing lists and/or tweets that may be relevant to that list. A bit like spotify playlists and smart radios, but oriented to tweets.
Basically, make it easier for people to find tweets and users they want to follow, segmented by interest.
Display relevant non-intrusive ads based on the interests on that list. They should take the hint from reddit regarding what "non-intrusive" means. Adding something like Reddit Gold wouldn't be a bad idea either.
Apart from that, a nice interface to follow live events and their tweets would be awesome.
2. Charge fees for this stuff
3. Make it easy to buy anything via Twitter
4. Get rid of the bots and AI-obvious trolling/threats/TOS violations. I find it astounding that despite years of promises to do something the situation seems to be getting worse.
5. Get a new fully-focused leader who can execute on these and other issues without distraction of a second company, and can also bring down headcount. This probably requires a reorg and a new board.
# Mass Monetization
1) Integrate payments and one-click purchasing, take a cut.
2) Host specific pages for live-stream events (not within tweets) like sports games. Target the remaining pieces of cable TV: Live sports, ESPN, Awards Shows, Olympics, Talk shows (e.g. Daily show, Colbert Report, etc.)
# Large Account Monetization
1) Charge for additional features, e.g. private/protected accounts, verification, having more than XXX followers.
2) Build tools specifically for managing a) large accounts and b) brands/customer service. Charge them for it!
# Data Monetization
1) Build API access for alternative clients that is free for a certain number of users (~10k) then charges on a per-install basis.
2) Partner with marketing platforms (e.g. Salesforce) to build marketing funnels from Twitter into CRM or marketing platform.
(Like any software company, offer lower pricing to charities.)
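The "free up to ~10k users, then per-install" pricing in the Data Monetization item could look something like the sketch below. The per-install rate is an invented placeholder:

```python
# Tiered API pricing for alternative clients: free up to a cap of
# installs, then a flat per-install charge beyond it (illustrative).
def monthly_api_charge(installs: int, free_cap: int = 10_000,
                       per_install: float = 0.01) -> float:
    """Charge nothing up to the cap, then a flat rate per extra install."""
    return max(0, installs - free_cap) * per_install

print(monthly_api_charge(5_000))   # hobby client: free
print(monthly_api_charge(60_000))  # → 500.0 per month
```

The cap keeps hobby clients free while popular third-party apps contribute revenue in proportion to their reach.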
Use that cash to get rid of ads (they are not working) and invest in more tools for publishers (who are now paying).
If that turns out to not be the case, then the strategy would depend on how far from profitability they are -- are we talking about minor tweaks to the business model? Or a major overhaul of entire company?
In short, how would I turn it around? I'd step in and do a large analytic effort on the status quo, and then react to the result.
I hate doing this but maybe they need to consider a "freemium" model where you get basic tweets at a certain rate limit for free but to do things like post long videos or images directly in the tweet you can pay to do that.
Consider charging for different types of searches a user can do.
Offer researchers and people using Tweets as a dataset for sentiment or other analysis a fee for real-time and direct access to data.
Also, stop messing with my timeline.
Cut costs and cut it by a lot!
Facebook isn't great because of how it looks, but because they have React, hiphop (or what's the name now), things like that; and that allows them to scale and build and iterate more quickly.
Twitter had Bootstrap and that was great investment IMO... now the Bootstrap guys all left. Why?
2. Live Interaction with Events, Games, Television, Radio etc. e.g. Polls, QnA, Sentiment etc.
3. Open Access to Developers to build Apps on real-time content.
4. Enable fact-check score methodologies on every tweet. Don't completely wipe out the trolls. As weird as it may sound, trolls make Twitter interesting.
5. The Ultimate messaging platform that replaces SMS/Texts with an identity that is not numbers.
Low latency seems to be Twitter's thing, cash in on that and make some speedy low latency workflow thing.
- clarify community guidelines
- threaded replies
- groups public/private
- add channels
- follow anything, focus on live
- then I'd buy Reddit & Imgur.
Basically, monetize the one thing that everyone wants to do on Twitter, which is go viral.
- Charge for longer tweets in the following way
- 200 chars -> $10/year
- 500 chars -> $10/month or $100/year
- 1000 chars -> $100/month or $1000/year
1. anonymous, free, limited use (300 tweets per month)
2. consumer, verified identity, paid, $10/month (1500 tweets per month)
3. commercial, verified identities, paid, $25/month per user (3000 tweets/month)
4. commercial, verified identities, paid, $100/month per user (unlimited tweets)
Based on comments in this thread, with some UX improvements the app could meet a lot of your requirements. Will gladly accept feedback, and willing to iterate.
I would pay serious money to use their historical data. It's a goldmine for machine learning research, finance, market research, news, politics, etc. I'm sure anybody could find a legitimate use for that much data from a social network.
Instead, I have to hack together a way to constantly collect tweets from within the past 2 weeks or use 3rd parties to access their data in any sane way.
Sell me your data! I want to buy your data!
Get the info you want from the sources you like at the moment you desire.
This is a fireball. It's shining bright no doubt, but it's a fireball all right.
As it stands now, I deleted Twitter simply because it's nothing but corporate accounts, overly aggressive SJWs posturing over every damn thing, and the only content I actually cared about was reposted from Instagram (apart from a few people I know who live streamed, but have since switched platforms). So now I only use Instagram.
They should have been able to release, for example, competitive offerings against Disqus, Signal, and Slack.
this places the payment model in alignment with who the actual beneficiaries of twitter are. it's a mass broadcast advertising/propaganda platform. let the propagandists pay for it.
1) Twitter is already used for financial news and real time financial trading of events.
2) Bloomberg has a huge financial data and news business.
3) Bloomberg would then be the sole provider of Twitter data, and the revenue from that alone could keep the product afloat.
The details are flexible, eg maybe don't charge for private DMs, maybe charge only for the first comment per user per thread.
But charge for use.
I would cut back on research (or at least bring it in-house) - $713 million is too much. If they paid each of their researchers 200K per year, they could hire 3500 of them which is insane.
2. Let users pay to DM certain accounts.
3. Mesh-networked solution.
4. Launch 'labs' version as sandbox for developers and users to experiment with (eg. encrypted tweeting, blockchain embedded messaging, proxied messages, etc.)
5. Twitter comms OS embedded on hardware.
You're never going to be innovative if your employees dread coming to work.
1. Trim the fat. Reduce the number of employees dramatically. Obviously not a graceful change but I feel there should not be as many people working there as there are now.
2. Focus on engagement, not growth. Twitter may not be growing in the way that the market wants, but the users that it does have are incredibly devoted to the platform. If I were to leave Twitter there's nowhere else I could go. If I leave Tinder or Snapchat there are many other platforms that can fill almost the same niche. Twitter needs to capitalize on that.
3. Make brands pay to have a page. In other words, if you're not an individual, you must pay to create an account. Savvy companies have realized that being on Twitter is a key part of a solid social media campaign. To not be on Twitter is to miss out on a huge opportunity to reach a very devoted audience, and you can't reach that audience anywhere else (#2). Some brands are already doing this well (e.g. Wendy's.) If the choice comes to paying for the opportunity to market on Twitter, and not market at all, companies will gladly pay. On the plus side, this could let Twitter reduce the interstitial ads on the timeline.
Everyone hates ads, but the way that brands have engaged with individuals on Twitter really humanizes them and makes people form more real relationships with them. It also forces brands to be more accountable and aware.
4. Bring back Vine. A huge part of Twitter's staying power is the unique culture it has created (#2). Staying power is what gives Twitter its greatest value to advertisers (#3).
5. Ramp up engagement on Periscope. Periscope being a part of Twitter makes a lot of sense because Twitter is all about stuff happening live. It's a great platform but I think it also needs a desktop client (with OBS support, the way Twitch does) to allow the caliber of content creation to go up.
6. Re-open APIs. Twitter has sown a bad seed with the dev community by making its API very restricted. Tweets make up a very interesting dataset on which other people could build very unique things on top of. Twitter should encourage this, not stifle it. "Look what cool things we can do with Twitter" will only serve to strengthen the image of Twitter as a unique, irreplaceable platform.
These are the main issues I see as an everyday user of Twitter. Things like live sports/TV are good ways to grow but these are all secondary to Twitter strengthening its core platform for longevity and meaningful sustainability.
2. Fire all the rent-seekers.
3. The 5 people that are left, keep the lights on.
I think of Twitter the same way I think of highways. It fulfills a huge market demand that the market isn't willing to pay for itself, so has to be subsidized in other ways.
Twitter is fine for what it is, and all it needs to do is stay consistent, not suck, not burn money, not force opportunities
2) Keep the timeline simple.
3) Better custom timelines, searches, and notifications.
4) Stop trying to copy Facebook, Whatsapp, Snapchat, etc. and just be the best Twitter possible.
You can't just turn a business around from your core competency, which in the case of Twitter is short bursts of emotions. You can't turn it into a Medium or Facebook, you'll fail miserably.
1) Twitter's ability to offer a good experience for discussions among a group of friends, the way Facebook does, is limited
2) Twitter can be a huge publishing platform
- Keep it simple. Stop trying to be Facebook and Snapchat and Youtube all at once.
- Better AI / search to enable/improve things like custom timelines and notifications.
- Optional paid accounts with appropriate benefits. Keep the cost low and don't penalize unpaid accounts.
This is the price they now pay.
Also, Twitter isn't so much a business as it's a hybrid speech platform/media outlet and moneyed interests shape it as they please to promote the agenda they want.
Seriously, Twitter can't be saved. They fucked up when they alienated every developer out there by making their API too damn strict. There's no way back from this.
Too often I have to take screenshots of a Signal convo, format it for Twitter, etc.
1) Stop abuse. 2) Find something new for engineers to work on. 3) Sell to a media company. 4) Don't waste time on BS like streaming NFL games.
- Suspending accounts for no reason at all.
- Shadow banning users by hiding their replies (they refer to certain users as "low quality").
- Aggressive censorship of alternative opinions.
The act of whittling down a tweet to fit inside the (increasingly ridiculous) 140-character limit is the exact kind of tedious, repetitive thing a game designer would instantly recognize as a "grind":
And free-to-play mobile games have demonstrated that lots of people, when presented with a grind, are very willing to pay real money to skip past it. So: give Twitter users the option to buy extra characters, usable whenever they're needed, at some price point low enough to be attractive as an impulse buy. A penny per character, say, or 40 characters for a quarter, or 120 for 99 cents. The marginal cost to Twitter of shipping 141 characters over the wire instead of 140 is essentially nothing, so whatever you charge would be almost 100% pure profit.
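The pack prices floated above can be sketched concretely. Here is a toy Python sketch (pack sizes and prices are the hypothetical numbers from this comment) that finds the cheapest way to buy n extra characters via simple dynamic programming:

```python
# Hypothetical pricing packs from the comment: (characters, price in cents).
PACKS = [(1, 1), (40, 25), (120, 99)]

def cheapest_cents(n):
    """Minimum cost in cents to buy at least n extra characters."""
    INF = float("inf")
    best = [0] + [INF] * n  # best[i] = cheapest cost to cover i characters
    for i in range(1, n + 1):
        for chars, cents in PACKS:
            prev = max(i - chars, 0)  # buying more than needed is allowed
            best[i] = min(best[i], best[prev] + cents)
    return best[n]
```

For example, a single character costs a penny, while 41 characters come to 26 cents (a 40-pack plus one single).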
A user with a bag of such extra characters in hand would now have the ability, if they wanted to, to skip editing down every tweet and just post on the fly. Which could be a real time-saver, if you're one of the media-type power users who spend all day on Twitter! And how much it costs you depends entirely on how often you want the luxury of not having to edit yourself. If you only need it occasionally, it's cheap; if you're compulsively logorrheic, well... consider it a tax on the burden you're placing on your followers' attention.
But wouldn't it ruin Twitter, you ask, if people weren't forced to be terse? I don't see how. When people use the extra space wisely, it makes their life easier, costs you nothing and generates revenue that can subsidize freeloaders like you. When people abuse the extra space, you can always unfollow them -- and when the abusers notice their follower counts crashing, they'll be encouraged to rein themselves in. Nobody logs on to Twitter in the morning with the objective of losing followers. The system would correct itself.
So: Twitter makes money, power users enjoy using it more, regular users get their freight paid for by the whales, everyone has access to longer-form expression with a mechanism already in place to still encourage brevity. It's a win all around.
Allow me to easily and permanently get rid of "In case you missed it" thing and read my feed on a strictly timeline basis.
I get a lot of junk in my feed that I don't want to see, and thus I don't go to it much.
Facebook is not as bad, but they've gotten worse. Two of my friends "like" some newspaper and then I start seeing the latest stories from that newspaper in my feed all the time.
I want to go to these feeds once a day and read what those who publish once a day or less who are my friends (Facebook) or friends/colleagues (Twitter) say, in timeline order. Any deviation from that lessens my desire to read it. Some of my friends publish to Facebook several times a day and I usually don't even want to read that, never mind the other junk that both put in my feed.
2. Start charging people based on how many followers they have. Twitter isn't worth much for the average consumer, but it's hugely valuable for people with massive reach. Charge them for it.
People are giving lots of product suggestions, but the product itself isn't the biggest issue. Twitter spends too much and makes too little. Patch the holes in the boat before you try to row faster.
I think Twitter has always been a completely ridiculous service and it's a poster child for this misguided iteration of Internet companies. If we just get enough users, we HAVE to make a profit! Turns out that isn't the case. The only thing I've seen Twitter accomplish is poisoning our collective consciousness with false information and a bad model of reality provided by an unsustainable system.
 If the author is looking, a few nitpicks from the website:
- A blank page is shown if browser doesn't load 3rd party scripts by default
- After enabling them, the scroll wheel doesn't work
- Typo: "free completely to use" -> "completely free to use"
- "Run PHP apps on the most secure platform available, Microsoft .NET" -> I also admire the platform but most secure? Comes off a bit too strong.
They chose to heavily modify the C#/VB compiler (Roslyn) to handle PHP syntax. Microsoft should be investing time into helping them succeed; it would be great to see the Roslyn compiler platform become more broadly used.
(which in the case of Wordpress can't be that hard)
PHP 7.2 will make libsodium a core extension, so if you use that, you can make use of SipHash-2-4.
By the way, does anyone know of a similar project where the JVM is targeted instead?
The NuGet thing is inefficient; MS should plot a path to make npm a first-class citizen in all MS tools.
Don't use the internal browser of your password manager, no matter which one you use. There's too much that can go wrong, and the small convenience just isn't worth it.
* 2016-08-22 Vulnerability Discovered
* 2016-08-24 Vulnerability Reported
* 2016-09-06 Vulnerability Fixed
Edit: Thanks for the informative replies, the links, and the advice. I'm going to explore all of my options and re-think this.
Any thoughts on Bruce Schneier's PasswordSafe password manager?
I posted it on here the other day but it didn't go far. It's like youtube-dl but instead of downloading videos it changes your password on various online services. If you get your password compromised by vulnerabilities or whatnot it makes it easy to mass-rotate your passwords. Could use some help adding support for more websites if you're interested.
I noticed their website is made entirely in php. Not that php is bad, but this is possibly the worst choice for a web platform that holds secrets. At only $12 a year, they probably aren't trying very hard.
I use Bitwarden for some things (lots of testing, nothing serious). Given its OSS nature, I thought it might have had more traction.
For reference: https://github.com/bitwarden
This is absolutely not correct, and elucidates little but the prejudices of the answerer. Philosophy of language has /not/ become linguistics, philosophy of mind has /not/ become neuroscience, and only a subset of natural philosophy has become natural science.
The philosophical questions discussed by Socrates have eluded the grasp of both dogmatic rigor and empiricism for twenty-four hundred years, and there seems to be absolutely no reason to expect this to change.
The answerer has either no actual grasp of the history or content of philosophy, or has simply decided, apparently by fiat, to discard all but the most narrow positivism-flavored slice as nonsense.
It doesn't help that many philosophers of mathematics are, for obvious reasons, either also logicians or mathematicians, so demarcating between advancements in philosophy of mathematics that clarify mathematics and advancements in mathematics that clarify mathematics can be a bit of a fool's errand.
Whatever the case, I dislike it when folks from the sciences or mathematics try to discredit or dismiss philosophy--funnier still, and luckily not as bad, is when they question the value of philosophy without realizing that question is in and of itself a highly philosophical question!
Philosophy has been around for a long time and isn't going anywhere in the perceivable future (though I suppose it depends on what metaphysics of time you subscribe to :) ).
A few notes from the above:
* the beginning of astronomy == plato's school
* the scientific method as falsification == Popper
* quantum theory / relativity / Heisenberg == positivism (If I don't see it (e.g. electron orbitals) it doesn't exist) (* e.g. complementarity)
* Einstein claimed that his reading of Schopenhauer was crucial to thinking about time, space, etc...
In essence, you are doing philosophy when you're re-evaluating your methodology and using evolving, reflective feedback loops to change your thinking.
I would hope so! The short answer is the philosophy of Math will help you determine whether what you're researching is true! Surely it would be very bizarre to research something with complete apathy regarding its truth value. A few examples:
The famous Peano axioms are widely used to prove such things as the commutative property of multiplication (ab=ba). But as the name "axiom" suggests, you just have to accept them as true or the whole thing crumbles. So why is it true that "0 is a natural number"? If this is false, much (all?) of math research is in big trouble! Does this suggest a sort of mathematical epistemic foundationalism? If so, what are its limits? When is mathematical research warranted, and when can we simply regard mathematical beliefs as properly basic?
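To make the Peano picture tangible, here's a toy Python sketch (all names are illustrative) encoding naturals as nested tuples, with addition and multiplication defined by the usual recursion. Commutativity of multiplication can then be spot-checked on small numbers, though an actual proof requires induction over the axioms:

```python
# Peano naturals: 0 is the empty tuple, succ(n) wraps n in a tuple.
ZERO = ()

def succ(n):
    return (n,)

def add(a, b):
    # a + 0 = a ; a + succ(b) = succ(a + b)
    return a if b == ZERO else succ(add(a, b[0]))

def mul(a, b):
    # a * 0 = 0 ; a * succ(b) = a*b + a
    return ZERO if b == ZERO else add(mul(a, b[0]), a)

def nat(k):
    """Build the Peano numeral for a Python int k."""
    n = ZERO
    for _ in range(k):
        n = succ(n)
    return n
```

Checking `mul(nat(3), nat(4)) == mul(nat(4), nat(3))` confirms one instance; the philosophical question is what licenses the leap from instances to the universal claim.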
Also, consider the realist/anti-realist debate [2, 3] which seeks to answer the question "are numbers, sets, functions, etc. actual features of the real world, or are they all just in our heads?" (or some refined variation thereof). If they are real entities, how is it that these non-causal things (like 5) lie at the very heart of the laws governing the physical, causal universe? But if they aren't real, then what possible explanation can one give for the perfect harmony of the physical world and these functions, that are ultimately all in my head? Moreover, why is belief in these unreal entities so widespread (I know of no "amathists")?
It is my understanding that philosophers have added a lot to our understanding of how the axiom of choice impacts logical reasoning about things which matter to humanity in concrete ways.
It bears some truth :)
There is also the case of Frank Ramsey and Piero Sraffa, who were the only close friends of Ludwig Wittgenstein, and who went on to make major epistemological contributions to economics (and Ramsey was a philosopher in his own right): Ramsey was the first person to really clarify the concept of a subjective probability, and Sraffa was central in the capital aggregation controversy.
Here in the "JSON Pure API" you see a reinvention of HTTP request and response concepts built into the API payload, leaving the implementation of negotiation up to the consumer of the API. You lose all the benefits of years of development that have gone into browsers and web servers to handle this for you.
The main problem with REST is that people tend to call any JSON endpoint they build a REST API (and hence the term, "RESTful") which leads to a misunderstanding of what REST actually is.
Re-implementing those same semantics in your own messaging protocol / format, without intertwining the concerns of the protocol and the message format, throws away any/all of those benefits. You need a protocol that guarantees that middleware can "look inside" the messages it's passing (or at least their metadata), in order for any of this to work. That's why HTTP has both a transparent part (req path+headers; resp status code) and an opaque part (req and resp bodies) to each message: the transparent part is there for data that affects middleware behavior, while the opaque part is there for data that doesn't.
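The transparent-vs-opaque distinction can be sketched in a few lines. This toy Python example (function names are mine, not from any real middleware) contrasts a cache decision made from protocol-level metadata with one that requires parsing an application envelope:

```python
import json

def http_cacheable(status, headers):
    """HTTP-style: decide using only transparent metadata.
    Middleware never needs to look inside the (opaque) body."""
    return status == 200 and "no-store" not in headers.get("Cache-Control", "")

def envelope_cacheable(body_bytes):
    """'JSON Pure'-style: the status lives inside the payload,
    so every intermediary must decode the application format."""
    envelope = json.loads(body_bytes)
    return envelope.get("status") == "ok"
```

The first function works for any content type without touching the body; the second couples every cache, proxy, and logger to the application's JSON schema.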
Note that that doesn't mean you're stuck with HTTP1. SPDY/HTTP2 is effectively an entirely different protocol, but it keeps the same semantics of requiring certain properties of the metadata tagged onto each message at the protocol level, so that anything that speaks the protocol can use that metadata to inform its decisions.
Twilio Conference 2011: Steve Klabnik, Everything You Know About REST Is Wrong: http://vimeo.com/30764565
REST APIs must be hypertext-driven: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...
Hypermedia APIs - Jon Moore: http://vimeo.com/20781278
Designing a RESTful Web API: http://blog.luisrei.com/articles/rest.html
Um...is this guy trying to implement REST clients with HTML forms? Does he know about ajax? Fact check: http://stackoverflow.com/questions/165779/are-the-put-delete...
Ever notice how nobody calls their API RESTpure? Instead they call it RESTful or RESTish. That's because nobody can agree on what all the methods, payloads, and response codes really mean.
In REST, I'm not sure that a lot of these issues are that contentious. I do think that some of the emphasized points in "RESTful" design practice can create more contention in API design, but that's the downfall of that one pattern, it has nothing to do with what Fielding described.
No governing body - at least to my knowledge - has convened to set things straight
Roy is probably a great guy and he certainly had a lot of great ideas. However, I dont believe that RESTful APIs was one of them.
I used to find it awkward to implement services in REST - actions that are triggered and may outlive the request cycle - until I started thinking of service commands as items in a work queue that get processed by a worker. So when a service is requested, I can see it as a resource being created.
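That pattern - a long-running action modeled as a created job resource - can be sketched like this (a minimal in-memory Python sketch; all names are hypothetical). POST enqueues the command and returns a pollable resource; GET inspects it; a worker drains the queue:

```python
import uuid
from collections import deque

jobs = {}        # job id -> job record (the "resource")
queue = deque()  # work queue consumed by a background worker

def create_job(command, params):
    """POST /jobs: enqueue the command, return the new resource's id (201)."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"command": command, "params": params,
                    "status": "queued", "result": None}
    queue.append(job_id)
    return job_id

def get_job(job_id):
    """GET /jobs/{id}: inspect the job resource at any time."""
    return jobs[job_id]

def worker_step():
    """One worker iteration: process the next queued job."""
    if not queue:
        return
    job = jobs[queue.popleft()]
    job["status"] = "running"
    job["result"] = f"done: {job['command']}"  # stand-in for the real work
    job["status"] = "finished"
```

The client never waits on the action itself; it polls (or subscribes to) the job resource, which fits REST's resource-centric model cleanly.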
The point about SOAP not requiring documentation makes no sense either. You'd still need to document what the underlying fields in the various endpoints are. (We build against a lot of terribly documented SOAP APIs and it's pure torture.)
In terms of PUT (and PATCH) not being extensively used - it comes down to your use case. For the idempotent micro-services we build APIs against, there is a massive difference in the behavior expected for POST/PUT/PATCH and it would be pretty burdensome (and limiting) to have to create parsing code on the server for POST.
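The behavioral difference mentioned here is easy to show. A minimal Python sketch (hypothetical in-memory store, illustrative names) of the conventional semantics: POST creates a subordinate resource, PUT replaces the whole representation, PATCH merges a partial one:

```python
import uuid

store = {}  # (collection, id) -> resource representation

def post(collection, body):
    """POST: create a new resource under the collection; not idempotent."""
    rid = str(uuid.uuid4())
    store[(collection, rid)] = dict(body)
    return rid

def put(collection, rid, body):
    """PUT: replace the entire representation; idempotent."""
    store[(collection, rid)] = dict(body)

def patch(collection, rid, body):
    """PATCH: apply a partial update, leaving other fields intact."""
    store[(collection, rid)].update(body)
```

Note that PATCHing one field preserves the rest, while PUTting the same body would silently drop every omitted field - which is exactly why collapsing the three onto POST forces the server to re-invent this dispatch in parsing code.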
I won't use REST again. I got an opportunity to use GraphQL recently in my profession and all of my projects will be using it in the future.
But at about hour 4 of wrangling a bearer token to authenticate your barely-documented PATCH request that is returning a strange error about your application/x-www-form-urlencoded body, you start to realize that APIs in theory are very different from APIs in practice.
(That being said, I don't love the "solution". It's very simplistic and it seems like the author doesn't really understand what he dislikes about APIs. I don't think we need another protocol, but rather higher-level tools for dealing with them.)
Isn't that just because RESTful is a play on words?
> For example, most web browsers have limited support for PUT or DELETE.
Really? How? If anything browsers restrict these methods to exactly how they should be used (DELETE can't have postdata but PUT can).
It's true that folks don't use PUT/DELETE much (the alternative works perfectly well though), but to me that's because they're unnecessary complexity, not browser support.
JSON-Pure is not much more than an encoding. RESTful is an entire protocol.
I once worked with an API where they implemented their own HTTPS, and because their own HTTPS didn't support gzip, they removed quotes from JSON keys to save bandwidth. JSON-Pure probably is better when working with knowledgeable people, but REST is better than what most people come up with when they aren't following REST.
I've recently started playing Rim World, which is essentially a Dwarf Fortress lite. I'm enjoying it way more than I enjoyed Dwarf Fortress despite it being a less complex (relatively speaking) game, because it offers a FAR superior interface and presents its mechanics in a friendlier way.
TLDR: DF is a great game that shaped my childhood and motivated me to become a programmer.
Other games I have played of a similar vein: Rimworld (great), Prison Architect (great), Banished (great, needs mods to add more content), Planetbase (EA, good, light on content after 2~ hours).
Others I found were: Stonehearth, Gnomoria, Town, Kingdoms and Castles (not out yet), Dungeon Keeper 1 and 2 (kinda) / War for the Overworld (fan remake of DK, essentially).
I dl'd it and started it up. I'm now wondering what I'm looking at. Not being a gamer, I don't have the visual vocabulary or expectations, so I'm glad there's a wiki; the in-game manual doesn't fully work. EDIT: My reading comprehension is not fully functional, the manual works.
> "Water's not doing it for me these days," he said. "I know it's bad, but the sugar goes right into programming the game. If I don't drink soda now, I get a headache and can't do any work."
I feel bad that he's sacrificing his health for our pleasure.
> He'd enrolled at the University of Washington, ... Tarn moved into a string of dingy one-bedrooms with bad moisture problems; in one, he discovered a shelf fungus growing behind his couch.
I probably lived in one of those when I was at UW. It was a "World's Fair" apartment building, garden level, the only window facing north and looking up an outside stairwell into the alley. Google maps shows me that much of all that has been scraped and replaced.
But they vary in terms of their approach this complexity: some seem to always want to add more, seeing more complicatedness as always better, and end up feeling like they contain everything but the kitchen sink - complexity for complexity's sake. (Nethack, I'm looking at you.) Others add it only where it's justified by producing interesting gameplay decisions. (Brogue and Sil are rigorous about stripping out unneeded complexity and getting the maximum amount of subtlety and nuance from a stripped-back set of mechanics. Dungeon Crawl Stone Soup is more complex, but seems aware of the trade-offs around complexity, and is known for removing features as often as it adds them.)
Which of these camps does Dwarf Fortress fall into? There's a lot of complexity, features and mechanics there. Is it all justified, in terms of adding interest to gameplay? Or is just for complexity's sake?
Archcrystal - 320 years in a fortress (w/spoilers read 37592 times)
Uh. The US... sorry, but if you spend like $1600 before food that is not "low expenses". If you can get below $1300 WITH food then I'd say that's low. Some people have to live with much less than $1000/month altogether.
here's a bug report about a turtle pond going extinct:
"Since turtles were my only source of shells, which are ever so important for moods, I am keenly feeling their absence after a few years. I went ahead and modded my world so that hoofs and horns can be used also for shells, (which is actually very cool btw, its awesome to have an artifact decorated with Elk Bird horn.)
However, my ponds were filled with turtles early on, and I accumulated many shells that unfortunately rotted away. It seems like fish populations are regenerating, but turtles are not among those, and since I found that odd I decided to report it."
"Pond turtles lay eggs, which might contribute why they will only breed in off site statistics if that's any relevance.
DF structures used for dfhack only implies that eggs that are hatched go into certain classes, and entity ID is one of them for intelligent species (crow men eggs layed and hatched on site belong to you etc., underground egg laying races have been in constant decline since they were added because of this until recently)
Egg layers have always had a hard time repopulating due to dependency on a object to breed which limited and stagnated them in world gen (made easier by spontaneous population regeneration in world-gen recently). I cannot recollect if its possible to offer pet pond turtles nestboxes to use by pitting or pasturing enough of them in a contained area as a alternative or even if they have additional orientation/marriage barriers to overcome we are not aware of.
All gendered vermin breed (or apparently breed, they have the prequesite animal tags but its uncertain whether they become pregnant whilst in the game world before leaving for the site population tally and being replaced with a new generated creature in their momentary existances, or even if they do become pregnant at all with child/adult born states) hermaphrodital or non-gender typical vermin are usually accounted for by being virtually innumerable to compensate for no breeding on site. Technically if the female pond turtle could get off the map by dissapearing and being replaced whilst pregnant it could spawn additional turtles slowly.
Fish are very visible with ASCII symbols and can be seen in murky ponds and rivers for periods of time if a example is needed, if they are close together they are in the capacity to breed and keep the numbers up. Drop a sizable amount of caught vermin fish into a empty pool and it should sustain the fish in theory as they repopulate with compatible mates in that area.
Similar designs have been used with isolated cave spider rooms with wild vermin which appear to be self sustainable and harvested with burrowed animal trappers & web collectors. "
Gnomoria and Rim World might be simpler games to learn. However not as deep or complex.
Most of that content is uninteresting, free with an antenna, and loaded with commercial breaks. I would rather spend that money on a season pass on Amazon, or on going out to see a movie once a month. If money weren't an issue, my time is still better spent on Netflix or watching lectures by smart people on YouTube. I guess it's fine if they want to attract the geriatric crowd, but I can't imagine people in my generation paying that much for a vastly inferior experience. As others have said, a cheaper day pass would be better, as there still is a place for live content. Whether or not that should be a form of life support for the old-guard corporate media empire, that's up to you.
First off, there are really two main reasons for someone not to have a cable subscription:
1. They don't care about the content
2. Price sensitivity
At Aereo, we saw these two mix together to create hugely elastic demand.
For a while, we offered a $1 "day pass" that would give you access to live TV for 24 hours at a time.
During the Super Bowl and various award shows, we had crazy numbers of people sign up for these day passes. We actually had to stop offering them, because we literally couldn't build out the extra capacity in a cost-effective way (remember, we needed distinct physical antennas and transcoders for every user we served).
It was tough enough to get people to pay $8/month for access to live broadcast TV and a cloud-based DVR. I have no idea how YouTube will convince anyone to pay $35.
If they can work out the licenses, I'd imagine something like a day pass would work well with consumers but it's probably hard to get the economics of that to work out.
Yup, this is going to be exactly like Cable television, and it's no cheaper. No thank you, I cut that cord for a reason.
Maybe if it had no ads? But I'm sure it'll be live television with the ads. There's no point.
Does YouTube care about getting content to the world? Or just getting as many of their fingers in the pie as possible and abusing the current geo-restricted licensing model while they can?
The day will come when TV streaming businesses takes over and removes the virtual location barriers set by the industry.
If they had a way to pick channels and only pay for those, that would be worth looking at.
EDIT: Do you still have to watch commercials? I don't see anything about commercials. If there are commercials then not just "no", but "hell, f*ck you for asking, no".
It seems weird that Google aren't able to create/negotiate something relevant for today. All the things I talked about are becoming common for normal TV providers in Norway.
What's included in "& more"? I can get the listed channels from an antenna.
Both of these services seem like an interesting step forward, but then, who really wants to watch live TV anymore, besides sports fans? The whole idea of watching something at a specific time that it's aired just seems unimaginable to me after years of on demand streaming. Not to mention commercial breaks.
If I want ONLY the Science channel, then I should be able to purchase JUST the science channel for like, $5/mo.
From TV to internet to TV on internet to internet TV on any device. One thing I wonder about is how people who purchase "bundles" traditionally will react. I'd imagine they are the biggest consumers of TV programs (TV + Internet or TV + Phone + Internet style bundles). Will it end up costing the same when you break your bundle to only get internet from a provider and get "TV" from YouTube?
Another question I have is about the "cloud DVR". How does it work? Is the content already on a server somewhere, so when I hit "DVR" it just tags that? It makes no sense saving the same content multiple times because multiple people tried to DVR the same episode, right?
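One way a cloud DVR could avoid per-user copies is content-addressed storage: keep each recording once, keyed by its hash, and make every user's DVR just a set of references. This is a hypothetical Python sketch of that idea, not a claim about how YouTube actually implements it:

```python
import hashlib

content_store = {}  # sha256 hex digest -> raw bytes (stored once)
user_dvr = {}       # user id -> set of digests that user "recorded"

def record(user, episode_bytes):
    """Tag the content for this user; store the bytes only if new."""
    digest = hashlib.sha256(episode_bytes).hexdigest()
    content_store.setdefault(digest, episode_bytes)  # dedup: one copy total
    user_dvr.setdefault(user, set()).add(digest)
    return digest

def playback(user, digest):
    """Serve the shared copy, but only to users who recorded it."""
    if digest not in user_dvr.get(user, set()):
        raise PermissionError("user never recorded this")
    return content_store[digest]
```

With this scheme, a million users "recording" the same episode costs one stored copy plus a million tiny references; whether licensing terms allow single-copy storage is a separate question.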
Is a lot of modern TV already delivered over the internet? If not, won't this cause an increase in internet bandwidth used? Maybe it's not significant, but I'm curious about it nonetheless.
There was a talk on HN about cell phones and FM being enabled on it. Will a similar thing happen on TV, i.e my TV won't work without internet in the future?
Do privacy concerns increase with this? Is it easier to track users' viewing patterns and whatnot with this as opposed to traditional TV? Will it be more likely that people will post the episodes or clips they watch to YouTube, or will it be less common as it will be even easier for YouTube to recognize and flag stuff? (I wonder if YouTube will provide a tool to post tiny clips directly from TV so people can have discussions and whatnot as well.) Thanks in advance if anyone takes time to answer any of my questions!
That's a serious chunk of coin for something that you can get for free with an antenna.
I wonder how much of that is licensing. It's got to be a huge chunk of it.
Also, who watches any of those crap channels anyway?!
1) Will it have ESPN? I only really care about ESPN on a daily basis for TV.
2) "Never run out of DVR storage" -> Will I be able to easily save recordings of any show on Youtube TV? I value this a lot for particular sporting events and have years' worth of footage backed up.
Why the f* internet content still depends on where I live in 2017.
Are they including AMC, MTV, etc.? I don't think so, or they would say so.
P.S. This is a serious question. Please, someone help me understand how they are charging for free channels just to drop their ads (presumably)?
Also, that wording...are they getting the whole regional network or just the NBA basketball?
As a cable cutter, one thing I am really looking forward to is the day that I can flip channels again very easily, without having to think about what I am doing.
With Ads? Ouch, NO.
I do find it interesting though; Youtube/Google is taking all the steps to be the next Time Warner or Comcast it seems. I wonder if this proves that that industry isn't impossible to break into.
(youtube.tv redirects to .com)
Even still, it's great to see that things are still moving on smoothly (and the new logo looks really nice!).
One potential issue:
"If you have lots of back-and-forth between WebAssembly and JS (as you do with smaller tasks), then this overhead is noticeable."
As far as I'm aware, asm.js code does not have an issue with this, as it is just js code. Is this correct?
(edit: I should have mentioned that I'm primarily interested from an electron.js point of view at the moment, where Firefox asm.js optimizations are unavailable)
U.S. border patrol is getting more ridiculous by the day, but considering the ignoramus idiot at the top, it is not surprising. It's good that more and more horror stories are surfacing.
> I will not make any further comments on exploitability, at least not until the bug is fixed. The report has too much info on that as it is (I really didn't expect this one to miss the deadline).
Worth mentioning that "Goes Public" implies there was a human who pulled the trigger; it was a bot:
> This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.
> Deadline exceeded -- automatically derestricting
What's up with them not being able to patch on time? How is 90 days not enough to get a patch out the door? That's a quarter, for goodness' sake!
> On balance, Project Zero believes that disclosure deadlines are currently the optimal approach for user security - it allows software vendors a fair and reasonable length of time to exercise their vulnerability management process, while also respecting the rights of users to learn and understand the risks they face. By removing the ability of a vendor to withhold the details of security issues indefinitely, we give users the opportunity to react to vulnerabilities in a timely manner, and to exercise their power as a customer to request an expedited vendor response.
I see nothing close to Google trying to get MS. Instead it is what should be done.
Now, if MS did things like Scroogled and replaced YouTube with their own service, I probably would not be so nice.
Look at how Amazon will not allow Chromecast to be sold on their site. Personally, I would have removed Amazon from the search engine, but Google didn't.
Look at Uber. If I were Google, I would use my power to destroy them, but Google doesn't.
Feel how ever you want about Google but let's at least be fair.
Is this a common pattern in the bug world? Publicizing a critical bug after 90 days of no response?
I wasn't sure if I missed a sign of notification, or if vendors are automatically cc'd/whitelisted on restricted bugs for their products.
When I was a speedcuber I could use three algorithms to solve a cube blindfolded after memorizing where each piece needed to go: one to flip two edges, one to rotate two corners, and one that switched two corners and two edges at the same time (T-perm for you cubers).
Then it was just 1. orient the edges and corners in a way that makes them easy to move around the cube and 2. move the pieces where they need to go.
This is a very rudimentary strategy, and there are MUCH faster ways to solve the cube, but this is all you need.
A commutator is any sequence of moves in the form of A B A' B', where A and B are sets of moves, and A' and B' are those sets of moves undone. So this example basically just restricts B to consist only of top-layer moves.
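The A B A' B' structure is easy to see with plain permutations. Below is a hedged toy sketch (numbered positions rather than real cube move notation): when A and B overlap in only one position, their commutator disturbs just a handful of pieces, which is exactly why commutators are useful for targeted swaps.

```python
# Toy model: a "move" is a permutation of positions, stored as a dict.

def compose(p, q):
    """Apply p first, then q."""
    keys = set(p) | set(q)
    return {k: q.get(p.get(k, k), p.get(k, k)) for k in keys}

def invert(p):
    """The move undone: reverse every mapping."""
    return {v: k for k, v in p.items()}

def commutator(a, b):
    """A B A' B'."""
    return compose(compose(a, b), compose(invert(a), invert(b)))

A = {0: 1, 1: 2, 2: 0}   # cycles positions 0 -> 1 -> 2 -> 0
B = {2: 3, 3: 4, 4: 2}   # cycles positions 2 -> 3 -> 4 -> 2 (shares only position 2 with A)

C = commutator(A, B)
moved = {k: v for k, v in C.items() if k != v}
print(moved)  # {1: 2, 2: 4, 4: 1} -- only three positions cycle; the rest are untouched
```

Even though A and B each move three pieces, A B A' B' nets out to a single 3-cycle, which is the kind of surgical change blindfold methods rely on.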
It really is quite accessible now with online guides and videos. I think about an hour a day for 5 days took me from never having solved a cube to being able to solve any starting configuration in a little less than 3 minutes.
If like me you ever had a cube you never beat as a kid, it is definitely worth revisiting.
For example, if you wanted to make a 3d model you might make a bunch of little cubes each centered in their own object space and then translate them to their location in the cube. If you do that, you have to track the orientation AND location of each piece. However, if you center the entire cube at the origin and then place each piece in its place relative to that origin, all "moves" simply rotate a piece around some axis which both changes their orientation AND moves them relative to the cube center. As such, position is redundant information.
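A tiny sketch of that point (hypothetical coordinates, no particular graphics API assumed): with pieces stored in cube-centered coordinates, one rotation moves the piece and reorients it at the same time, so no separate position bookkeeping is needed.

```python
# Pieces live in coordinates centered on the whole cube, not per-piece spaces.
# A 90-degree face turn is then a single rotation of those coordinates.

def rot_z90(v):
    """Rotate a point 90 degrees counterclockwise about the z axis."""
    x, y, z = v
    return (-y, x, z)

corner = (1, 1, 1)          # a corner piece of a cube centered at the origin
print(rot_z90(corner))      # (-1, 1, 1): the rotation alone carried it to a new corner

# Applying the quarter turn four times brings the piece back home.
v = corner
for _ in range(4):
    v = rot_z90(v)
print(v == corner)          # True
```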
I'm not sure how relevant this is, but to me it seems to point to alternative ways of finding a solution via computer.
But this is probably very old news to people who study the cube.
If you're solving using commutators now and are looking to upgrade to a faster method in the same spirit, check out the Heise method. It's also an intuitive method (no memorization required), which starts off with block-building and finishes using commutators. I made the transition from commutator-only to Heise and am enjoying it, and I'm still very far off the speed cap for Heise. (Its creator reports averaging ~30sec: http://twistypuzzles.com/~sandy/forum/viewtopic.php?p=45076&...)
Fun problem; terrible assignment. I had to scaffold it so much that I was essentially giving most of the solution.
Three features I'd want before using it:
1) Rather than triggering Sedy immediately on a reviewer comment, I'd like the trigger to be the original requester reacting to the comment with a thumbs-up. The requester knows what they're trying to say, and they should decide if the changes get made.
2) I wish there were an option to restrict it to comments for supported languages. Your examples are just changing markdown (not code), and I think rightfully so. I can easily see this tool becoming a way for a senior dev reviewer to avoid the back-and-forth with a junior dev by just posting some complex code substitutions... substitutions which could easily screw things up.
3) sed replacement actually seems too powerful for this job. For instance, if I want to make a replacement like:
s/**bold** thing/**bold thing**/
I feel like other sed expressions might be even more useful in this format. For example:
200i #TODO optimize this
s/.*goto.*/cowsay \0/e
T
s%^%//%
s%\n%\0//%g
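One wrinkle with the bold-markdown example above: `*` is a regex metacharacter on the match side of `s///`, so the literal asterisks need escaping for the substitution to actually fire. A quick sketch:

```shell
# The pattern side is a regex, so literal asterisks must be escaped;
# on the replacement side they are already literal.
echo '**bold** thing' | sed 's/\*\*bold\*\* thing/**bold thing**/'
# prints: **bold thing**
```

Which is itself a nice illustration of point 3: reviewers would need to get quoting and escaping exactly right for the bot to do what they meant.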
For those simple types of changes, I like to amend the original commit, rather than make a new one. Of course, I come from using `git send-email` to send patches to a mailing list, where you are expected to send "[PATCH v2]" after you get feedback.
I will not use this, but I can say this is an amazing piece of hackership.
-- old text
++ new text
Github will highlight that as a diff.
Maybe do the reverse: if a change is made through an inline edit, automatically add a comment with the specific change that was made.
I can just foresee undesired consequences: someone not escaping their sed correctly (i.e., they try to replace a string containing a slash or apostrophe).
Most of the people I know want to write Java/Python/Ruby/Elixir/Scala/C#/Clojure in the browser: high-level scripting/VM-based languages which require no memory management and have high-level features and data structures. As far as I can tell, these can't target WebAssembly since they don't target LLVM (and you certainly will never be able to compile a language like Ruby to LLVM).
I'd love to hear some of you guys talk about what uses you have for Webassembly and what exciting things it will let you do.
1) Oberon had an idea of 'slim binaries': portable binaries that could be compiled into object code in a single pass. Loading would be almost as fast as loading a file. Slim binaries were designed to be slim and fast.
2) Then came Java, the JVM, and Java applets. People said that the JVM is the platform and Java is just one language using that platform. Compiled Java objects were not so slim and fast to recompile :(
3) Now we have WebAssembly. Let's hope that WebAssembly is actually a slim and fast representation like slim binaries.
All the arguments in favor about saving bytes and offline compiling would seem like only short term gains since network, cpu's, and memory sizes are going to continue to improve.
And, it's certainly not Flash or Java Applets all over again since there are multiple competent vendors in the mix. Yet, I fear a new wave of unconstrained, impenetrable code schlock will flow from content creators once this thing hits the mainstream.
Oh, how quickly we forget.
I don't know what number crunching web applications the vendors are thinking of. I want wasm for client-side web programming without JS.
Just a few small typos to point out:
"Executing" section - "...know how to write code so that they compiler can type specialize it" - the compiler instead of they compiler
Some things to consider: China has been working up to a space capability to send people to the Moon, with the full backing of government funding, by 2035. They started in 2003. SpaceX was founded in 2002, and they are saying they will fly someone around the Moon next year? Dragon has the delta-v to land on the Moon (not sure if it has enough to get off again, though), and SpaceX certainly has the expertise in building spacecraft that land.
The next person to take a picture of the Earth from moon may not be on a government funded mission. That one really blows my mind. For so long it was only countries that could do something like that, now it is nearly within reach of individuals.
The UN has treaties about claiming (or not) the moon by a nation state, but there isn't anything about a privately funded and established outpost that wants to declare independence. All this time I imagined that some country would establish a base there, and grudgingly offer up some space for non-state use, and now there is this possibility of a private facility that states have to ask permission to visit? That is priceless.
1. In a comment about the announcement, alluded to it as a "recurring dream"
2. 5 years ago, described a moon orbit as "when I plan to fly in space. I have two specific missions in mind"
3. SpaceX Board Member and investor
4. Has the money
5. Knows Elon "Mr Musk declined to reveal their identities, only saying that they knew each other"
6. Is "nobody from Hollywood"
7. Liked this comment on his FB wall "Can I tag along?!? Ahhhhh!!!"
Can't wait to hear who booked this trip! Definitely one of the coolest ways to spend a lot of superfluous money :)
Last month I was again at the KSC and LCC as a tourist, and the energy was just a minute fraction of what I'd seen 20 years before. We need this kind of vision [from SpaceX and others, e.g., like this other NASA-based article today with the young engineer comments, who did the hydroponics in microgravity at https://news.ycombinator.com/item?id=13743196 ] to push science and technology beyond the video game and entertainment markets. Congratulations to SpaceX, the microgravity hydroponics engineer, and the others with vision who are once-again elevating the bright eyes of brilliant youth, scientists and engineers.
However, it is worth noting that there hasn't been a single crewed Dragon flight yet. There are demonstrator flights scheduled for this year, though, with the first NASA crewed mission slated for May 2018. That's an incredibly aggressive timeline, but if anyone can achieve it, SpaceX can.
The long duration flight beyond the moon will be a fantastic proving ground, however.
I like how they have avoided committing to the much harder "landing on the Moon and then return" scenario.
Is Musk still maintaining a relationship with Trump? When Uber founder Travis Kalanick left Trump's business council, Musk was still on it AFAIK. I wonder if Musk is doing this or announcing it for related reasons. Certainly Trump has a history, even in his short tenure, of pressuring businesses into announcements that suit his agenda. And the announcement seems to fit Trump's pattern: Impossible, brazen bravado. (Musk gives the impossible some credibility, but that's what is meant by lending someone your credibility.)
It's speculative, but it's also sad and a bad sign when we must look for government interference in the free market at this level, to provide propaganda for the President.
I'm cheering for SpaceX for doing more towards spacefaring, but I'm very skeptical and think this will, at least, end up being negative PR to them, and, at worst, a lot more.
Does anyone have a rough estimate how much a manned mission to the ISS currently costs?
Seems to me like the cost of taking on another person will be negligible in comparison to the funding they could contribute. This is literally a once-in-a-lifetime experience.
SpaceX at its usual :) . By what criteria is Energia a less powerful vehicle for reaching orbit than Falcon Heavy?
Also, if this succeeds, what happens to Google's moonshot projects? Is rebranding in the works?
Shocking that it's been this long. There is an entire generation that hasn't seen man make it into deep space.
I find it interesting, because usually conspiracy theorists can't really be presented with enough hard evidence to replicate the scenario in question.
Except that these two private citizens are presumably absurdly wealthy. Whereas the nationalized space program which brought forth the Apollo missions gave all private citizens, as well as schoolchildren for generations, hope and aspirational outlooks.
Whereas the current national situation in the US, with respect to primary-school education and government-supported science is quite dire. So things are not at all hopeful right now, and many of us suffer nightmares of violence and deportation.
So, there's that.
Is anyone else imagining the mission is going to discreetly drop such a module when it's in the moon's shadow, or do I just have an overactive imagination?
I really root for 'em, even though I know that China started working on a similar trip back in '03 and they still haven't made any public announcement or published a precise launch year...
It's amazing that private companies are now doing things that were previously only done by governments and nations.
I don't know how this will work out but congratulations to Musk, Spacex and NASA.
If you can do it with a Dragon, what niche is Orion left with?
I imagine that would be a pretty easy record to break, if you're doing a translunar flight anyway then getting a bit higher doesn't take much more energy (source: played a lot of Kerbal).
On the other hand the passengers might prefer a close-up view of the Moon to a record.
What if the Sun has an SEP event during that period? Everyone on board would die within hours to days from radiation exposure.
We presently have absolutely no way to predict when this will happen, nor to protect a ship in case it does.
The Moon missions were done before we knew of the existence of SEPs, and fortunately we were lucky... but we are not supposed to just rely on luck now that we know they exist.
that's cool, but kerbal-easy
Sergey Brin and Larry Page
Two of his friends, rich enough and geeky enough to go first.
A massive crowd will be assembled to attempt a Guinness world record: mooning the stars with bare asses, all in unison, in a soccer stadium, just as they blast off into space, yelling out like the Romans did at the Colosseum, "We salute you, those who are about to DIE!" Then post it on YouTube!
There's some fantastic stuff in here about how great design is the key to increased productivity. For example:
"It is very important for a designer to recognize all the parts of a design that are not easy wins, that is, there is no proportionality between the effort and the advantages. A project that is executed in order to maximize the output, is going to focus exactly on the aspects that matter and that can be implemented in a reasonable amount of time. For example when designing Disque, a message broker, at some point I realized that by providing just best-effort ordering for the messages, all the other aspects of the project could be substantially improved: availability, query language and clients interaction, simplicity and performances."
redis itself is a masterpiece of pragmatic design - the feature set is brilliantly selected to make the most of what you can do with shared data structures exposed over a network. Let's talk about that.
* For most of my past clients, the skill / output of their programmers was not the bottleneck, even though they thought so. As long as something is not a bottleneck, there's not point in trying too hard to optimize it (since you can get better ROI somewhere else).
* Software is a team effort. Improving how the team works together / how work flows through the system probably has a bigger impact than raw programmer output (unless you are already very good at that).
* Improving the quality of your software (minimizing defects and rework) will improve the output of everyone in the team, regardless of how good they are.
* I have heard of cases where removing the "top programmer" from a team made the whole team more productive, even though an important person was missing. I don't have data to back that up, though.
Update: Thinking more about this... I have a talk called "Your Company Will Never be Agile", where I talk about how most companies actively prevent their people from doing a good job (by having policies, procedures and a company structure that is not suitable for empowered teams). And then, those same companies complain that they cannot get good people and how all the hip companies can get the 10x programmers that "we cannot hire".
I don't have an English recording of the talk, but I started a series of blog posts about it: http://devteams.at/your_company_will_never_be_agile_intro . I should maybe finish it some day ;)
I've seen cases where one ostensibly 10x developer comes in and solves 90% (the easy parts) of a problem. Management love him. Then he moves on to other projects and leaves a team of "1x developers" to deal with the 10% (which management still insist on having). This team now have to re-write everything this superstar did from the ground up without taking shortcuts this time. The time it takes makes them all look like 0.1x developers.
> Often complexity is generated when there is no willingness to recognized that a non fundamental goal of a project is accounting for a very large amount of design complexity, or is making another more important goal very hard to reach, because there is a design tension among a fundamental feature and a non fundamental one.
It happens all the time that requirements are very complex. Junior programmers will fail to implement them. Better programmers will manage to implement them, but it'll take too much time to develop and especially to maintain their solution.
Experienced programmers recognize what's happening and have the personality to stand up to the project leader and get a simplified version of the requirements accepted.
It's also why very large software projects fail, especially the type that is intended to save costs by replacing many different existing informal systems by a unified one. The requirements will be ridiculously complicated (have to do everything all the projects to be replaced do), and nobody in the software development part has the power to change the organisation first.
The lead on my current project micro manages every point of the code's architecture. I have no freedom to make any calls, even on legacy parts of the code that could be refactored to better fit new requirements. None of the other developers "own" anything so no one can make any calls. Anything takes a week or more to be discussed. Arbitrary non-obvious decisions have been made in the code base and it's up to you to figure them out. I am left fixing simple bugs and building things very slowly, treading very carefully rather than making it right. I am now a 0.8x programmer.
I look at my week's schedule. I see that I have a lot of meetings and checkups that, while important, are unrelated to my current project and will only take a chunk of my time and energy. I am now a 0.6x programmer.
I work in an open space office where terrible music is played through its sound system the whole day. I have a hard time focusing and staying focused. I am now a 0.4x programmer.
While I enjoyed the article, I think it overlooks the fact that a developer's efficiency is also often a factor of their environment (not just physical, but the project itself too). I've been 5x, I've been 0.1x, and the biggest contributing factor from project to project has been my environment. My experience and knowledge is also a factor, but this changed slowly, over time, while environment changes can mean I'll go from being super productive to very unproductive in a month. More managers and leads need to be aware of that.
I resisted believing in this phenomenon for a long time, especially because I'm no great shakes myself. But in the end it could no longer be rationally denied.
I am today, a 10x better programmer than I was where I started. In terms of quality, complexity, efficiency, readability, maintainability, everything. I was paid too much when I started and/or not enough now!
Notch and Carmack are 1000x better game programmers than I am. Linus is a 1000x better file system and operating system programmer than me. Monty is a 1000x better database programmer than I am. DHH can build a website at least 10x faster than me and do it in a way that would contribute 100x more to the community than I could.
If you discard people with decades of experience. If you discard people who have specialized. If you discard the many geniuses in our field. And then if you start to make excuses at the other end, and if you narrow it to a specific set of tasks, with a specific set of complexity, then maybe there isn't a huge gap. But even then, I feel that if you apply yourself to that task for a decade or two, you'll find that you're a 10x better programmer than you used to be.
Too many people approach "programming" like it is a simple execution of ideas. It's more art than execution and I've been inspired by the creation of many amazing programmers in my long career. And in 35 years of developing technology to solve real-world problems I've managed to have a couple nice ideas that inspired others.
To me, coding is a creative process. Often I will code for many hours straight without a break, and I will "wake up" afterwards like I was in some sort of trance, my wife laughing at me as I realize it's dark outside, not because it's still morning, but because the day disappeared and it's night time again. For me, coding is a form of meditation; it's pure thought that comes from somewhere outside of my body, out of my fingertips like lightning into the keyboard. It's a gentle dance with a computer, a dialog about a problem I'm trying to solve and the ways it, or many of its friends, can help me.
If you don't feel this way about coding, maybe something else is in your future, but for me coding saved my life and without it my soul would be trapped in a metal box without any way to express itself.
Am I a 10x coder, I don't know, I don't care. What I know is I am inspired by amazing coders and sometimes when I'm really lucky I inspire someone.
EDIT: PS: Antirez has inspired me every time I've looked at his creations. I wish some day others could feel that way about my work.
So whether intentionally, temperamentally, due to the constant demand for their services, or just because it's tautological, the 10x programmers don't actually contribute back to their team's knowledge. That means the rest of the team stays at 1x, and when the 10x programmer moves on to another project or company, the rest of the team flounders around while they figure out all the things the 10x programmer never bothered to share.
It's about the unintended side effects of trying to be a "Natural Selector". One example is selecting individual hens on egg output to create a breed of high-egg producers. The result was mean chickens that made gains because they were very aggressive; the breed needed to have their beaks clipped, otherwise they would kill each other. When productive groups rather than productive individuals were selected, though, they got the desired effect. (Though in another example I can imagine selecting for mean group behavior.)
Another example was trying to selectively evolve animals that would self-limit reproduction. (to avoid overpopulation and resource over-consumption) The end result was selecting for cannibalism.
In organizations, the equivalent of propagating a feature are the hiring stage and the promotion stage. Whatever you hire for, or promote for, will be the trait that's optimized. Whatever the side effects may be... (e.g. Enron)
Their code worked, but it was also incomprehensible to everyone else on the team.
I have found that high-speed programmers tend to develop a very personalised workflow style. They do things their way, they code their way and forget that other people may have to maintain that code.
20% - Not productive. They can't get their tasks done, and after a while, no one even expects them to. They get routed around.
65% - Neutral. The quality problems and technical debt they incur match their productive work.
12% - Net negative. They introduce hard-to-fix bugs and technical debt beyond their productivity.
3% - Gods. They do almost all the productive work. Without these types of people, no large project would ever get done.
If we go meta and generalize the disagreement, the skepticism about "10X" is the same as the rejection of other labels such as "ninja" and "rockstar". For some, the idea of categorizing a subset of programmers with a grandiose label is psychologically distasteful. It doesn't matter what the label is; any label that attempts to stratify programmers is a "myth".
As for "10x" specifically, I'll repeat what I've written before...
To make peace with the "10x" label, I suggest people just think of it as a rhetorical figure-of-speech instead of a rigorous mathematical term. We don't get hung up when people say "Star Wars IV was 10 times better than Phantom Menace" or "I'm not even 1/2 the football player I used to be."
Even if people were to use a new term such as "3-Sigma Programmer" instead of "10X Programmer", the ensuing debates would still be the same.
E.g. "Some people say 3-sigma programmers write string parsing loops that are better in speed and quality than 99.7% of the other loops, but that 3-standard-deviations-above-the-mean is a myth... etc."
The argument pattern would be the same: take a label, any label, hyperfocus on some literal meaning to the exclusion of all other colloquial usage, and debate why that mathematical interpretation fails in the real world.
tldr: "10x" in discussions is more of an informal ranking of programmer ability and not a rigorous mathematical measurement of output.
I think I experience this in my hobby projects, when I fully own the project even when it is fairly complex, every time I spend an hour or two on an evening I pump out a few features that on a big team project would feel like they could've cost weeks.
The physical analogue I offer which might be a little far fetched is the construction worker. I have been renovating a house, doing demolition, basic construction, electrical, plumbing, and hopefully in the future finishing of the house. I'm a total novice, so obviously it's going slow, but eventually I will have constructed (most of) an entire house. Because I do everything, there is little to no overhead (besides me having to learn everything) when switching between tasks, I own all of the project. I bet that someone who solo-renovates houses as a full time job is ridiculously productive, much more so than a general contractor managing a team of subcontractors.
Anyway, obviously this is all just hypothesizing based on anecdotes.
Yes, it's obvious that some people are getting a lot more done, but it's very hard to quantify and vulnerable to social engineering. It can be hard to spot quieter people working effectively, and it's really hard to quantify those who spend their time helping others or improving team effectiveness or business communication.
One thing that's missing from the post is a bit of focus on how good developers (and indeed good leaders) concentrate on maximizing their impact. Not only do you want to be fast and reasonably accurate, but you also want what you do to matter. Sometimes shaving a couple of minutes off compilation time will save each developer on the team two minutes multiple times a day for years, for example. Not all productivity wins are obvious.
A 10x games programmer in a small studio could easily become a 0.1x web dev in a big web dev team.
1. Solving problems for the Nth time instead of the first leads to substantial gains in productivity, easily 10X over your first attempt.
2. Architectural decisions add another multiplier on the above: picking the wrong database type, structuring and reorganising your models, spending time to design ahead vs coding right away, all can /10 or 10X your project easily from fixes and re-dos alone - combine both 1 and 2, and you have potential 100X gains.
3. Risk-distributing the project work: eliminating the worst 5% as the author says, and/or writing the most complex parts first to reduce risk of massive rewrites in case something doesn't meet expectations in its most critical functionality.
4. Having competent business requirements providers who won't move the ground beneath your feet. You can be 10X or 100X more productive when writing new code, and that much slower when rewriting someone else's bad decisions. It's no different than trying to build a skyscraper on foundations built for a garage.
Stack the above as A x B x C x D and you can see why you might be able to beat your past self 10X or 100X and more between projects. Having teammates who can beat you is even better if you can learn from them and accelerate your own progress by skipping time consuming mistakes.
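To make the compounding explicit, here's a trivial sketch with invented numbers (illustration only, not measurements):

```python
# Independent productivity factors multiply rather than add.
experience = 10    # solving the problem for the Nth time vs. the first (point 1)
architecture = 10  # right vs. wrong structural decisions (point 2)

overall = experience * architecture
print(overall)  # 100 -- stacking just two 10X factors already yields a 100X gap
```

Even modest per-factor ratios (say 2-3X each across four factors) multiply out to the kinds of between-project gaps described above.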
This is one of the reason working in software sucks so badly these days. You'll inevitably be forced to work with lesser tools in more ceremonial ways, which takes away most of the leverage from experience and skill; effectively dragging everyone down to the lowest common denominator where everything is done according to some stupid, over engineered specification.
And one of the beefs people seem to have when I share my code publicly. Cutting corners and side-stepping complexity is where coding turns to art for me, where the fun begins; which means that many of my programs look like toys in comparison to "serious" software. Yet they still manage to get the job done for less effort, and a closer look reveals that the simplicity is carefully engineered. I just don't have much time or patience for ceremonies these days.
This is far too extreme and turns you into a counterproductive team member. Like everything in life, a balance demands to be struck.
It looks good at the present, but the team pays for it in the long run - long after Mr 10x (it's always a guy, right?) gets fed up with the process bloat and criticism and whining and leaves for greener pastures.
I've cleaned up after 10x programmers.
I've certainly worked alongside programmers whose output is so bad that I would consider myself both 10 times better and 10 times faster, and I wouldn't even consider myself a top-tier programmer. They are often people whose contributions to the project are net negative, in that they actually require additional work from someone else to go clean up their mess afterwards.
Stop imagining a mythical coder who is 10 times better than everyone else, and instead think of the worst coder you've ever worked with, who is 10x worse than everyone else. There is your 10x'er. We are nearly all 10x'ers when compared to the bottom few percent.
> Surprisingly the ability to use basic imperative programming constructs very efficiently in order to implement something is, in my experience, not as widespread as one may think.
Those words are spun better than I could do, and I'm a native English speaker. Bravo!
Maybe something missing from the list is hard work and long-term dedication ;)
I think Anders Ericsson does a good job of explaining the phenomenon in his book Peak: http://uk.businessinsider.com/anders-ericsson-how-to-become-...
What we should be doing is looking to create companies that can allow workers to reach and sustain peak performance in all areas, not just coding.
I mean, at the moment I'm actually changing my code that was simple at first, but over time more and more things were added and it started to get complex. The thing I'm doing right now is getting rid of the complexity in order to add another feature. Mostly I think a big problem is that many people actually think about the design too much, since the design of a program will eventually be changed anyway. What worked for me was to design something that works in most cases and grow that path, or throw it away if it sucks.
btw. I love what antirez did and does for the community of programmers.
I always use a redis client to teach people more about network programming in Java. It's an extremely simple, but still powerful, command set/protocol. I hope he can keep up his work.
Parnas On the Criteria To Be Used in Decomposing Systems into Modules https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/crit... [PDF]
Christopher Alexander's Notes on the Synthesis of Form http://www.hup.harvard.edu/catalog.php?isbn=9780674627512
Both get deep into how a design emerges from the relationships among what Antirez is calling "sub-tasks". Antirez refers briefly to these relationships in the "Design sacrifice" section. Parnas and Alexander put them, correctly I believe, at the heart of the craft.
Parnas was a software engineering authority. Alexander went on to write A Pattern Language, from which the software community derived "design patterns" as a foundational idea.
Probably the biggest aspect not dealt with in these writings, or in the discussion around them (is there or is there not such a beast), is bias.
For example, selection bias: my view of who was the best / worst programmer was much different when working in different teams. I found out there could be much worse than what I thought was the worst. Then I learned that there could be much worse than that... It's like a fractal :) As you notice these differences, a 10x difference doesn't seem that crazy. It's really things that should take a few weeks, which in turn take months or years or never get done.
Also like any other optimization problem, optimizing software development is about hitting a moving target. When the team is balanced, it may be the process that will become a bottleneck, etc.
In my mind, the mythical 10x programmer is the person that can complete business objectives while helping make those around them more effective. This isn't actually a myth, and "10x" is a completely arbitrary number that doesn't mean anything. It might as well be 2x or 1.1x -- they all mean the same thing to me. They can do their work at 1x speed, a baseline set by the developer in question and not their peers, but they can simultaneously help others around them be more productive.
I can be 10x, even 100x when working on something I've already solved & have "big" set of components ready to plug in (with a little bit of tweaking). The more stuff you have, the "luckier" you are. It's also ability to spot patterns & good memory & being in a flow.
Often when you find someone who's really good at what they do, they're the type of person who loves their work, and they've managed to find employment in an environment that suits them. The two are of course mutually helpful.
Also keep in mind it can be very hard to find more than one spot for such a person. It's like how certain soccer teams are built around a particular star player; everyone else plays to suit that guy, and it would be hard to fit a clone in if you had one. Others may well be suited to a star role, but happen not to have landed it. Watch the Tour de France to see what happens when the lieutenant steps up to the captaincy. It can often be dramatic.
BTW I have always known that the 10x programmer exists.
A sufficient proof is that on good days (with proper motivation, concentration, no interruptions, enough coffee etc.) I'm 10x the programmer I am on bad days :)
"Perfectionism and fear of external judgement insert a designing bias that will result in poor choices in order to refine a design only according to psychological or trivially measurable parameters, where things like robustness, simplicity, ability to deliver in time, are often never accounted for."
In another migration, I measured the time to cut and paste and concluded a day of grind was better than a week of scripting.
Many tasks have a thin path to completion surrounded by cliffs on either side. Experience teaches when to focus on the critical path only vs when to take a wider view.
There's easily a 10x productivity boost there.
Get your ass up from the chair and go outside to exercise if you wanna reach Antirez levels of mastery
In short, as the saying goes: Decisiveness is overrated.
..and an environment that fosters such low productivity norms probably also gets its developer productivity measures/metrics wrong as well, so you don't even need a 1.25x for the perception of a 10x...
Otherwise consider other human qualities such as communication skills, adaptability and critical thinking as more valuable than raw coding skill.
"you can't be good doing things you do not love"
It's essentially the same as optimizing a computer program: you can't make it do more work faster, you can only make it do less work.
And if 'less work' means the problem is solved anyway then you are a 'faster' programmer, even if you produce fewer lines of code than your 'slower' counterpart.
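The "do less work" point can be sketched as a toy example (assuming nothing beyond standard Python): the "faster" version below wins not by executing the same instructions more quickly, but by skipping almost all of them.

```python
def sum_loop(n: int) -> int:
    """O(n): touches every integer from 1 to n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def sum_closed_form(n: int) -> int:
    """O(1): Gauss's formula produces the same answer with three operations."""
    return n * (n + 1) // 2


# Same result, vastly less work for large n.
assert sum_loop(10_000) == sum_closed_form(10_000) == 50_005_000
```

The second function doesn't "run faster" in any hardware sense; it simply avoids nearly all of the work, which is exactly the kind of speedup the comment is describing.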
My number one observation about productivity usually revolves around how an engineer attacks a problem and handles scope creep. There are some programmers who can get a set of requirements and, like a trained surgeon, get in, fix the big bleed, and get out. While they are in there they might fix a couple of nearby issues, but they are not re-architecting the whole application. Then there are others who see all the problems: they notice this problem there and that problem here, and keep asking what it all means, and it eventually cripples them. They spend so much time seeing all the problems that they never get around to solving the one they were tasked to fix.
Once you realize you won't understand it all from the beginning and you can't fix every issue you see, you become a much more effective engineer.
Which I think is bull, of course. You can't quantify things like that. Some people are better than others, but any claim of 10X or 200X is missing a much bigger picture of how humans contribute to each other's work.
If programmer productivity, or software developer/software engineer productivity, is measured as a linear function, then it really does no service to the field. Beyond looking at network effects from the impact developers have on each other, there is no universal measure of productivity. It would be more believable to say that certain developers have twice as many or five times the number of lines of code produced that are defect free, than to say that they achieve a certain level of productivity.
The two reasons that this should be immediately seen as nonsense to anyone in the field is that first of all, computer problems deal with asymptotic complexity. In the asymptotic world, linear functions are outshined by constant, logarithmic, polynomial, and exponential functions. Furthermore, the prevailing wisdom among programmers is that 'less is more'. That's why we talk about minimizing lines of code and trying to avoid the most bugs by leaving the least surface area for them to exist to begin with. Introducing a measure where 'more is better' is sort of at odds with this philosophy and should be viewed skeptically.
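As a hypothetical illustration of why a linear yardstick misleads in an asymptotic world, compare the comparisons performed by a linear scan against a binary search over the same sorted data (a sketch, not anyone's actual benchmark):

```python
def linear_search(xs, target):
    """O(n): comparisons grow linearly with input size."""
    steps = 0
    for i, x in enumerate(xs):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps


def binary_search(xs, target):
    """O(log n): each comparison halves the remaining search space."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid, steps
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps


data = list(range(1_000_000))
_, linear_steps = linear_search(data, 999_999)   # ~1,000,000 comparisons
_, binary_steps = binary_search(data, 999_999)   # ~20 comparisons
```

The gap here is roughly 50,000x, and it comes entirely from choosing a better-shaped algorithm, which is the kind of difference a "lines of output per day" measure cannot see.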
Finally, if you look at the great successful innovative products in software, and technology in general, you'll see that they often make use of new inventions. There's no way to compare an inventor in terms of productivity by saying one has 10 times as many patents as the other, or to compare a mathematician by the number of papers or pages published. The important difference is the quality of the invention or discovery. The engineers at AltaVista and Yahoo could have been extremely productive, but without a revelation like Page Rank, they never could have competed with Google back in the early days of search engines. Here, two college students writing a small amount of code outperformed larger companies. This has nothing to do with productivity and everything to do with talent.
This leads me to believe that the "10 X" slogan is a product of marketers, head hunters, and pop psychologists. It has no bearing on the field of computer science and it is a harmful concept because it perpetuates the idea that software developers are replaceable parts rather than unique contributors.
Maybe you'd be 10x more productive if it didn't hurt everyone's eyes to read your writing.
- Feature creep
- Poor code organization: coupling, action at a distance, cyclomatic complexity
- Noise: comments that do not get to the point. A tool to mitigate this is https://foxtype.com
- Hacks and lack of consistency
- Lack of automation: tests, builds, deployments
Regarding hacks, imagine what physics equations would look like if a fundamental constant was wrong. All equations would need to compensate for it by including some arbitrary constant, making everything more complicated. That is what messy code bases look like: layers of lies to compensate for lies. Clean code is more straightforward, easier to work with.
Regarding iterations... does the army run an exercise with soldiers and trucks and live ammo each time a general wants to test an idea? No. They use simulations, and only the ones that look promising are turned into exercises. So rather than asking engineers to prototype some throwaway idea, get your hands dirty and use Powerpoint and your imagination, stress the idea, then build it. And if it's built, keep it in a feature branch until you've actually decided to keep it for good.
When a car is built, engineers tell designers to modify their concepts in order to make the production more cost efficient. Same in software... be prepared to negotiate requirements if that is in the best interest of the project.
At least 1/2 of programming is learning. You can make a basic Android app, but have you learned how to do 'deep linking'? Well, it can take a full day the first time because it's awkward, and you have to understand a few things and set a few things up server-side.
Second time - it'll take 1 hour.
There's a lot of that.
If you're really comfortable with XMLHttpRequest, and know the ins and outs of post/form structures, then you can do something quickly with it. If you don't, well, it could take a bit to learn for a new dev.
Those things add up a lot. It takes several years to get comfortable with the variety of tools and tech necessary to be good.
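For illustration only, the "ins and outs of post/form structures" mentioned above boil down to a handful of details: the method, the body encoding, and the content type. Here is a sketch in Python's urllib (standing in for XMLHttpRequest, with a placeholder URL) that builds, without sending, a form-encoded POST:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical form fields and endpoint, purely for illustration.
form = {"username": "alice", "remember": "1"}
body = urlencode(form).encode("ascii")  # "username=alice&remember=1"

req = Request(
    "https://example.com/login",  # placeholder URL
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

# Nothing is sent here; we only construct the request object.
assert req.get_method() == "POST"
assert req.data == b"username=alice&remember=1"
```

None of this is hard, but it's exactly the kind of detail that takes a new dev a while to internalize and takes an experienced one seconds, which is the learning-curve point the comment is making.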
Right. "I don't believe in the idea that there are a few peculiar people capable of understanding math, and the rest of the world is normal. Math is a human discovery, and it's no more complicated than humans can understand. I had a calculus book once that said, 'What one fool can do, another can.' What we've been able to work out about nature may look abstract and threatening to someone who hasn't studied it, but it was fools who did it, and in the next generation, all the fools will understand it. There's a tendency to pomposity in all this, to make it deep and profound." - Richard Feynman, Omni 1979
Stop the pomposity. Please.
Good on them for starting the stock price axis at 0, that's a rarity in stock-price graphs.
This really goes to show, however, how the everyday man has no hope, as an investor, against the big boys.
Edit: I originally wanted to link to this startup https://spaceknow.com/ but couldn't remember the name.
That said, there are so many interesting things that having open spy satellites has made available to the non-spy.
any fast fashion retailer really.
I'm not surprised it's also the most adapted novel of all time as it is, in my opinion, the greatest novel ever written and a monument to what literature can be. But I definitely worry that its impact will be more and more limited by alternatives to simply reading the original.
1. Jean Valjean acquired some silver from the bishop.
2. It was established that M. Thenardier looted the deceased soldiers at the battle of Waterloo.
3. The existence of Fantine and possibly Cosette had been established?
4. Perhaps Valjean had become the mayor of some town after using what was formerly the bishop's silver to become an honest man?
Note that Javert, who is the other major character besides Valjean, has yet to put in an appearance, despite showing up in perhaps the first 10 minutes of the musical. M. Thenardier is, at best, a minor player in the musical, and prior to his introduction, we get ~50 pages on the history of the Battle of Waterloo.
The Princess Bride gently mocks the genre by advertising itself as "abridged". I can't help but feel it's justified.
He also packs a fair amount of (occasionally apocryphal) history into it. I love the many pages spent describing Waterloo from start to finish, and his judgment of the battle is sobering:
However, it missed that a key social function of reason and communication is for joint decision making. So one person throws out a suggestion with some reasons for it, a second presents reasons to point out what is right and wrong about it and persuades the first person, and throws out a new suggestion, the first person critiques it, and so on back-and-forth until a course of action is arrived at that both people agree is best. You can see this process even in fairly young children.
It is odd Sperber misses this, since he seems to have just this sort of relationship with his colleagues and co-workers, and enjoys it very much.
A. Your words are not an encoding of your meaning
> [Y]our words are not an encoding of your meaning; they are a piece of evidence from which your meaning has to be inferred. [T]his can also be expressed by behavior, by gesture, and indeed by cultural symbols, where you convey that relevance will be achieved by orienting in a certain direction, by looking at certain things rather than others, by approaching them with a certain kind of expectation. There's a continuum of cases between precise meanings that you can paraphrase and much vaguer effects
B. The paradox of cultural transmission
> [T]he paradox is that if you look at cultures, what you see is quite a bit of stability: The same words are being used more or less in the same sense for generations [...] the same tales are being told to children [...] the same recipes are being cooked [...] How can things stay so stable?
> [C]ommunication is not a replication system. When I communicate to you, you don't get in your mind a copy of my meaning. You'll transform it into something else. You extract from it what's relevant to you.
> If you see a friend who has a great recipe for apple pie and you imitate it, you don't really copy it. You look at it and you extract from it a way to do it your own way. There's a loss of information at every step, which is quite significant. [...] So how can you have this macro stability of cultural things with this micro failure to replicate?
> Fidelity [of copying] is not the only way to ensure stability. You can have stability [...] if the transformations that everybody produces at each step [...] converge, if you have what I can call a cultural attractor
C. Reason is an ability to share intuitions and justify ourselves in the eyes of others
> Why are reasons of any relevance to us? In our own individual thinking, reasons don't matter very much. We trust ourselves. [...] You don't need to look for a reason for what you intuitively believe. [...] But if we want to communicate to others what we believe and they don't have the same intuitions, we may still share intuitions about reasons for our belief
> We use reason to justify ourselves. [... Others] have to think that the way we think and behave makes us reliable partners. The evidence they have is from what we do, which can be interpreted in a variety of ways. What we can do is provide reasons for our actions and our thoughts [...] to show that we had good reasons and can be trusted to have similarly good reasons in the future.
> It's an ability to understand others, to justify ourselves in the eyes of others, to convince them of our ideas, to accept and to evaluate the justifications and arguments that others give and be convinced by them or not
> Rather than seeing as a paradox the fact that people can use reason to defend absurd ideas, as we see happen all the time, this is exactly part of what we assume is going to happen.
D. Science progresses by people using their reason to defend what they hit by luck
> It's never the case for me, and rarely the case for anybody, that you gather so much evidence and data that somehow an idea emerges. [...] I think it's mostly luck, when you hit on a good idea. Other people, just as bright and smart as you, have the bad luck of hitting on a bad idea. They invest a lot in the bad idea and they don't get anywhere. If you have been lucky enough to hit on a good idea then, indeed, you'll find confirming evidence, good evidence that will start explaining lots of things. But initially I think we're groping in the dark.
> The kind of achievements that are often cited as the proof that reason is so superior, like scientific achievements, are [...] typically a product of social interaction over generations. They are social, cultural products, where many minds had to [...] progressively explore a lot of directions [...], not because some were more reasonable than others, but because some were luckier than others in what they hit. And then they used their reason to defend what they hit by luck.