Anyway, I started contracting last year for this exact reason, at around a 300-400 day rate, and now I've saved enough to quit and follow my 'dream'; my last day is July 7th. I have enough savings to last me 2 years while sustaining my current social life. No frugality required.
Since I've been working as a mobile developer, and also management consultant (my other career), it's always been extremely easy for me to find a new job whenever I needed, so there has been very little risk involved.
Still, it did require some savings, since our startup is very research intensive and will take several years before we see any revenue. We secured some basic funding now though, and things are looking good for the next stage too so I will only have needed 6 months or so of buffer.
In summary, my view is that if you're a reasonably skilled engineer or have some other attractive occupation, there is nothing to fear. The worst that can happen is really that your startup doesn't work; you'll go back to what you did before with a few months of missed income but plenty of useful experience.
I don't think there are many situations or cultures where a failed technology startup attempt on your résumé would count against you in any way; in most places, quite the opposite.
When you're building a Nights and Weekends side project, you get used to stealing whatever free hours you can to work on the product. But you also necessarily build things so that they don't take up much of your time every day. If they did, it would interfere with your day job and that just wouldn't work.
So when you remove the day job, you find that suddenly you have this successful business that runs itself in the background and you can do pretty much whatever you want with your day.
Most people in this situation will immediately fill that time up with work on the product, and I did to some extent. But I also made sure to take a bunch of that time just to enjoy with my family. I eventually settled on 2-3 days a week where I was "at work", with the rest devoted to other pursuits. Both my wife and I are rock climbers, which is an "other pursuit" that will happily expand to fill the time available. We're also parents, so ditto there.
I also make a point of downing tools for a while from time to time. Again, because I can.
I took the kids out of school and dragged them off backpacking around Southeast Asia for a few months the first year. We did a couple more medium-sized trips this year, and I took the entire fall and spring off because those are the best times for bouldering in the forest here. Again, work is happy to ramp up or down to accommodate, because I never shifted it out of that ability to run on nights and weekends.
So now, I burst for a few weeks at a time on work stuff (with possibly a more relaxed definition of Full Time than most would use), then slow down and relax for a bit.
It's actually not so bad.
It was scary as hell (no revenue coming from the startup), fun as hell, challenging as hell. Had my savings all planned out to help support the adventure, but still had that daily stress of knowing every dollar I spent was not coming back anytime soon. That part wasn't fun. But I didn't have kids or a mortgage and knew this was my chance to do something of the sort.
10/10 would do again in a similar situation, though knowing what I know now, I might have launched a business instead of chased a cool idea.
The second time, I created this plugin: http://plugins.netbeans.org/plugin/61050/pleasure-play-frame... I tried to make a living off it, but I only sold 5 licenses at 25 dollars per year, and developing the plugin took me 2 and a half months of hard work.
Didn't have a problem getting a new job both times.
There is this assumption that one must build a minimum viable product that has to be released as quickly as possible, so much so that it's become startup mantra. It's no surprise that a lot of these products seem technically shallow; everyone is reaching for low-hanging fruit.
I feel rather alone trying to do something that I think hasn't been done before, or if it had, wasn't executed well. I don't think I could possibly commit to it without having strong motivation, which I struggled with while having a full-time job.
The biggest technical/social challenge I have is to make something that a non-technical user could easily get right away and make something with it. I think the automation of web dev is an inevitability, and frameworks were just a historical blip on this path. The same thing is happening to web design. http://hypereum.com
I reached a point after about 4 months where I realised the journey to make the business profitable would most likely be a five year slog, and while the opportunity was there it wasn't a cause I felt I could devote 5 years of my life to.
So I gave the software away for free to the people who were helping with beta testing and went back to my job. The most positive thing was how it propelled my career at my current employer: I got a better role, they seem to have more respect for me afterwards, and I operate more independently now.
So I suppose if you can build some sort of safety net before quitting that helps.
Having the luxury to focus on one thing, rather than juggling several, is much like having an office that is neat, tidy, and uncluttered. It feels good in the same way. At least by quitting a job and focusing on a startup, you have the option to focus 100% on it. Actually focusing 100% on one thing is a difficult skill in itself, even with the right circumstances; however, it's completely impossible (at least for me) with two fulltime jobs at once, especially jobs like teaching (which involve lots of public speaking at scheduled times) or running a website with paying customers (which demands, e.g., responding to DDOS attacks).
I dream of working for myself but I've never taken the plunge. My income from side projects is about 1/3 of the way to my minimum number to quit and go full time.
I do a lot of thinking about this, my number is the same as my financial independence / early retirement number.
One of the biggest things that holds me back is medical insurance for a family of 5. Having an employer offsets this cost a LOT.
I have a previous coworker who'd love to help me, but I don't want to babysit his work and I feel he's not valuable enough to the business. I would like another cofounder, but it doesn't bother me that I'm doing it all on my own; I have spent the last 10 years getting ready for this, so I'm more than ready. I am doing more than okay on my business alone, but I wish I had some expertise for a second opinion. I am really, really thinking about going into an accelerator program or seeking angel investment, but I'm apprehensive about taking cash at this (or any) stage. My biggest fear is actually having to get a real job again; I will do anything to prevent that from happening, since that would mean my startup is dead.
I didn't take the plunge until quite late on, waiting until it was making enough money to comfortably cover my personal expenses. No regrets there - growth was slow in the early days and if I hadn't had the luxury of a monthly pay packet, I probably would have given up before I got the chance to properly validate the business.
Transitioning to full time brought more stress than I expected, but the experience is priceless. In the past few months alone I've learnt more than I did in 3 years of employment.
Realistically, what's the worst case scenario? I'm a reasonably skilled dev in a strong market so there's not much to lose. If it all goes wrong I'll get another job with a load of experience (and stories!) under my belt.
I'm back working for a startup again now so I guess I'm just going back and forth.
I've worked for a few startups and none of them has had an exit yet but one of those I have shares in is doing relatively well.
Doing contracting work is a smarter decision in general. You can actually plan to make a sizeable amount of money and then watch it happen without taking any risks - It's all within your control. With startups, you might often feel that it's outside of your control, especially if you're not a co-founder.
So, if you are planning to leave a job and have a good product that is earning you even half of the money you need, leaving your job will only increase the chances of success. Hanging on to the job while working on a product is going to be much harder.
I guess the "quit your job" problem only exists if you have major responsibilities, like a family, or paying debt back. Otherwise, it makes no real sense to consider it, the opportunity is too big.
Let's just say that my mistake was that I was too afraid to hurt my co-founder's feelings. If we parted ways when we should have, I might have actually gotten somewhere. (Then again, I might have gotten nowhere either!)
1. Give your employer your 2 weeks/1 month notice (depending on locale). Taking this step immediately is critical because the urgency and shock of the change will force you into being fast and practical about all the subsequent steps.
2. Create a monthly budget for yourself which assumes no income that you are not 100% sure about. So if you have interest from investments or a freelance contract that's an absolute guarantee, you can include it. For most people the income side of this budget is going to be low or nothing. Your goal with this budget is to stretch your funds out for 6-12 months. The good news is that in 2017, the principle of geoarbitrage allows you to live on virtually any budget. If you live in the Bay Area, your next step is going to be to move somewhere cheaper. On the cheapest end of the spectrum, I'll use Thailand as an example because I live here: you can get a basic apartment in the suburbs of Chiang Mai or Bangkok for $100-$500/mo, your initial arrival can be visa-free, and you'll live on delicious Thai food from a restaurant down the road for a few dollars a day. Network heavily with people in your intended destination before you even arrive, because it'll make everything 100 times easier.
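A quick way to sanity-check that budget is to compute your runway explicitly. The numbers below are hypothetical, not from the comment:

```javascript
// Hypothetical runway calculator for step 2: only count income you are
// 100% sure about, and see how many months your savings will stretch.
function runwayMonths(savings, monthlyExpenses, guaranteedMonthlyIncome = 0) {
  const burn = monthlyExpenses - guaranteedMonthlyIncome;
  if (burn <= 0) return Infinity; // guaranteed income covers expenses indefinitely
  return Math.floor(savings / burn);
}

// Example: $12,000 saved, $1,500/mo of expenses, no guaranteed income.
console.log(runwayMonths(12000, 1500)); // 8 months
```

If the result is under the 6-12 month target, either the expense side has to shrink (geoarbitrage) or the guaranteed-income side has to grow before step 1 makes sense.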
3. Now create a business plan for your new entity. The business plan should include a description of the product or service which you're going to market, how you're going to market it, what you're going to charge (start high), and any and all costs of development and operation including your own time. It should include monthly profit/loss projections (you're not allowed to use these projections in your budget, they are goals, not guarantees). The most important thing about your business isn't what product or service you initially offer. Once you have assets and control you can try anything you want. Until then the goal of your business is to make enough income that your assets are growing, no matter what that entails.
If you're leaving the country as a part of this process I would advise forming an LLC and opening a bank account before you go, as these things can be difficult from overseas. You'll be very busy trying to make money and living your dream so you don't want to have to deal with paperwork.
Prepare yourself mentally to work very hard for at least the next 6 months and do whatever you need to do to make enough cash. You will become practical and decisive, and you'll learn many realities about business, such as cash flow is king, very quickly. I got my start being nickel-and-dimed by agencies in India over Elance. It sucked and it was hard and it was 100% worth it.
There are many objections to this strategy which typically stem from risk aversion, or a desire to not worry about money. I would submit that if one objects to the risk, this plan is a personal growth opportunity: it will teach them how to handle stress, plan for contingencies, and so on. If the objection is that they don't want to worry about money, I would point out that money is just a way for people to quantify your value to them, and since no man is an island, there are great personal and financial rewards to be reaped from confronting this objection and discovering what other people truly value about you.
Doing step 1 first and now is the key. If your path brings you through Bangkok let me know and we'll grab a beer! I've seen many people succeed at this and a few fail. Your odds are better than you think.
2. A 6-month financial backup is usually not enough. I have heard many stories where people try going independent for 6 months, run out of money and start looking for a job. What happens is that entrepreneurship gets into you in that time, and if you go back to a job, I can bet you feel even more frustrated. You need 1.5 years of backup, or 2-3 years of "frugal living backup". I struck positive cashflows in just about 5 months, but it wasn't good enough. I distinctly remember thinking "Maybe I should have done this part time". Then I struck a mini-gold-mine at 8 months. Having a good backup will help you persist longer. I did not have a growth strategy that worked, but I focused on working and doing the right thing. Keep it rolling.
3. The biggest worry I had when starting was about providing "enough" for my family and any emergencies for next 1.5-3 years at any point in time. Unlike many stories, I promised myself not to wait until I go bankrupt or in a lot of debt - Nearing that is a huge red flag, where I would typically exit and take a regular job. However, taking a job is the last thing I want to do. That thing kept me money-oriented for a while and made me work on stuff that generated positive cashflow.
4. Would it have been possible to return to your old job? Maybe, but I would not want to. I waited too long to jump ship. In fact, my experience at multiple "good" jobs is what is keeping me away from them. Once you taste entrepreneurship, it's hard to go back.
5. I do not consider myself successful. Maybe semi-successful; some people see it as success. But I have come a long way from fearing failure. Success may or may not last long. I enjoy the process and the tremendous personal growth it results in. I ensure my financial backup now gives me 5-6 years minimum to start afresh, if I have to. Do not undervalue the role of money; it definitely makes things easier.
6. This is my favorite quote about Karma. I heard it many years back (and thought it was impractical). Especially useful when I feel I did everything right but nothing works: "Karm karo, fal ki chinta mat karo" (Do your duty without thinking about results)
P.S.: I don't know about others, but I have restricted myself to writing fewer HN comments because it takes quite a bit of time/energy. This one is an act of impulse. How do other entrepreneurs feel about this?
1. Took too long to get something working. The common use case of hooking up a Lambda function to an HTTP endpoint is surprisingly fiddly and manual.
2. Very painful logging/monitoring.
3. The Node.js version of Lambda has a weird and ugly API that feels like it was designed by a committee with little knowledge of Node.js idioms.
4. The Serverless framework produces a huge bundle unless you spend a lot of effort optimising it. It's also very slow to deploy incremental changes (edit: this is not only due to the large bundle size but also to having to re-up the whole generated CloudFormation stack for most updates).
5. It was worth it in the end for making a useful little service that will exist forever with ultra-low running costs, but the developer experience could have been miles better, and I wouldn't want to have to work on that codebase again.
Edit: here's the code: https://github.com/Financial-Times/ig-images-backend
To address point 3 above, I wrote a wrapper function (in src/index.js) so I could write each HTTP Lambda endpoint as a straight async function that simply receives a single argument (the request event) and asynchronously returns the complete HTTP response. This wouldn't be good if you were returning a large response though; you'd probably be better off streaming it.
My #1 concern with it went away a while back when Amazon finally added support for Python 3 (3.6).
It behaved as advertised: Allowed us to scale without worrying about scaling. After a year of using it however I'm really not a big fan of the technology.
It's opaque. Pulling logs, crashes and metrics out of it is like pulling teeth. There's a lot of bells and whistles which are just missing. And the weirdest thing to me is how people keep using it to create "serverless websites" when that is really not its strength -- its strength is in distributed processing; in other words, long-running CPU-bound apps.
The dev experience is poor. We had to build our own system to deploy our builds to Lambda. Build our own canary/rollback system, etc. With Zappa it's better nowadays although for the longest time it didn't really support non-website-like Lambda apps.
It's expensive. You pay for invocations, you pay for running speed, and all of this is super hard to read on the bill (which function costs me the most and when? Gotta do your own advanced bill graphing for that). And if you want more CPU, you have to also increase memory; so right now our apps are paying for hundreds of MBs of memory we're not using just because it makes sense to pay for the extra CPU. (2x your CPU to 2x your speed is a net-neutral cost, if you're CPU-bound).
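The memory/CPU pricing point can be sketched with Lambda's GB-second billing model; the numbers below are illustrative:

```javascript
// Rough model of the "2x CPU for 2x memory is net-neutral" observation:
// Lambda bills duration multiplied by allocated memory (GB-seconds).
// For a purely CPU-bound task, doubling memory roughly doubles CPU
// and therefore roughly halves the duration.
function gbSeconds(memoryMb, durationMs) {
  return (memoryMb / 1024) * (durationMs / 1000);
}

const slow = gbSeconds(512, 2000);  // 512 MB for 2s -> 1.0 GB-s
const fast = gbSeconds(1024, 1000); // 1 GB for 1s   -> 1.0 GB-s
console.log(slow === fast); // true: same cost, half the latency
```

This is why, for CPU-bound work, paying for memory you don't use can still be the rational choice.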
But the kicker in all this is that the entire system is proprietary and it's really hard to reproduce a test environment for it. The LambCI people have done it, but even so, it's a hell of a system to mock and has a pretty strong lock-in.
We're currently moving some S3-bound queue stuff into SQS, and dropping Lambda at the same time could make sense.
I certainly recommend trying Lambda as a tech project, but I would not recommend going out of your way to use it just so you can be "serverless". Consider your use case carefully.
I'm not allowed to give you any numbers; here's an old blogpost about Sketch Cloud: https://awkward.co/blog/building-sketch-cloud-without-server... (however, this isn't accurate anymore). For this use case, concurrent executions for image uploads are a big deal (a regular Sketch document can easily consist of 100 images). But basically the complete API runs on Lambda.
Running other languages on Lambda can be easily done and can be pretty fast, because you simply use node to spawn a process (Serverless has lots of examples of that).
Let me know if you have any specific questions :-)
Hope this helps.
One thing to note. API Gateway is super picky about your response. When you first get started you may have a Lambda that runs your test just fine but fails on deployment. Make sure you troubleshoot your response rather than diving into your code.
I saw some people complaining about using an archaic version of Node. This is no longer true. Lambdas support Node V6 which, while not bang up to date, is an excellent version.
Anyway, I can attest it is production ready and at least in our usage an order of magnitude cheaper.
- CPU power also scales with Memory, you might need to increase it to get better responses
- Ability to attach many streams (Kinesis, Dynamo) is very helpful, and it scales easily without explicitly managing servers
- There can be an overhead: your function gets paused (if no data is incoming) or can be killed nondeterministically (even if it runs all the time or every hour), which causes a cold start, and cold starts are very bad for Java
- You need to make your JARs smaller (50MB), you cannot just embed anything you like without careful consideration
I was initially attracted to it as a low-cost tool to run a database (RDS) powered service side project.
- Zappa is a great tool. They added async task support, which replaced the need for celery or rq. Setting up HTTPS with Let's Encrypt takes less than 15 minutes. They added Python 3 support quickly after it was announced. Setting up a test environment is pretty trivial. I set up a separate staging site which helps to debug a bunch of the orchestration settings. I also built a small CLI to help set environment variables (Heroku-esque) via S3, which works well. Overall, the tooling feels solid. I can't imagine using raw Lambda without a tool like Zappa.
- While Lambda itself is not too expensive, AWS can sneak in some additional costs. For example, allowing Lambda to reach out to other services in the VPC (RDS) or to the Internet requires a bunch of route tables, subnets and a NAT gateway. For this side project, this currently costs way more than running and invoking Lambda itself.
- Debugging can be a pain. Things like Sentry make it better for runtime issues, but orchestration issues are still very much trial and error.
- There can be overhead if your function goes "cold" (i.e. infrequent usage). Zappa lets you keep sites warm (additional cost), but a cold start adds a couple of seconds to the first-page load for that user. This applies more to low volume traffic sites.
Overall: it's definitely overkill for a side project like this, but I could see the economies of scale kicking in for multiple or high-volume apps.
Development can be tricky. There are a lot of all-in-one solutions like the Serverless framework; we use the Apex CLI tool for deploying and Terraform for infra. These tools offer a nice workflow for most developers.
Logging is annoying; it's all CloudWatch, but we use a Lambda to send all our CloudWatch logs to Sumo Logic. We use CloudWatch for metrics, however we have a Grafana dashboard for actually looking at those metrics. For exceptions we use Sentry.
Resources have bitten us the most: suddenly not enough memory because of the payload from a download. I wish Lambda allowed for scaling on a second attempt so that you could bump its resources; this is something to consider carefully.
Encryption of environment variables is still not a solved issue. If everyone has access to the AWS console, everyone can view your env vars, so if you want to store a DB password somewhere it will have to be KMS. That's not a bad thing, and it's usually pretty quick, but it does add overhead to the execution time.
Terrible deploy process, especially if your package is over 50mb (then you need to get S3 involved). Debugging and local testing is a nightmare. Cloudwatch Logs aren't that bad (you can easily search for terms).
We have been using Lambdas in production for about a year and a half now, to do 5 or so tasks, ranging from indexing items in Elasticsearch to small cron clean-up jobs.
One big gripe around Lambdas and integration with API Gateway is that they totally changed the way it works. It used to be really simple to hook up a Lambda to a public-facing URL so you could trigger it with a REST call. Now you have to do this extra dance of configuring API Gateway per HTTP resource, therefore complicating the Lambda code side of things. Sure, with more customization you have more complexity associated with it, but the barrier to entry was significantly increased.
* Games are developed as command line tools which use JSON for input and output. They're pure so the game state is passed in as part of the request. An example is my implementation of Lost Cities
* Games are automatically bundled up with a NodeJS runner and deployed to Lambda using Travis CI
* I use API Gateway to point to the Lambda function, one endpoint per game, and I version the endpoints if the game data structures ever change.
* I have a central API server which I run on Elastic Beanstalk and RDS. Games are registered inside the database and whenever players make plays, Lambda functions are called to process the play.
I'm also planning to run bots as Lambda functions similar to how games are implemented, but am yet to get it fully operational.
Apart from stumbling a lot setting it up, I'm really happy with how it's all working together. If I ever get more traction it'll be interesting to see how it scales up.
- No straight way to prevent retries. (Retries can crazily increase your bill if something goes wrong)
- API gateway to Lambda can be better. (For one, Multipart form-data support for API gateway is a mess)
- (For Node.js) I don't see why the node_modules folder should be uploaded. (Google Cloud Functions downloads the modules from package.json.)
One thing to be careful of: if you're targeting input into DynamoDB table(s), then it's really easy to flood your writes. Same goes for SQS writes. You might be better off with a data pipeline and slower progress. It really just depends on your use case and needs. You may also want to look at running tasks on ECS; depending on your needs, that may go better.
For some jobs the 5-minute limit is a bottleneck; for others it's the 1.5 GB memory. It just depends on exactly what you're trying to do. If your jobs fit in Lambda's constraints, and your cold-start time isn't too bad for your needs, go for it.
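A trivial pre-flight check along those lines, with the limits hardcoded as they stood at the time of this comment:

```javascript
// Quick sanity check for whether a batch job fits Lambda's constraints
// (the 5-minute timeout and ~1.5 GB memory cap mentioned above).
const LAMBDA_MAX_SECONDS = 300;
const LAMBDA_MAX_MB = 1536;

function fitsLambda(estimatedSeconds, estimatedMb) {
  return estimatedSeconds <= LAMBDA_MAX_SECONDS && estimatedMb <= LAMBDA_MAX_MB;
}

console.log(fitsLambda(120, 512)); // true: fits comfortably
console.log(fitsLambda(600, 512)); // false: exceeds the 5-minute limit
```

If a job fails this check, chunking the work into smaller invocations or moving it to ECS (as suggested above) are the usual outs.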
Anyways, I'd recommend starting from learning the tools without using a framework first. You can find two coding sessions I published on Youtube.
Then we implemented a RESTful API with API Gateway and Lambda. The Lambdas are straightforward to implement. API Gateway unfortunately does not have a great user experience. It feels very clunky to use and some things are hard to find and understand. (Hint: request body passthrough and transformations.)
Some pitfalls we encountered:
With Java you need to consider the warmup time and memory needed for the JVM. Don't allocate less than 512MB.
Latency can be hard to predict. A cold start can take seconds, but if you call your Lambda often enough (often looks like minutes) things run smooth.
Failure handling is not convenient. For example, if your Lambda is triggered from a Scheduled Event and the Lambda fails for some reason, it does get triggered again and again, up to three times.
So at the moment we have around 30 Lambdas doing their job. Would say it is an 8/10 experience.
API Gateway is a little rougher, but slowly getting there.
The worst part about it by far is CloudWatch, which is truly useless.
Check out https://github.com/motdotla/node-lambda for running it locally for testing btw - saved us hours!
I think a lot of people try to use the "serverless" stuff for unsuitable workloads and get frustrated. We are running a kubernetes cluster for the main stuff but have been looking for areas suitable for lambda and try to move those.
- For serverless APIs that query the S3 data which results from the above workload
Difficulties faced with Lambda(till now):
1. No way to do CD for Lambda functions. [Not yet using SAM]
2. Lambda launches in its own VPC. Is there a way to make AWS launch my lambda in my own VPC? [Not sure.]
Building reactive systems with AWS Lambda: https://vimeo.com/189519556
- Runs fast, unless your function was frozen due to infrequent usage or the like
- Easy to deploy and/or "misuse"
- Debugging doesn't really work
All in all, probably the least painful thing I've used on AWS. But that doesn't necessarily mean much.
I need to say that you should use gordon <https://github.com/jorgebastida/gordon> to manage it; Gordon makes the process easier.
We also use it to perform scheduled tasks (e.g. every hour) which is good as it means you don't have to have an EC2 instance just to run cron like jobs.
The main downside is Cloudwatch Logs, if you have a Lambda that runs very frequently (i.e. 100,000+ invocations a day) the logs become painful to search through, you have to end up exporting them to S3 or ElasticSearch.
- Use environment variables
- Use step functions to create state machines
- Deploy using cloudformation templates and serverless framework
The only negatives are:
- cold start is slow, especially from within a VPC
- debugging/logging can be a pain
- giving a function more memory (~1GB) always seems to be better (I'm guessing because of the extra CPU)
- The CPU power available seems to be really weak. Simple loops running in Node.js run way, way slower on Lambda than on a 1.1 GHz MacBook, by a significant margin. This is despite scaling the memory up to near 512 MB.
- Certain elements, such as DNS lookups, take a very long time.
- The CloudWatch logging is a bit frustrating. If you have a cron job, it will sometimes lump several time periods into a single log file and other times keep them separate. If you run a lot of them, it's hard to manage.
- It's impossible to terminate a running script.
- The 5-minute timeout is 'hard'; if you process cron jobs or the like, there isn't flexibility for, say, 6 minutes. It feels like 5 minutes is arbitrarily short. For comparison, Google Cloud Functions lets you run for 9 minutes, which is more flexible.
- The environment variable encryption/decryption is a bit clunky, they don't manage it for you, you have to actually decrypt it yourself.
- There is a 'cold' start where once in a while your Lambda functions will take a significant amount of time to start up, about 2 seconds or so, which ends up being passed to a user.
- Versions of the environment are updated very slowly. Only last month (May) did AWS add support for Node v6.10, after having a very buggy version of Node v4 (a lot of TLS bugs were in the implementation)
- There is a version of Node that can run on AWS Cloudfront as a CDN tool. I have been waiting quite literally 3 weeks for AWS to get back to me on enabling it for my account. They have kept up to date with me and passed it on to the relevant team in further contact and so forth. It just seems an overly long time to get access to something advertised as working.
- If you don't pass an error result in the callback, the function will run multiple times; it won't just display the error in the logs. But there is no clarity on how many times or when it will re-run.
- There aren't ways to run Lambda functions in a way where it's easy to manage parallel tasks, i.e. to see if two Lambda functions are doing the same thing when they are executed at the exact same time.
- You can create cron jobs using an AWS CloudWatch rule, which is a bit of an odd implementation: CloudWatch can create timing triggers to run Lambda functions despite being a logging tool. Overall there are many ways to trigger a Lambda function, which is quite appealing.
The big issue is speed and latency. Basically it feels like Amazon is falling right into what they're incentivised to do: make it slower (since it's charged per 100 ms).
PS: If anyone has a good model/provider for 'serverless SQL databases', kindly let me know. The RDS design is quite pricey, since you have constantly running DBs (at least in terms of the way you pay for them).
I wrote a paper, "Eclipse Attacks on Bitcoin's Peer-to-Peer Network", about maliciously partitioning the Bitcoin network. Much of the paper focuses on how to partition the network, but Section 1.1, "Implications of eclipse attacks", should give a good sense for how Bitcoin's security properties depend on the network not being partitioned.
"Hijacking Bitcoin: Routing Attacks on Cryptocurrencies" also discusses network partitions and Bitcoin. As with Eclipse Attacks, it focuses on both the how and the effects.
Interestingly, blockchains built on Algorand would not fork under a network partition; they would just cease to create new blocks until the network is whole again.
- "Eclipse Attacks on Bitcoin's Peer-to-Peer Network": https://www.usenix.org/node/190891
- "Hijacking Bitcoin: Routing Attacks on Cryptocurrencies": https://arxiv.org/abs/1605.07524
- "Algorand: Scaling Byzantine Agreements for Cryptocurrencies": https://people.csail.mit.edu/nickolai/papers/gilad-algorand-...
Someone who had 27 bitcoins before the split gets 27 of each type of coin after the split.
Every transaction will be incorporated into one or both of the copies. Some transactions will depend on other transactions, and therefore as time passes, even a small difference in the sets of transactions applied to each copy will snowball into the majority of transactions ending up in only one tree.
There is a vulnerability in the bitcoin design here: Transactions from one partition can be replayed on the other tree at any time, now or the future. If someone sends you coins that only exist on one partition, but they later receive coins to the same address on the other partition, you can steal them by replaying the transaction.
The answers there are mostly right. If left to its own devices, the fork will be resolved when the country regains access to the network.
The way it would typically be resolved is that the chain that has done the most work (in a Proof of Work coin) will "win"... in practice this means the one with the longest chain and most transactions.
When this happens, the transactions in the blocks that roll back are likely to be added back to the mempool (in memory list of unconfirmed transactions) in which case they will probably still be added to a block. So for most legitimate transactions they might not notice.
However, there is a problem here. Adding hundreds of thousands of transactions to the mempool on many coins will cause huge problems.
Another problem is if the same output is spent on both forks; this is called a double spend. In these coins, each transaction has one or more inputs and one or more outputs. Outputs can then be used as inputs to other transactions, and each output can only be used as an input once.
If that happens, the transaction that was on the fork that lost will itself be lost since the network will reject it for trying to spend an already spent output.
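The input/output rule above can be sketched in a few lines (toy Python, not actual consensus code):

```python
# Toy UTXO ("unspent transaction output") set: each transaction consumes
# existing outputs and creates new ones; every output may be spent once.
utxos = {("tx0", 0): 50}  # (txid, output_index) -> amount

def spend(utxos, inputs, outputs):
    """Validate and apply a transaction, rejecting double spends."""
    if any(i not in utxos for i in inputs):
        return False  # an input is missing or was already spent
    if sum(utxos[i] for i in inputs) < sum(amt for _, amt in outputs):
        return False  # cannot create value out of thin air
    for i in inputs:
        del utxos[i]
    for outpoint, amt in outputs:
        utxos[outpoint] = amt
    return True

assert spend(utxos, [("tx0", 0)], [(("tx1", 0), 50)])      # first spend: ok
assert not spend(utxos, [("tx0", 0)], [(("tx2", 0), 50)])  # double spend: rejected
```

During a partition, each side can accept one of two conflicting spends of the same output; once the fork resolves, whichever spend sits on the losing chain is rejected exactly like the second call above.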
Furthermore, if anyone travels from that country and connects to a network outside of it, they will eventually roll back and join the fork on that side of the partition, as that partition will inevitably end up with more "work done" than the one they left.
Now, if the country never regains internet access, you effectively have two different coins. But you risk the chaos described above. One possible solution in that scenario is to "hard fork" and have everyone on one side of the partition install a new blockchain client. Then it's official: they are two separate coins.
(This is phrased in a fashion which Bitcoiners will not appreciate but it is not incorrect. For precedent, see the hardfork around the 0.8 release.)
Assumption 1: a government is able to shut down its entire internet, and block off all electronic communications.
This assumption is fine. There are multiple historical examples of governments doing this.
When a government does this, though, there is no network split. A network split is when you have two networks that are cut off from one another. The government "shutting off the internet" does not create two networks; it leaves the population of the country with zero access to ANY network. Which means no split.
Which leads us to:
Assumption 2: A government is able to cut off access to the OUTSIDE internet, while also maintaining an INTERNAL network that can talk to each other, but not talk to the outside world.
This is basically impossible. There are no examples of governments being able to do this in any significant capacity.
Sure, there is some attempted internet censorship in countries like China, but the great firewall is extremely leaky. And even if it were 99% effective, 99% effective isn't good enough.
In order to partition the Bitcoin network, it is not enough to cut 99% of the population off from the outside world. You need to stop 100%, with no margin for error. This is because as soon as a SINGLE node is able to reach the outside world, it can rebroadcast the information to all internal nodes.
The block chain is essentially the same as a physical ledger that everyone (collectively) uses to confirm and record all transactions.
If everyone suddenly split (partitioned) into two separate groups with two separate ledgers, each 'everyone' (now that there are two) would continue to use the ledger of their group.
I'm not sure if bitcoin makes any arrangements for 'merging' ledgers. My understanding is that among divergent chains the longest chain always 'wins' and any others are discarded (orphaned).
So, once the two partitions are re-combined, when individuals reach out to 'everyone' and say "give me the latest version of the ledger" they would find the 2 competing ledgers and should choose to trust the one that is longer.
DESCRIPTION: My first distro was Debian. Then, for a while, I used Arch. But it kept irritating me with its total disregard for backwards-compatibility (symlinking /usr/bin/python to python3), coarse-grained packages (want to install QEMU PPC without pulling in every other architecture as well? too bad!), lack of debug packages (good luck rebuilding WebKit just to get stack traces after a SIGSEGV), and package versioning ignoring ABI incompatibilities (I once managed to disable the package manager by upgrading it without also upgrading its dependencies... and later cut off WiFi in a similar manner). So, when I finally trashed my root partition a few weeks ago, I decided to use the opportunity to return to Debian.
One thing I miss from Arch, though, is having an easy way to create a package. It's simply a matter of reading one manpage, writing a shellscript with package metadata in variables and two-to-four functions (to patch up the unpacked source, check the version, build it, and finally create a tarball), and then running `makepkg`. And it will just download the source code, check signatures, patch it, and build it in one step; it even supports downloading and building packages straight from the development repository. I took advantage of it to create locally-patched versions of some software I use, while keeping it up to date and still under the package manager's control.
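For readers who haven't seen one, a PKGBUILD really is just a small shell file. A minimal sketch (the package name, URL and checksum here are illustrative placeholders, not a real recipe):

```shell
# Minimal PKGBUILD sketch; metadata is illustrative, not a real package.
pkgname=hello
pkgver=2.10
pkgrel=1
pkgdesc="GNU hello, locally packaged"
arch=('x86_64')
url="https://www.gnu.org/software/hello/"
license=('GPL3')
source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
sha256sums=('SKIP')   # fill in a real checksum (e.g. with updpkgsums)

build() {
    cd "hello-$pkgver"
    ./configure --prefix=/usr
    make
}

package() {
    cd "hello-$pkgver"
    make DESTDIR="$pkgdir" install
}
```

With that file in place, `makepkg -si` downloads, verifies, builds and installs the package under pacman's control, in one step.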
Contrast that with creating a .deb, where doing the equivalent seems to require invoking several different utilities (uscan, dch, debuild) and keeping track of separate files like debian/control, debian/changelog, debian/rules and whatever else. All the tooling around building packages seems oriented towards distro maintainers rather than users. I'd love something that would relieve me of at least some of the burden of creating a local package from scratch.
DISTRIBUTION: unstable, I guess
- DESCRIPTION: TL;DR: Debian's web pages are hard to navigate and use, and it's very hard to see what's happening.
I contribute to FOSS projects whenever I have time and have been wanting to contribute to Debian, but the difficulty is off-putting. I'm used to searching for the program name and arriving at a portal page from which I can easily browse the source, see the current problems and instantly start interacting with the community. Unfortunately, contributing to Debian seems to require in-depth knowledge of many systems and arcane email commands. As a would-be contributor, this completely alienates me.
One reason is that Debian has many independent services: lintian, mailing lists, manpages (which btw are fantastic and give me hope), Wiki, CI, alioth, the package listing, BTS, etc. To contribute, you need to learn most of them. For example, searching for a package name gives me a page at packages.debian.org, but it's very hard to navigate or even discover the other services from there. I can't easily see if there are any lintian issues, critical bugs or current discussions. Additionally, I find most of the systems very hard to use (I still can't figure out the mailing list archives). Ideally, these services would be more tightly integrated.
Another big reason Debian is very hard to contribute to is that the main discussion takes place via mailing lists. I understand that many people enjoy working with them, but for light usage they are a big pain. Submitting and history are in completely different programs, there seems to be no real threading, volume is often high, and reading large amounts of email is a chore for me. A solution here would be an improved mailing list archive with options for replying integrated directly into the site.
- DISTRIBUTION: unstable
- ROLE/AFFILIATION: Student
DESCRIPTION: Any time you do a web search for anything regarding Debian, the search results include a huge amount of official but outdated information. Normally for Linux-related questions I refer to the amazing Arch wiki, but there are topics that are Debian-specific, and then sifting through all the detritus is a huge waste of time. There's a wiki, a kernel handbook, a manual, random xyz.debian.org pages, mailing lists, user forums, the Debian Administrator's Handbook...
Granted, it's a huge effort to clean all of that up, but perhaps there's a way to incorporate user feedback, so that pages can be marked as "outdated" by users, or updated by users (wait, there's a log-in page; does this mean I can edit wiki pages? Did not know that... :( ), or otherwise made more systematic.
In particular, it would be great to have more complete information on the installation process: which images to use (RC, ..., or weekly image?), how to put them on a USB stick (why does my 32GB stick now say it has 128MB?; you mean I can just copy the files to a FAT32-formatted drive?), what the options are (for hostname, is any name OK, or is a FQDN necessary?), etc. For every single clarification, there will be a hundred, a thousand, ten thousand people who are helped; that seems like a worthwhile investment. Everyone is a beginner at the beginning, regardless of knowledge outside this specific domain, so why not make it easier?
All that said, I have been using Stretch/testing for a few years, love it, love the Free/Libre Software ethos, love what you guys do, keep it up, thank you!
There are users who'd like to use a non-corporate community distro but who don't need or want software to be as old as software in Debian stable. The standard answer is "use testing" (e.g. http://ral-arturo.org/2017/05/11/debian-myths.html), but 1) security support for testing is documented to be slower than for stable and unstable (https://www.debian.org/doc/manuals/securing-debian-howto/ch1...) and 2) the name is suggestive of it being for testing only.
Please 1) provide timely security support for testing and 2) rename testing to something with a positive connotation that doesn't suggest it's for testing only. I suggest "fresh" to use the LibreOffice channel naming.
ROLE: Upstream browser developer. (Not speaking on behalf of affiliation.)
Python 3 as default
Just to quote from the packaging manual:
> Debian currently supports two Python stacks, one for Python 3 and one for Python 2. The long term goal for Debian is to reduce this to one stack, dropping the Python 2 stack at some time.
The first step for that would be of course Python 3 as default Python version and I'd like to see that for buster, as Python 3 nowadays offers way more features than Python 2 and should be the choice for new Python projects.
DESCRIPTION: Right now, Debian's default install includes rsyslog, and every message gets logged twice. Once in rsyslog on disk, and once in journald in memory. Let's turn on the persistent journal by default, and demote rsyslog to optional. (People who want syslog-based logging can still trivially install it, such as people in an environment that wants network-based syslogging. But that's not the common case.) This will make it easier to get urgent messages displayed in desktop environments as well.
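In the meantime, the persistent journal can already be enabled by hand; a sketch using systemd's documented mechanism (standard paths):

```shell
# Creating /var/log/journal switches journald to persistent storage.
sudo mkdir -p /var/log/journal
sudo systemd-tmpfiles --create --prefix /var/log/journal  # fix ownership/ACLs
sudo systemctl restart systemd-journald

# Or state it explicitly in /etc/systemd/journald.conf:
#   [Journal]
#   Storage=persistent
```

The proposal above would simply make this the out-of-the-box default.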
DESCRIPTION: on distros like arch, to a lesser extent void and even gentoo, writing package definition files (PKGBUILDs, ebuilds, templates) is relatively straightforward; in contrast, i don't even know where to start with finding, editing and building debian packages. i think they're built from source packages but beyond that i have no clue. i think visibility of documentation could help here, if not more radical changes to be more similar to the arch/gentoo workflow.
DESCRIPTION: There have been numerous detailed analyses posted to debian-devel that go through every package in standard and important and list out which ones shouldn't be. However, actual changes have only ever been made here on a point-by-point basis. (I've managed to get a dozen or so packages downgraded to "optional" and out of the default install by filing bugs and convincing the maintainer.) I'd really like to see a systematic review that results in a large number of packages moved to "optional".
This would include downgrading all the libraries that are only there because things depending on them are (no longer something enforced by policy). And among other things, this may also require developing support in the default desktop environment for displaying notifications for urgent log messages, the way the console does for kernel messages. (And the console should do so for urgent non-kernel messages, too.)
DISTRIBUTION: Start with unstable early in the development cycle, so that people can test it out with a d-i install or debootstrap install of unstable.
- DESCRIPTION: This is a feature of the guix package manager. From their website:
"Each invocation is actually a transaction: either the specified operation succeeds, or nothing happens. Thus, if the guix package process is terminated during the transaction, or if a power outage occurs during the transaction, then the user's profile remains in its previous state, and remains usable."
They also do transactional rollbacks, but I'm not sure how realistic that is for the apt package system.
- DESCRIPTION: If I install e.g. postgresql, I would prefer it not to start automatically by default. I would rather see a message: "If you want x to start on boot, type 'update-rc.d x enable'"
- DISTRIBUTION: (Optional) [stable]
- ROLE/AFFILIATION: (software dev, mostly web)
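Debian actually ships a hook covering part of this already: if /usr/sbin/policy-rc.d exists and exits 101, invoke-rc.d will refuse to start services when their packages are installed. A sketch of that documented mechanism:

```shell
# Prevent services from being auto-started at package install time.
cat <<'EOF' | sudo tee /usr/sbin/policy-rc.d
#!/bin/sh
exit 101   # "action forbidden by policy"
EOF
sudo chmod +x /usr/sbin/policy-rc.d

# Then opt in per service when you do want it at boot, e.g.:
#   sudo systemctl enable --now postgresql
```

This is all-or-nothing, though; the wishlist item is really about a friendlier per-package default plus a hint message.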
DESCRIPTION: The license conflict between the open source ZFS and the open source Linux kernel means ZFS needs to be in contrib. Unlike a lot of other packages in contrib, ZFS doesn't rely on any non-free software; it just can't be in Debian main because of the conflict of licenses.
However, it would be nice if there was a way to have a more official path to ZFS on root for Debian. The current instructions require a fairly high number of steps in the ZFS On Linux wiki.
The ZFS On Linux wiki also lists a new initramfs file that has to be included so ZFS is supported. It seems odd that Debian couldn't include that as part of initramfs. I realize Debian doesn't want to necessarily promote non-free software, but this is free software that just conflicts with the GPL. It doesn't seem like it should be a second class citizen where you have to manually include files that should already be part of the package.
By the nature of the license conflict, it will be a second class citizen in that it can't be part of the normal installation package and you'll have to compile on the fly. However, it would be nice if there was a mode in the Live CD that could handle ZFS installation rather than doing it all manually.
DISTRIBUTION: currently mixture of testing/unstable but I'd like to use day(s) old sid (see other post).
DESCRIPTION: If you are using Debian, especially stable, you have to put up with outdated packages. This is especially a problem with browsers, although you do include security updates and track Firefox ESR, if I understand correctly. But things like WebKitGTK do not receive updates, and fall behind feature- and security-wise after a while.
I think keeping up-to-date versions and having a stable distribution is not per se a conflict. Stable means to me no breaking changes, no need for reconfiguration when I update. It shouldn't mean frozen in time.
It would be great if certain packages would receive frequent updates even in stable:
- packages that are not dependencies, have a good track record of backwards compatibility, and are unlikely to break
- packages that have to be updated because of security issues (which I think is already addressed now)
- or because of a fast moving ecosystem - even if it was safe, it is frustrating to use a very outdated browser component. I think many networked packages could fit in this category, e.g. Bittorrent or Tor clients, if there are protocol changes.
I think the situation has improved a lot (https://blogs.gnome.org/mcatanzaro/2017/06/15/debian-stretch...), and it would be great to have a stable basis in future and still have up-to-date applications on top as far as possible.
DISTRIBUTION: stable (but also others)
DESCRIPTION: Long-time Debian user here and free software supporter. One aspect where I don't have any practical choice for free software is my non-free iwlwifi firmware.
It's a huge PITA to install Debian like that when you don't have the fallback of a wired network. You provide "non-free" firmware packages, but these don't have the actual firmware! Rather they're dummy *.deb packages that expect to be able to download the firmware from the installer, which is of course a chicken & egg problem for WiFi firmware.
I end up having to "apt install" the relevant package on another Debian system, copy the firmware from /lib manually, copy it to a USB drive, then manually copy it over in the installer.
I understand that the Debian project doesn't want to distribute non-free firmware by default, but it would be great to be able to run a supported official shellscript to create an ISO image that's like the Stretch installer but with selected non-free firmware available on the image.
DISTRIBUTION: Stable on my server, testing on my laptop.
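For what it's worth, the stock installer can already load firmware from removable media: per the Debian installation guide, it looks for a /firmware directory on an attached (FAT-formatted) USB stick. A sketch of that workflow (device names are examples; double-check yours):

```shell
# On another Debian machine, fetch the firmware package:
apt download firmware-iwlwifi

# Copy it into a /firmware directory on a FAT-formatted USB stick:
sudo mount /dev/sdX1 /mnt        # /dev/sdX1 is an example device name
sudo mkdir -p /mnt/firmware
sudo cp firmware-iwlwifi_*.deb /mnt/firmware/
sudo umount /mnt

# The installer should then offer to load the missing firmware from the stick.
```

That still requires a second machine, which is exactly the chicken-and-egg problem the post describes; an official firmware-included image builder would remove that step.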
DESCRIPTION: AppArmor improves security by limiting the capabilities of programs. Ubuntu did this years ago. I'd like to see profiles for web browsers enabled by default.
I think AppArmor is the right choice of default Mandatory Access Control for Debian because Ubuntu and security-focused Debian derivatives like Tails and SubgraphOS have already committed to it.
DESCRIPTION: a consensus on the next generation of package management. Please. We have had decades of fragmentation (not to mention duplicated innovation) around the RPM vs DEB ecosystem. This is part of why it is still hard for beginners to want to use Linux: try explaining rpm vs deb vs whatever else to anyone who comes from a Mac. It's why they would pay for the Mac rather than use Linux ("it's too hard to install software").
It's not just my opinion: PackageKit (https://www.freedesktop.org/software/PackageKit/pk-intro.htm...) was invented for exactly this reason, so that something like GNOME Software can work the same on every flavor of Linux. It's time to build this the right way.
You have an opportunity now, but again the camps are getting fragmented. We now have snap (Ubuntu/deb) vs Flatpak (Red Hat) all over again, and pretty strongly divided camps are beginning to form around them. It seems the new rhetoric is snap for servers and Flatpak for desktops... which absolutely doesn't make sense.
Debian is the place to make this stand: systemd was adopted from Fedora despite Ubuntu making a strong push for something else. Debian made Ubuntu adopt systemd. I don't think anyone has anything but respect for that process. Debian 10 must take a stand on this.
- DESCRIPTION: Creating a custom remote/local/CD/DVD repo or a partial mirror is simply a nightmare, mainly because package management internals are poorly documented. There are many tools developed to solve just this problem, but most of them aren't actively maintained. Aptly seems like the best right now, but is far too complicated and inflexible.
DESCRIPTION: Many laptops (e.g. Macbook Pro) come with retina screens, but most of us use 'regular' monitors. Even after setting org.gnome.desktop.interface scaling-factor and playing with xrandr, it can be difficult or impossible to get a single external non-retina display set up in the right position and without one screen containing tiny text (or huge text).
Being able to make it work at all, and persist after a reboot, would be great. Having per-monitor scaling in the Display settings panel (or in 'Arrange Combined Displays') would be amazing.
DISTRIBUTION: I've experienced this with jessie. I haven't tried with stretch.
stretch made OpenSSL 1.1 the default openssl package. Unfortunately, OpenSSL 1.0 was kept around, since so many things depended on it.
There should now be enough time that a firm stance can be taken toward not allowing OpenSSL 1.0 in Debian Buster.
Once TLS 1.3 is finalized, OpenSSL 1.2 will be released with TLS 1.3 support. Not supporting TLS 1.3 in buster would (in my opinion) reflect poorly on Debian. That means supporting OpenSSL 1.2, and having three OpenSSL packages (1.0, 1.1, and 1.2) is too much for one distribution.
There are users who simultaneously want to get their infrastructural packages like compilers from their distro and want to build fresh upstream application releases from source.
This leads to pressure for Linux apps and libraries to be buildable using whatever compiler version(s) that shipped in Debian stable, which amounts to Debian stable inflicting a negative externality on the ecosystem by holding apps and libraries back in terms of what language features they feel they can use.
To avoid this negative externality, please provide the latest release (latest at any point in time, not just at the time of the Debian stable release) of gcc, clang, rustc+cargo, etc. as rolling packages in Debian stable, alongside the frozen version used for building Debian-shipped packages, so that Linux apps and libraries aren't pressured to refrain from adopting new language features as upstream compilers add support.
(Arguably, the users in question should either get their apps from Debian stable or get their compilers from outside Debian stable, too, but the above still seems a relevant concern in practice.)
100% reproducible packages
While having over 90% of packages reproducible already is awesome, 100% would be even better. The stretch release announcement describes best why:
> Thanks to the Reproducible Builds project, over 90% of the source packages included in Debian 9 will build bit-for-bit identical binary packages. This is an important verification feature which protects users from malicious attempts to tamper with compilers and build networks.
DESCRIPTION: There are a ton of packages in Debian. I sometimes browse through all of the packages looking for some gem that I didn't know about before. It's a time intensive process and I don't have any input into my decision other than reading the description. Sometimes I'll install it immediately. Other times I'll check out the website to see if it's still maintained (or if there's a better alternative). It's all a very manual process.
popcon doesn't fill this void. Popcon tells me what packages are popular across all users. I'm more interested in what a subset of users with similar interests or preferences would install. Or maybe I want to see what it's like to live in someone else's shoes. For instance, maybe I'm learning a new programming language and I want to set up my environment like an experienced user's, so I have all of the popular libraries already installed.
It would be nice if there was a better way to discover packages that are relevant to you. Perhaps you could add this feature as a way of getting people to install popcon? For example, you could say if you install popcon, then it will upload your set of installed packages and make recommendations for you.
If people are able to add metadata about themselves (e.g. I'm an expert Emacs user and I'm a golang developer), then you could use that plus their package list to make recommendations. I could say "show me what packages golang developers tend to install". Or you could say "for someone with a package list similar to mine, find out what packages are popular that I'm missing".
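The recommendation idea above can be sketched with Jaccard similarity over installed-package sets (toy data and made-up function names; real popcon submissions would supply the sets):

```python
def jaccard(a, b):
    """Similarity of two package sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def recommend(mine, others, top=3):
    """Suggest packages that users with similar sets have and I lack."""
    scores = {}
    for other in others:
        w = jaccard(mine, other)
        if not w:
            continue  # ignore users with nothing in common
        for pkg in other - mine:
            scores[pkg] = scores.get(pkg, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:top]

mine = {"emacs", "golang-go", "git"}
others = [{"emacs", "golang-go", "git", "delve"},
          {"emacs", "golang-go", "gopls", "git"},
          {"vim", "rustc"}]
print(recommend(mine, others))  # suggests delve and gopls, not rustc
```

Filtering `others` by self-reported metadata ("golang developers") before scoring gives exactly the "show me what packages golang developers tend to install" query.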
Currently, it's too hard to report bugs, inspect Debian source packages, propose fixes, etc. The overhead of making a simple contribution is too high. Note: this isn't a Debian-specific issue; many open source projects have old infrastructure.
First-class init that is not systemd
I believe it's no secret that systemd is highly controversial, even spinning off a Debian fork called Devuan. It might be more favorable to reunite the community by including one alternative init system that is, fundamentally, a first-class citizen in the Debian ecosystem.
"First-class" implies that the user is given a choice on new installations in a specified prompt. The default should be the option "systemd (recommended)".
buster+1 given the expected effort
Individual and hobbyist system administrator
- DESCRIPTION: The installer should offer an option to install a simple WM, like i3 or awesomewm, in the way that there is an option in the minimal installer to install a DE like Xfce or GNOME. Bonus points if you make it aesthetically pleasing to some extent.
- HEADLINE: Kernels in repo which do more than the mainline/default kernel
- DESCRIPTION: I'm thinking specifically of the patches by Con Kolivas, but any other useful pre-compiled kernels in the repo would be great. It would save me having to figure it out by myself, and I'm sure there are many who would welcome the availability of pre-patched kernels, better I/O schedulers, etc.
- HEADLINE: Look into more optimisation (like Solus)
- DESCRIPTION: Solus (www.solus-project.com) does some optimisation on their distro that would be good to have in any other distro.
- ROLE/AFFILIATION: Infrastructure programmer for multinational corp
Secure Boot in Stable
UEFI Secure Boot Support in Debian.
Debian does not run on systems with Secure Boot enabled.
I work at an insurance company and all of our development computers and most of our servers run debian jessie.
We will probably upgrade to Debian 9 very soon! Thanks for all the hard work on Debian, lamby!
EDIT: grammar and formatting
DESCRIPTION: The wiki is frequently stale or incomplete. A lot of people get information much more readily out of a wiki than mailing lists. Like me, for example :) Mailing lists have a very high latency (often infinite) and can be difficult to search.
For example, say you want to host your own apt repo to hold a custom package; this page is not very clear https://wiki.debian.org/DebianRepository/Setup - how do you choose which of all the software types to use? It's a reasonable software overview, but not great to help people get a repo set up.
Arch has a fantastic wiki that's clear and concise. It's also more readable (mediawiki) than Debian format, though I understand Debian aims to work as bare html for greater device compatibility.
DISTRIBUTION: Primarily Stable, with later sections for non-stable if needed.
DESCRIPTION: Recently had to reinstall my Debian system for the first time in a while, and was struck by how user-unfriendly the installer still is compared to many of the alternatives. I don't think it's necessarily a problem that it's ncurses, but it could use some more explicit hand-holding. I remember one point where I needed to select some options from a list and there was no indication of what operation was required for selection, for example (I think I needed to hit '+'?). I'm pretty familiar with command lines and curses-type UI's and this was unintuitive for me, I can only imagine how frustrating it might be for a more desktop-oriented user.
I also recall a very confusing UI paradigm where the installer steps are a modal view and there's a hidden 'overview/master menu' you can back out into at any time, and it's not clear to me how those two modes are related and what state it leaves your installation in if you back out and then jump into the installation at a different step.
Generally the explanatory text is quite good at telling you what decision needs to be made, and providing necessary info to research that decision if necessary, but how you make those decisions I think could still be improved.
- DESCRIPTION: Debian has been a great source of innovation and leadership within the OSS world. Make the next big move by adopting pledge(2) from OpenBSD to be the first major mandatory security feature on Linux. There is little hassle in making programs use it, and the LOC in the kernel is tiny compared to say SELinux. See  for more details.
- DISTRIBUTION: Any and all!
- ROLE/AFFILIATION: CS program analysis researcher with MIT/CSAIL.
- DESCRIPTION: Debian is the only distribution that I know of that provides .iso images from which you can install the operating system and subsequently install a wide range of (libre) software. In addition, Debian provides update .isos. These affordances make installing and maintaining a desktop computer without an Internet connection, or with a slow and expensive connection, viable. I hope that Debian will continue to provide this affordance as we transition from optical disks over the next few releases.
- DISTRIBUTION: All Debian distributions.
- ROLE/AFFILIATION: End user (desktop)
Wayland as default display server
X11 is aging, so it's time to switch to Wayland. It'd be cool if buster would ship with Wayland as default display server.
DESCRIPTION: I tested the stretch release candidates in VirtualBox, and while I did eventually get them working, I had to follow the instructions in several bug reports from across both the Debian and VirtualBox project websites.
I don't mind following instructions, so if there is a reason why this can't be achieved seamlessly with zero configuration, then I would at least like to see some official instructions prominent on the Debian website.
COMMENT: Debian is awesome, thanks for everyone's hard work!
DESCRIPTION: On rolling release distros there's currently a vim version that ships rust syntax highlighting, rustc and cargo. This is pretty much all you need to get started with rust development. Debian stable currently ships rustc, but lacks cargo, which is rather essential if you actually want to compile your project on a debian server. The vim-syntax thing would be nice to have. :)
DESCRIPTION: The #1 reason why I don't use Debian on the desktop is missing wifi support during installation. I wish Debian could write and include free wifi drivers for all recent laptops.
DISTRIBUTION: Debian 8 on the server. Mint Mate on the Desktop.
ROLE/AFFILIATION: Founder and CEO of a tech startup.
DESCRIPTION: At https://fosdem.org , we are using the nginx rtmp module intensively. It seems to be becoming a de facto standard when an in-house streaming server is preferred over an external streaming platform. It combines excellently with ffmpeg, the recently packaged voctomix and several components of the gstreamer framework to create an excellent FOSS video streaming stack. Some debconf video people seem to be interested too, and there has been some positive interest from the Debian nginx packagers. Unfortunately, no clear way forward yet.
Hopefully, Buster opening up might create some opportunities to get things going again!
SEE ALSO: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=843777#23
DISTRIBUTION: Debian 10 stable & Debian 9 backports.
ROLE/AFFILIATION: http://fosdem.org staff (= all year round volunteer), responsible for video streaming & recording since FOSDEM 2017
DESCRIPTION: This is a nitpick/wishlist item, really. I started using Stretch while it was in testing, and noticed that most updates would download rather large sets of icons (a few MB). They look like archive files of icons, and I guess that if any change happens the whole set is downloaded again. This wasn't the case in Jessie.
When on a slow Internet link, it can definitely slow down upgrades. It would only be noticeable for Testing/Unstable, as otherwise these sets of icons would not change much. But when regularly updating testing, often these icons sets were a significant part of the downloaded data.
It could be nice to make updating those icons optional, for people behind slow links. Alternatively, handling them as a versioned list (text, easy to diff efficiently) + independent files could make their update more efficient than compressed archive files.
Again, just a nitpick/wishlist item. It's just that I haven't chased down what this comes from (I guess for GUI package management like synaptic? TBC) and don't know where this could be reported. You just gave me the opportunity ;)
DISTRIBUTION: Testing/Unstable (any version with frequent changes)
- DESCRIPTION: Continue with the values that make Debian great. E.g.:
https://www.debian.org/code_of_conduct
https://www.debian.org/social_contract
https://www.debian.org/intro/free
- DESCRIPTION: PNG image files use too much space in Debian's source tree, in users' installed systems, and on Debian's website.
All meta-data that does not affect display should be removed and the file should receive a complete lossless compression run with an optimizing tool.
Just try: find / -name "*.png" 2>/dev/null | xargs -d '\n' optipng -preserve -o7 -zm1-9 -strip all
A byte here, a byte there, and suddenly your system is several MB smaller and even loads a bit faster.
Upstream should be made aware of this.
DESCRIPTION: Any plans to go ahead and stabilize the dpkg library for Buster? Having access to a stable package management library is essential in our software, i.e. being able to verify package signatures and to query the database for files, neither of which is currently supported.
DESCRIPTION: It would be great to have a central keychain where keys (SSH, PGP) could be unlocked on a per-session basis. Think of a merge between gpg-agent (one that wouldn't scream about being hijacked every other day) and ssh-agent (one that wouldn't be shell-specific and could handle multiple keys without having to manually run:
> eval $(ssh-agent -s)
> ssh-add /path/to/key1
> ssh-add /path/to/key2
> ...
).
As a desktop user, what I would like is, on a session basis, when I first provide the passphrase for a given key (when I ssh into a server from the CLI, or decrypt a PGP-encrypted email from Thunderbird with Enigmail, for instance), to have a keychain securely unlock these keys for the duration of the session (that is, until I lock the screen, close the lid or log out).
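For what it's worth, gpg-agent can already cover part of this wish by speaking the ssh-agent protocol itself; a sketch, with the cache lifetimes as illustrative values (this doesn't give the full lock-screen integration asked for above, but it does remove the per-shell ssh-add dance):

```
# ~/.gnupg/gpg-agent.conf
# Serve the ssh-agent protocol in addition to GPG:
enable-ssh-support
# Keep unlocked keys cached for the session (seconds):
default-cache-ttl 3600
max-cache-ttl 7200
```

Then, in your shell profile, point SSH at gpg-agent's socket with `export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)`.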
- DESCRIPTION: I think there are lots of ways. Things like Flatpak look promising, but also Docker. It would be nice if there were fewer papercuts when using those things. I also dream about a command named "playground [name]" which instantly gives me a shell where I can try stuff without interfering with anything else. When finished, I can just "playground remove [name]". I know that it's possible today, but it's a bit of a hassle.
- ROLE/AFFILIATION: (software developer, mostly fullstack webdev)
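Something close to that "playground" already falls out of container tooling; a rough sketch, where the image choice and function name are just illustrative assumptions:

```shell
# playground <name>: throwaway shell in a pristine sid environment.
# --rm deletes the container the moment you exit, so there is no
# separate "playground remove" step to forget.
playground() {
    docker run --rm -it --hostname "pg-$1" debian:sid bash
}
```

The same one-liner idea works with systemd-nspawn or LXC instead of Docker.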
Also, get rid of all interactivity during install and upgrade. It's deadly for managing big fleets.
- DESCRIPTION: I have tried installing Debian many times on various machines and have had huge trouble getting the install USB stick to boot properly (or, in the end, getting the bootloader to install). Ubuntu installs flawlessly on these machines.
DESCRIPTION: More kernel self-protection (KSPP) security features enabled by default, perhaps even Firejail pre-installed, Wayland as the default along with Flatpaks, etc.
DESCRIPTION: In the past I've often run into stuff in Debian just being too old for my needs. I don't need the bleeding edge, but two years is a really long time. I switched to Ubuntu a few years ago, but not being a fan of Canonical, it would be nice if I could come back to Debian.
ROLE: full stack web developer
DESCRIPTION: GCC 6.4 will be released soon (July). I wish Debian would get all the regression fixes that this update will bring (according to the new numbering convention, version 6.4 brings no new features and thus no breaking changes, only fixes). Same for CUDA 8.0.61 (already available for ~5 months), which is a maintenance update after version 8.0.44, the one available in Stretch.
I'm saying this because Jessie never got the latest bug-fix release (4.9.4) of the GCC 4.9 series, not even in backports (it still offers 4.9.2 instead). I wish there were a policy that allowed regression fixes from upstream to be ported with the same priority as security fixes. GCC and CUDA are only examples; the same scheme would be applicable to any other package as well.
In my view, this would foster Debian adoption on desktops. If this can't be done for the current Debian Stable, I hope my (and other people's similar) concerns will be taken into account in the future. As a developer, I care about this level of support. We all love Debian, we'd just like to make it better. Thanks.
DISTRIBUTION: Debian Stable
- DESCRIPTION: Jessie had a standard Live CD. While the HTML still refers to this flavor, it is not found on any mirror that I checked for Stretch.
I have to use the live CD to install ZFS on Root. I would prefer to not bother downloading or booting a desktop environment when I don't need one.
I don't know why it was removed, but the name was always strange to me. Name it textonly or expert or something so people don't choose it. Standard sounds like it is the recommended image.
- DISTRIBUTION: Live CD
Using Wi-Fi Direct on most Debian-based distros is a hassle, requiring a lot of manual terminal work. A GUI in the network settings for Wi-Fi Direct would make connections easier and faster.
DESCRIPTION: Please disable pcspkr by default :-)
- DESCRIPTION: Since Debian testing/unstable are often advertised as targeted at desktop usage, they could benefit from some more focus on preventing breakage. I know it's somewhat counterintuitive to expect stability from an "unstable" or "testing" variant, but at the same time Debian can benefit from avoiding the stigma of being a server-only distro. Having a robust out-of-the-box desktop experience (which is not falling behind) is the goal here.
In the period between Jessie and Stretch, testing had a number of breakages in my KDE desktop. Packages fell out of sync (KDE Frameworks and Plasma packages weren't properly aligned, because some packages were stuck in unstable due to not building on some architectures), causing all kinds of instability issues. It has lately become a bit better, but I think desktop stability could get some more love, especially for the most popular DEs like KDE.
And if neither testing nor unstable fits that role, maybe another branch should be created for it?
- DISTRIBUTION: Debian testing / unstable.
- ROLE/AFFILIATION: Programmer, Debian user.
- DESCRIPTION: The last laptop that I bought from Lenovo had a Thunderbolt port, and I had to use that port to get 3 x 4K monitors to work. The hardware shipped with non-functional firmware. The only way to upgrade the firmware was by booting Windows. I was not sure if there were other devices with old firmware, so I spent hours waiting for a full OS upgrade. Dell was working on a Thunderbolt firmware loader at the time; not sure if they have released it by now.
Similar situation with the Intel AMT firmware security issue (CVE-2017-5689). The only way to upgrade (AFAIK) is by running a particular Windows installer.
It seems really dumb having to buy a throw-away drive just to be able to boot Windows to upgrade firmware. Obviously, I lay this at the feet of the hardware vendor. I was going to suggest pre-installed Debian, but Lenovo would ruin that with pre-installed crapware.
- DISTRIBUTION: stable
- ROLE/AFFILIATION: entrepreneur
DESCRIPTION: In the Debian installer you can choose a few standard setups. The default options are a bit odd to me, and I also miss a lot of packages by default.
A bit of cleanup would be nice (IIRC you can select "database server", for example, which will give you MySQL/MariaDB).
It would be nice if you could specify a code like "xxx/yyy" that would resolve to a public repo of predefined templates, in which you could also define your own.
I, for one, would define a server, workstation and laptop setup. The server setup would include sshd, screen, etc.
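As a rough approximation of such templates, Debian's existing preseeding mechanism already lets you publish one file per setup; a hypothetical "server" template (the package names are illustrative):

```
# server.preseed -- fetchable from any public repo/URL at install time
tasksel tasksel/first multiselect standard, ssh-server
d-i pkgsel/include string screen etckeeper htop
d-i pkgsel/upgrade select full-upgrade
```

The installer can fetch this by booting with `preseed/url=http://.../server.preseed` on the kernel command line, which gets most of the way to the "xxx/yyy resolves to a template" idea.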
DESCRIPTION: There are a few Debian meta-packages, but they are really broad. Example: it would be great if a few developer-leaning packages were grouped into one meta-package.
For instance, I always install etckeeper, apt-listchanges and apt-listbugs. I think anyone following testing or unstable would want to install those, and I'm not aware of any real alternatives to them. I can't imagine using unstable without apt-listbugs to warn you when there are high-priority bugs in packages that have already been uploaded.
DISTRIBUTION: mixture of testing/unstable.
DESCRIPTION: It is often recommended to separate the OS partition from the users' data partition containing /home. This should be available as an easy option for non-IT users. If one partition exists, a recommended split size is the default. If two partitions exist, they are checked for OS files and home files, so the user sees which one will be overwritten. This is convenient and a safety net for most users, and a lifeline for non-IT people who may not know the recommendation, or how to proceed.
- DESCRIPTION: It would be nice if the Debian testing freeze were delayed until a sufficiently stable version of GTK 4 is included in testing (and thus eventually in the next stable).
DESCRIPTION: The Linux x32 ABI, for the most part, combines the best of both worlds on x86: the lower memory footprint of 32-bit software (and, likewise, the 4 GiB per-process limit that goes with it), by keeping pointer sizes and data types the same as i386 while still allowing applications to take advantage of the expanded registers, instructions, and features of x86_64 CPUs. For most systems that aren't database servers, this can result in large memory footprint reductions and greater performance as a result. Debian has had an unofficial x32 port for years; it is presently difficult to install and get running.
DESCRIPTION: Debian unstable still has elixir 1.3.3. It looks like the "official" path forward is to add Erlang Solutions as another apt repository and install packages from there. However, this feels wrong to me as a user. I want to get packages from Debian.
I can't remember which distribution it is, but IIRC one of the other ones has developers upload builds from their personal machines and they are signed with GPG. I don't like this because it is opening yourself up to problems. Perhaps someone uploads a malicious binary build. Or perhaps their developer machine is compromised and someone else uploads it for them or infects their upload.
All of this would go away with 100% reproducible builds in Debian, with packages built on Debian infrastructure. That's not the case when Erlang Solutions is set up as the provider.
I realize this is a minor point as few people will install it, but I was surprised that other distributions include the latest Elixir but Debian does not. The latest is 1.4.4 and I couldn't find anything related to 1.4.x in the upload queue or bug reports. It seems like the package maintenance has been outsourced to Erlang Solutions.
DESCRIPTION: It would be great if Debian finished its LXD (container hypervisor) packaging and got it up to a decently complete level (on par with Ubuntu).
- DESCRIPTION: This request might not be considered in the short term, or ever, but personally I hope it can be done.
For the desktop, I wish there were a Debian-defined environment or set of interfaces that transparently integrated with desktop components like the power manager. Then, when switching between, for instance, desktop environments or window managers, I wouldn't need to tune specific (particularly non-Debian) settings to get things working.
For Kernel, I would like to see integration with seL4.
- ROLE/AFFILIATION: Software Engineer
No systemd (and pulseaudio if desktop) for me.
DESCRIPTION: something like 'apt-get deps <package>', returning a list of all dependencies for a package. This would be super duper handy when trying to install a standalone package file on a system where the dependencies aren't already present.
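On an installed system, `apt-cache depends --recurse <package>` gets close to this wish, and for a standalone .deb the dependency list is right in the control file (readable with `dpkg-deb --field pkg.deb Depends`). A self-contained sketch of that last step, using an inline control file as stand-in data so it runs anywhere:

```shell
# Stand-in for the output of `dpkg-deb --field some.deb`
# (on a real system you would call dpkg-deb directly).
control='Package: demo
Version: 1.0
Depends: libc6 (>= 2.17), libssl1.1
Description: demo package'

# Pull out just the Depends field, one dependency per line:
printf '%s\n' "$control" | sed -n 's/^Depends: //p' | tr ',' '\n' | sed 's/^ //'
```

This prints each dependency (with its version constraint) on its own line, ready to feed to an install loop.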
- DESCRIPTION: I would absolutely love a well-supported container system for running testing/unstable in a container. I feel that Docker requires a lot of upfront work, with mixed results.
We often develop software using packages from the next Debian version (such as Python 3.6), and these packages aren't always available in backports or otherwise outside of testing. In these cases it would be really nice to easily boot this software up in a container.
- ROLE/AFFILIATION: Lead Product Developer at Cetrez
DESCRIPTION: Installing Debian should be a straightforward process for average Joes and Janes; that's not the case currently. The process of acquiring the proper ISO and getting it onto a bootable USB stick/SD card is overly complicated (because the information is hidden, missing or incomplete).
As an average Joe, when you visit debian.org there is no obvious place to click to get the latest stable ISO. The default (in a tiny box in the upper right corner of the homepage) is a net-install ISO. Net-install ISOs are sub-optimal for users who require special firmware for their network card (dvd-1 amd64 should be the default).
You should consider that the default install process for most desktop users will consist of installing Debian from a USB stick on an amd64 system. Once the right ISO is properly put forward, you should provide clear and detailed info on how to transfer the ISO to the USB stick and make it bootable.
Etcher is a free/libre, cross-platform, user-friendly, straightforward GUI (over "dd", IIRC) that takes care of the process of making a bootable drive. It should be promoted and made part of the install docs.
The same goes for SD-card installs: many single-board computer enthusiasts (who are not necessarily tech savvy) give up on making a bootable SD card themselves and simply buy a pre-installed one, because the information isn't provided in a straightforward fashion on the Debian website and they are not offered a relatively simple process.
No, using "dd" from the CLI isn't simple: as a Joe you must care about many concepts that are un-obvious (wait, what does it mean that "the volume is mounted"? How do I unmount it? How do I identify the proper volume? Fuck, I unmounted the drive and it won't auto-mount anymore! File system? What are you talking about? MBR? DOS compatibility?...)
ROLE/AFFILIATION: electronics engineer, based in Europe, involved in local initiatives to promote free software (LuGs, crypto parties, hacker spaces,...)
Thank you for your awesome work, I wouldn't be involved in promoting free/libre operating systems if it wasn't for Debian (a great community project that cares for users rights/freedoms and provides an overall simple desktop experience).
- DESCRIPTION: LXD isn't a rewrite of LXC; in fact, it's building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It's basically an alternative to LXC's tools and distribution template system, with the added features that come from being controllable over the network.
- DISTRIBUTION: Stable
- ROLE/AFFILIATION: Enthusiast and wanna be developer
Instead of pinning to, say, PHP 7.1.5, pin to 7.1 and stop backporting fixes. It's okay to have 7.1.6.
DESCRIPTION: Debian should make it easy to set the desktop to a light color theme. Right now it is quite difficult for users to change the desktop look and feel. Please also do usability testing of changing desktop settings. The current dark color scheme does not suit all users; offering both a dark and a light theme would cover more users.
Many thanks to all the Debian developers for creating a great distribution!
DESCRIPTION: Personally I'd like something like 'apt-get update --local', which pulled down a remote copy of every repo. That'd be super handy for something like a build machine, and it'd reduce the need to install & maintain an Aptly repo.
DESCRIPTION: I think I represent a number of users. We want to use unstable as a rolling distribution, but we don't want to run into every edge case. Testing doesn't update fast enough and its security support isn't as good. There's no middle ground between the absolute bleeding edge and the too-conservative testing.
I used to use unstable, but there's that annoying race condition where I could upgrade at exactly the wrong time, when brand-new (broken) package versions had been uploaded and not enough time had passed for even the first round of bugs. I'd like a one-day safety buffer so apt-listbugs has a chance to warn me about catastrophic bugs.
Setting up a true rolling distribution may be too much work for Debian. Actual Debian developers will be running unstable. It would be nice if there was a middle ground for non-Debian developers who want a rolling distribution but don't want to get hit by every edge case in sid.
I think a nice compromise would be to cache the sid packages for a day (or two) and set that up as another branch. A full day of possible bug reports from people on bleeding edge sid would give us a chance at missing the catastrophic edge cases while still being very current.
I think this could encourage more Debian developers. If I wanted to join Debian as a DD, I would need to have an unstable installation somewhere. It wouldn't be my daily driver because I don't want to run into those breaking edge cases. If my daily driver was day old sid, I could have another machine / VM that runs sid and would almost be identical to what my daily driver is running. It's not like testing where packages could be entirely different due to the delay in migrating.
Unlike testing, day old sid would migrate all packages even if there are release critical bugs. There would be no waiting period beyond the strict day limit. If there is a catastrophic edge case, people already on day old sid using apt-listbugs would be able to avoid it. New installations would hit it but you could warn users (see below).
If you make apt-listchanges and apt-listbugs as required packages for day old sid, then people could be informed about what broke on the previous day.
It would be nice to integrate apt-listbugs into an installer for day old sid and fetch the latest critical or high priority bugs before the installation. A new user could then decide if that's a good day to install. Or you could have a simple website that says here's the day old sid installer and these packages currently have critical or high priority bugs. If you would install those packages, maybe wait another day or two for it to settle down.
Maybe day-old sid is too close. Perhaps two-day or three-day-old sid? I don't feel that testing fills this role already, because testing waits for 2-10 days and won't update if there are release-critical bugs. I'm fine with something closer to bleeding-edge sid, but I'd really like to allow a few days for the bleeding-edge users to report bugs so I can decide whether to upgrade. I don't have an expectation that day(s)-old sid is more stable than testing or less unstable than sid. All it provides is a buffer so I can get bug reports and make my decision about whether to upgrade.
DISTRIBUTION: day old sid.
- DESCRIPTION: Tool to log process spawns, kills, network connection start/stop, file modifications, etc. into event logs for review.
- DISTRIBUTION: Kali
- ROLE: Security Analyst
SELinux installed by default
Not sure what else to say...
DESCRIPTION: systemd is creating far more issues than benefits. Everyone knows it except its author, L. P. Still, Debian has chosen to go down this road, and the result is that people had to fork and move to Devuan. Go back to a sane, simple, stable init system. This is especially true for a server-oriented distribution.
ROLE: Fabio Muzzi, freelance linux sysadmin since 1995, loyal Debian user up to Debian 7, now Devuan user and supporter.
DESCRIPTION: For many years I've been fond of Debian and have used it for side hobby projects. But I've had to use Ubuntu and Fedora for real work, because I need a modicum of certainty about the intervals between releases.
I acknowledge that Ubuntu's rigid release-every-6-months, LTS-every-24 is impractical for a volunteer project with high standards. But without any firm timeline it's impossible for me to plan and use Debian in production.
For example, a commitment that releases will always be spaced somewhere between 6 and 24 months apart would go a long way.
DESCRIPTION: In my use cases, which I think are common, I want a stable base operating system and user interface, but I want the applications I work with every day (browser, compiler, office suite, etc.) to be cutting edge.
My dream is to separate packages into two tiers with different update policies, similar to the Android and Apple app stores, and for that matter BSD ports. Platform software like the kernel, system libc, X11, and desktop environments release and update like stable. "Apps" like Firefox and LibreOffice are easily installed and updated on a rolling basis.
I know that I can achieve this now with a custom backports and apt pinning config, but that's more of a low-level project than I'm envisioning. My request is for something that's more of a newbie-friendly point-and-click sort of thing.
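For reference, the backports-and-pinning approach mentioned above fits in a few lines; a sketch against Stretch, with the "app" package names as illustrative choices:

```
# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian stretch-backports main

# /etc/apt/preferences.d/apps
# Follow backports automatically, but only for selected "apps":
Package: firefox-esr libreoffice*
Pin: release a=stretch-backports
Pin-Priority: 500

# Everything else stays on stable unless explicitly requested:
Package: *
Pin: release a=stretch-backports
Pin-Priority: 100
```

A newbie-friendly tool could in principle just generate a preferences file like this from a point-and-click list of "apps".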
There are a few others.
Angel.co has "Remote OK" as a job search parameter.
I don't know if they have any openings currently but definitely worth checking them out in my opinion.
I'm one of the engineers, and can honestly say it's a great company to work for.
My personal experience is that local jobs turn into remote positions more easily than trying to apply against thousands of others.
After running my own email server for 15 years I gave up a couple of years ago and paid for someone else to solve the nightmare of dealing with the big email gatekeepers.
SMTP isn't a secure transport.
Having your email stored on someone else's computers (i.e. the cloud) is not necessarily 'secure'.
Having a well-constructed and well-managed host somewhere you physically control seems to me the most 'secure' arrangement, which is what I have always had. Currently for the cost of a Raspberry Pi and occasional 'apt-get update' etc.
Sometimes you don't want to prevent change because it's an implementation detail. That can mean private APIs, but also whole layers of abstraction.
Common layering, for example, is:
1. Low level operation that needs to be stable, so needs tests.
2. Medium level abstraction layer, not core business logic and builds on stable layer.
3. High level abstraction, the exposed public business logic API. Public API so needs tests.
Medium layer does not necessarily need tests in this case.
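The three layers above might look like this in practice (a sketch; the function names and money-parsing domain are hypothetical). Tests target the low-level and public layers, while the middle layer is exercised only indirectly:

```python
# Layer 1: low-level operation -- stable contract, directly tested.
def normalize_cents(amount_str: str) -> int:
    """Parse a decimal money string like '1.50' into integer cents."""
    dollars, _, cents = amount_str.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0") or "0")

# Layer 2: medium abstraction -- builds on layer 1, no direct tests.
def parse_line_items(lines: list[str]) -> list[int]:
    return [normalize_cents(line) for line in lines]

# Layer 3: public API -- exposed business logic, directly tested.
def invoice_total(lines: list[str]) -> int:
    """Total of all line items, in cents."""
    return sum(parse_line_items(lines))

# Tests hit layers 1 and 3; layer 2 is covered transitively,
# so it stays free to change as an implementation detail.
assert normalize_cents("1.50") == 150
assert invoice_total(["1.50", "2.25"]) == 375
```

If layer 2 is later rewritten (batched, cached, parallelized), no test has to change as long as layers 1 and 3 keep their contracts.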
I have a bit higher level discussion of this in a talk I gave in May (first video): https://codewithoutrules.com/talks/
1) https://www.ssllabs.com/ssltest/ - try to get an A+. It's not important in most cases in practice, but you'll learn a lot getting there. Their rating guide is also handy: https://github.com/ssllabs/research/wiki/SSL-Server-Rating-G...
2) MITM yourself. I've done this using Charles; you can do it with any HTTP proxy that lets you rewrite requests on the fly - I hear Fiddler is popular. MITM yourself and try changing the page for an HTTP site. Then try doing it on a website that is part HTTP, part HTTPS (e.g. HTTPS for the login page) and "steal your password". Try again on a website that redirects from HTTP to HTTPS using a 301 but does not have HSTS. Finally, try on a site with HSTS (NB: you won't manage this one). Congratulations, you now truly understand why HSTS is important and what it does, better than most people!
3) Set up HTTPS on a website. You've probably already done this. In which case maybe do it with LetsEncrypt for an extra challenge?
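The redirect-plus-HSTS setup from steps 2 and 3 boils down to one response header and a 301; a minimal nginx sketch, where the domain, max-age and Let's Encrypt certificate paths are illustrative assumptions:

```
server {
    listen 80;
    server_name example.com;
    # The 301 alone is still MITM-able on first contact (step 2 above):
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    # HSTS: a browser that has seen this once refuses plain HTTP for a year
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
}
```

Running your MITM experiment before and after adding the `add_header` line makes the difference very concrete.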
It doesn't hold your hand at all, but it gives you a nice "task" to accomplish. Reading up on all the terminology and exactly how and why it works was really fun.
There was also a nice web page presenting all kinds of PKI concepts that I came across a few years ago but haven't been able to find since then. :-(
Go to a small claims court and pay probably not more than USD 200.
Most others suggest asking nicely. You already did, nothing happened, and from history it doesn't seem like it ever will.
Moved my ETH and LTC elsewhere and sold to a private buyer.
Then if they haven't put things right in the time limit you'd go to court, which can be done online now.
I think a similar process exists where you are, and it would push Coinbase into fixing the problem.
Maybe this degree is different. But generally the biggest hurdles to becoming an entrepreneur are (1) cash and (2) access to a target market.
- save 25x your annual spending and never work again
- start by saving at least something (even 1%) and save 50% of all future raises
- long commutes are for fools. So are new cars--buy used.
- spend on things that you value. I've given myself a tech budget for years because good tools matter to me
- host a dinner party instead of eating out (most of the time)
- If you have a gamblers mindset to investing, carve out a small portion (10%?) of your money and use it for risky investing. I call mine the 'casino fund'. Track your returns.
- read voraciously about finance and early retirement. You only need about 20 books or so to gain a background that is easily more valuable than your college degree. This is a good start: https://www.reddit.com/r/financialindependence/wiki/books
Your spouse should have a career, or should think of having a career of his/her own. It's not about having a lot of money; it's about eventually having someone as a financial backup in case things go wrong. Works for both partners.
I love my wife, but financially I am in trouble. I make enough money, but she has no career aspirations. Her family is quite poor, and I had to get a mortgage for a house for her parents. In the future I will also need to worry about their health expenses.
This effectively means I can never get out of the rat race.
A good piece of advice I picked up from Ramit Sethi (when it was still worthwhile reading his blog) was to think about how much of your free time per month you spend on various activities (Facebook, gaming, etc.). How much of that free time is devoted to thinking about your personal finances? If it's less than you think you should be doing, schedule it in.
Also starting reading https://www.reddit.com/r/personalfinance/ regularly.
Now I make sure I have a year's living expenses available outside of my investments. If I get tired of my job, I can just quit knowing that I have the cash to float myself for a while.
A year may be more than you need, but at least six months is a good minimum. You'll have the cash to cover a job loss, car troubles, most medical expenses, etc. on hand without going into debt. And an emergency fund should be liquid and safe, not invested and at risk. You may only get 1% in a savings account, but view the low returns as the cost of insurance, since that's effectively what an emergency fund is: self-insurance.
But if I could go back to age 25, before I was married, I'd have told myself to travel to more far-flung places. Being married, I have to:
A) Agree with my wife on where we want to travel
B) Have time to travel that works for both of our schedules (which is difficult to find... plus we have to spend at least some of our time off going to visit our respective families, and now I have two families to visit instead of one)
C) Have the money to travel. In our case, we have two incomes, but still, it was much cheaper when I'd travel with friends and cram four people into a cheap hotel room.
I'm not complaining here. I'm fortunate to have spare income that lets me travel quite a bit with my wife, and it's really a fantastic experience to travel with your partner. But there are trade-offs that I simply didn't have as a 25-year-old. So those places that are far away and hard to get to? See them while you're young.
Meet and keep in touch with as many people as possible. Switch jobs, travel the world, volunteer and always _always_ make new connections.
The best financial (and personal) gains you will make in life will come from the right connections.
Anyway, my advice would be:
Start meditation sooner.
I would give myself a lot of other advice about risks, people and self-acceptance, but I would not have been able to listen to it at that time.
That's the problem with advice: you must be in a place in your life where you can actually use it.
But I would be able to meditate and figure it out, since that's how it happened.
Replace that with any tool that helped you develop yourself.
If you don't have such a tool, find one quickly that suits you.
Oh, and yes, travelling helps, so do it. But you'll reach a limit in what it brings to the table, so you need to find a better tool in the long run. Just like money helps, but there is an amount beyond which it won't make you happier.
1. Reduce all bills/belongings to bare essentials to live minimally.
2. Pay off all debt while maintaining $1,500 emergency fund.
3. Save 6 months living expenses.
4. Invest in yourself with excellent groceries, gym membership/local park visits, medical/hygiene care and other healthy habits.
5. Invest in Vanguard's Total US Stock, Total International Stock and Total Bond ETFs (% as age) and don't touch it.
6. Invest in building your own business - tech or otherwise.
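Step 5's "% as age" rule of thumb is just: bond percentage equals your age, with the remainder in stocks. A sketch, where the 70/30 US/international stock split is an illustrative assumption, not part of the rule:

```python
def allocation(age: int) -> dict:
    """Rough '% in bonds equals your age' split across three funds."""
    bonds = min(max(age, 0), 100)   # clamp to 0-100%
    stocks = 100 - bonds
    return {
        "total_bond": bonds,
        "total_us_stock": round(stocks * 0.7),
        "total_intl_stock": round(stocks * 0.3),
    }

print(allocation(30))  # a 30-year-old holds 30% bonds, 70% stocks
```

The point of the rule is that the bond share, and thus the portfolio's volatility buffer, grows automatically as retirement approaches.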
It cannot be translated easily, but more or less it amounts to:
Do the (financial) math often, limit your cravings, spend less than you can gather.
Also, start mining bitcoin.
Keep your cost of living the same when you see large pay bumps or raises. This means the big things like car, house/rental, etc. Don't just go get a new car and increase your spending or move to a "nicer" apartment or buy a house because you have the money available. Keep the car, stay in the apartment and save the extra money.
People will say that owning a home is an investment - maybe in some areas it is - but not in all. If home values are relatively flat in your area, or grow very slowly, then it is a losing proposition. You will be paying property taxes, school taxes and all the other "taxes" of owning a home: maintenance, repairs, accumulating "stuff" to fill it, etc. If the growth in that area is slow, then that is all money down the drain - you won't get it back when you sell.
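The argument above reduces to carrying costs versus appreciation; a toy comparison where every rate is an illustrative assumption (and mortgage interest is ignored entirely):

```python
def annual_ownership_drag(home_value: float,
                          property_tax: float = 0.02,   # incl. school taxes
                          maintenance: float = 0.01,    # upkeep and repairs
                          appreciation: float = 0.005): # a "flat" market
    """Net annual cost of owning, as (carrying costs - appreciation).

    Positive means money down the drain; negative means the market
    is outrunning the taxes and upkeep.
    """
    return home_value * (property_tax + maintenance - appreciation)

# In a slow-growth area, a $300k house quietly costs thousands a year:
print(round(annual_ownership_drag(300_000)))  # -> 7500
```

Flipping the appreciation assumption to a hot-market figure like 4% turns the sign negative, which is exactly why "is a home an investment" depends on the area.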
Contribute more to an index fund.
Save harder for a deposit. High rent/shared housing is horrible.
Don't try to keep up with the Joneses. There'll always be someone richer, with a nicer car; you can't win that game. You weren't born into money, so don't even attempt to act like it. Live below your means.
You need to treat yourself far less often than marketing companies would have you believe.
Oh yeah, you don't need to save that internship money, I'm good now. Besides, I make like 5-6 times what you are making.
Understand buying vs renting before doing either.
Keep monthly (recurring) expenses low.
If you absolutely need a car, keep your ego in check and look at mileage & reliability.
Think long term.
Best financial decision i've made was to buy some ETH (Ethereum) last year.
What are your personal goals in the next 5, 10, 30 years? What do you plan on doing that requires money? How much money does that require?
Without knowing anything about your goals then any advice you will get (as demonstrated in this thread) will steer you toward structuring your life around saving money and getting safe but modest returns. Is that what you're asking for?
Here is a question you should ask yourself probably every 6 months:
"If I had infinite resources (money/whatever) what would I do?"
Take that answer and then figure out how to accomplish that without infinite resources.
The only advice missing is predictions that would require actually observing the future (e.g. buy GOOG/AAPL/AMZN).
- Always take into consideration mental health cost. Your commute, your work, the people you choose to surround yourself with. Debt in this area is unpredictable and therefore dangerous in the long run.
- No one has it figured out. Youth will always be wasted by the youth.
Rent. Home ownership only starts making sense on a 5+ year time frame, in some markets 10+ years. Having the ability to move for a better job will reap huge financial benefits, and moving for a short commute will allow you to have so much more free time.
Save vigorously, but not so much that you have a dreadful life now, pining for the future when things will get better once you have "retirement money."
2. Be highly skeptical of most of the financial services industry, especially those selling load funds, insurance, annuities, and who want to manage your money.
3. Enjoy simple cars, or no car if you can manage. The amount of money I've seen friends and family dump into vehicles over 25 years is staggering. I don't even see cars at this point. I don't care what others drive, and I don't care what I drive so long as it's reasonably comfortable, safe, and economical.
The more money you make, the more nice things you acquire, and the harder it is to imagine living without them. At the furthest reach, it's a private jet--the crack cocaine of travel.
Develop these appetites with great caution.
Ideally, you'll come to realize that trading is a waste of your time, and you should set and forget a regular investment flow into the Wilshire 5000 or something equally diverse.
Lastly, don't let FOMO lure you into investing in the new hotness of your age. For me, it was Internet stocks in '98-'99. By the time you're hearing about it and it's productized in a way consumers can get involved, it's too late.
If I had all the money lost from 'stock market corrections' on my investments, I could retire comfortably today.
Stop being a little fishy swimming with big fishies.
The interest paid today is a pittance compared to the risk. Save your after-tax money in something with near-zero risk until interest rates rebound enough to make the reward worth the risk.
Move closer to your office. Even if the rent is a little more, the price is worth it if you don't have to use your car all the time.
If you can manage to get that one done, you can actually act on the rest of the financial advice in this thread. If not, you will have to be in unanimous agreement to do anything wise with money (i.e. keep emergency fund, plus six months living expenses in liquid savings), whereas foolishness may be undertaken unilaterally.
- don't waste money on TV subscriptions
- play fewer video games
- take greater care of your friends and relations
1. Max out employer's 401k match.
2. Build emergency fund to 3-4 months of expenses.
3. Max out Roth IRA.
4. Pay off low-interest loans (if you have high-interest loans, which I don't/haven't, then paying those off becomes #1).
- Go get yourself a savings account and a checking account. Keep only enough in your checking account to get through the month; the rest goes into savings.
- Buy a home as quickly as you can, in an affordable place on the outskirts. By the time your kids arrive, the necessary infrastructure will be in place. Also, rent is just another form of tax. And having your own home means a place to rest without financial implications when you are old.
- Take the 401K plan seriously.
- Max out other instruments such as the IRA and Roth IRA.
- Buy a durable, long lasting car. And stick with that as long as it lasts.
- Healthy lifestyle. Nothing pays as well as good health. Buy a bicycle or play a sport. Ensure your heart is healthy and you are not obese. There are other things to this, like learning to cook healthy food. Remember, bad health will also eat a big chunk of your earnings in a place like the US.
- Be frugal. Frugality means making decisions that pay off in the longer run. $5 may buy you a burger combo at McDonald's, but trust me, it will cost you in the longer run. You don't want that kind of frugality. Which is why learning how to cook makes even more sense in the longer run.
- Be productive, in all ages. Have free time to network and develop new skills. Never be afraid to start from the beginning or learn and do something new.
- Lastly save. Save a lot.
Also, put some money aside for savings.
I'm building my startup, ejgiftcards.com. It's generating revenue with about 20-30% margins; current revenue is about $50-60k per month.
Based on what I've read, this doesn't seem very legit. It's easy to say that you have connections to x industry to get y% of equity in z startup. But, it's hard to deliver actual business, even if you have those connections.
Basically, bringing in the wrong person and giving out equity too soon is a recipe for disaster. Worst case scenario, you end up with an angry outsider on your cap table. I know that you said you're bootstrapped, but if you ever raise funds, an angry outsider on your cap table can royally fuck things up.
If I were in your shoes, I would treat this as a recruiting process for a CEO (even though you don't have $$$ to pay and presumably don't want him to be CEO). If you were hiring a CEO, you would check references, maybe send some cold contacts to former employers/co-workers (LinkedIn is amazing), and generally do a metric shit tonne of due diligence. If he is offended, run. Anyone who is qualified enough to work out in this kind of situation wouldn't hire himself based on what you've told us!!
The thing is that anyone can claim to be anything. You need to ensure that they can actually deliver. Don't give equity and then find out. Find out before you give equity. Remember, equity means they own a percentage of your company possibly for a really long time.
I remember an old post where they hired a salesman for a few months; he never sold anything, and later asked for severance pay and equity. He had been the vice president of something at his previous company. There was a funny detail: he left his special big chair in the office and never came back to get it. I can't find the link now.
Get everything in writing and reviewed by YOUR lawyer. With an oral contract it's hard to be sure how bad things can get later.
I like the idea in a sibling comment of hiring him as a freelancer until he can prove he can sell something.
Your decision basically boils down to time. How many people are on your team already? If you spent time doing the BD yourself, what would the opportunity cost be? Could your time be spent better on the product or tech? Do you see him bringing value proportional to his cost? (x% equity)
Hire a lawyer. Do serious due diligence on the guy.
Protect yourself before you agree to anything permanent.
As far as any valuable habits at work, I do well to not be as stressed as everyone else. From the philosophy of The Hitchhiker's Guide to the Galaxy that I try to live by: no matter what situation you find yourself in, DON'T PANIC.
This, however, can be both a blessing and a curse: I can come off as too down to earth, too laid back, or lackadaisical, which can upset my supervisor or co-workers because it appears I'm not taking something seriously [enough]. I certainly am; I'm just not stressing about it as much as everyone else.
Whatever the situation, there are only two outcomes: We'll either get through it... or we won't.
Since life moves pretty much linearly, especially when it comes to work, it would be rare that we wouldn't get through a given situation; we eventually move forward and learn from whatever we went through.
And again.. the downside of this? I'm probably more likely to get passed over as someone who could be in a "supervisory" or "management" position. The plus side? Less stress ;)
My main job requires a ridiculous amount of file and data transfers that are mostly scheduled to run during off-peak hours. I needed a way to centralize the results of these jobs in order to keep tabs on things. I built this as an in-house tool and then discovered a few services already existed for this. I thought my solution offered some things these others didn't, and if somebody was paying these other services I might have some success as well. It's been a lot of fun, and if anyone has any suggestions I'd love to hear them.
The "stream-of-consciousness" bit is enabled by the two key features: you choose a finite duration within which to write, and if you stop typing more than a few seconds, your writing is deleted. This essentially forces you to continuously type for the session, and at least for me and the users I've spoken to, this forces out thoughts/ideas/feelings that otherwise wouldn't have made it to the keyboard.
I've personally been using it routinely for months as a therapeutic journal, and at this point I've practically been Pavlov'd into opening it up whenever I'm under cognitive/emotional duress.
it's open source (http://github.com/krrishd/write), and I appreciate feedback!
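The delete-on-idle mechanic described above can be sketched as a tiny state machine; this is my own toy model (all names and timings illustrative, not taken from the actual codebase), with keystroke timestamps injected so the logic is testable without a real clock:

```python
class FocusedSession:
    """Toy model of the write-or-lose mechanic (names illustrative)."""

    def __init__(self, duration: float, idle_limit: float):
        self.duration = duration      # chosen session length, in seconds
        self.idle_limit = idle_limit  # max pause before the draft is wiped
        self.buffer = []
        self.last_keystroke = 0.0

    def type_char(self, char: str, now: float) -> None:
        # Pausing longer than the idle limit deletes everything written so far.
        if self.buffer and now - self.last_keystroke > self.idle_limit:
            self.buffer.clear()
        if now <= self.duration:
            self.buffer.append(char)
            self.last_keystroke = now

    def text(self) -> str:
        return "".join(self.buffer)

session = FocusedSession(duration=300, idle_limit=5)
session.type_char("h", now=0.0)
session.type_char("i", now=1.0)
assert session.text() == "hi"
session.type_char("!", now=10.0)  # a 9-second pause wipes the draft first
assert session.text() == "!"
```

The forced-continuous-typing effect falls out of one rule: any gap longer than the idle limit clears the buffer before the next character lands.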
A telegram bot that sends me NBA related tweets from the ESPN Stats & Info twitter - https://t.me/nbaespnstats - https://github.com/assafmo/nba-espn-stats-and-info-telegram-... - which was amazing during the 2017 playoffs and made the whole watching experience awesome for me. The channel also has around 20 followers right now, so I guess others like it too. :-)
A script that downloads all my shows every day - assafmo/DownloadMyEpisodes
But it can be used for so much more ( https://mypost.io/post/what-can-i-do-with-mypost ).
It is completely free to use. I don't have any plans to charge for it, and have not even added advertising or anything to it yet, but it still receives maintenance and updates, though no more major feature implementations are planned. It was my first web app and taught me a lot, from learning the basics of database programming to a friendly UI that could be understood by everyone. My sister, who is not very tech or computer savvy, was the beta tester. Whenever she questioned something or got stuck on something, I redesigned that feature to make it even easier. Whether it was functionality or the wording.. if she questioned it, it was redone.
It boosted my confidence into the web app world. Right now, I've got about 8 more web apps in the works, 3 of which are in the stages of beta testing, and though there is a free version, they will actually be paid subscription to access additional features. So I am proud to boast about this project, as it was the start to my empire.
I've built many things before, so why am I proud of this one specifically? Basically because I built it with no expectations whatsoever as to whether it would ever be needed by anyone but me. Also, I built it fast (less than a week), polished it a bit, and released it as soon as it was working ok-ish.
And why am I proud of being able to build it although it is not complete? Because I dealt with perfectionism for so long that I had to force myself to release anything at all. In fact, it used to be very hard for me to even start doing anything for myself; I would get analysis paralysis. For quite some time I had to force myself to think about when good is good enough, read a lot about the subject, read other people's opinions on these things, etc. After fighting my own perfectionism, it seems that I can finally do things with lower expectations. That's why I'm proud.
Waiting for Firefox to approve the add-on now.
I've built http://remindoro.com, a chrome extension to get repeat reminders.
http://palerdot.in/moon-phase-visualizer/ - A simple web demo to understand the moon's phases and eclipses.
All of this stuff is open source (my GitHub: https://github.com/palerdot) and I'm proud of these tools.
Not because it was technically difficult, but because it solved a problem that I, and seemingly hundreds of other people who signed up, were having.
Just a little tech news aggregator I put together using React and Node. Pulls the top 10 stories from HN and a bunch of subreddits, and pushes updates to the browser every 15 minutes via socket.io.
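The HN side of that fetch can be sketched with HN's public Firebase API (the endpoint URLs are real; the helper names are mine, and this is a sketch of the idea, not the author's actual code):

```python
import json
from urllib.request import urlopen

TOP_STORIES_URL = "https://hacker-news.firebaseio.com/v0/topstories.json"
ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

def top_story_ids(ranked_ids, n=10):
    # Pure helper: the API returns ids already ranked by the front page,
    # so "top 10" is just the first n entries.
    return list(ranked_ids)[:n]

def fetch_top_stories(n=10):
    # Network step: pull the ranked id list, then each story item.
    with urlopen(TOP_STORIES_URL) as resp:
        ids = json.load(resp)
    return [
        json.load(urlopen(ITEM_URL.format(story_id)))
        for story_id in top_story_ids(ids, n)
    ]
```

A real poller would run something like `fetch_top_stories()` every 15 minutes and push the diff to browsers over socket.io, as the comment describes.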
I've still got plenty of improvements to make to it, but I'm trying to break the habit of working on side projects that I don't ship. So I've shipped this one, even though I won't consider it 'done' for quite a while yet. :)
I found a naive yet effective way of adblocking podcasts that is easily scalable. Although it's not yet released, early access is close to release and I'm hoping it takes off. I'm really proud of it because it's incredibly cross-dimensional (i.e., marketing, programming, &c.) and because a podcast adblocker is a non-trivial problem to solve.
None of it is public, however, for obvious reasons.
Anyway... this stuff sits there, taking up space and maybe bringing back memories, but adding no value otherwise. But I can't bring myself to just pitch them all in the bin. I guess I'm a bit of a book hoarder and I have a hard time getting rid of books.
Hopefully if you get some good answers, it will help motivate me to try and get rid of some of the cruft I have here.
All of that said, I can see how some of this stuff could have value to somebody. Even the Inside Appletalk book could be useful. Maybe somebody who's way into retrocomputing and actually wants to implement an AppleTalk network. Maybe a historian writing a history of computer networking technologies. Or a hobbyist looking for an older, simpler networking standard to reverse engineer and build something off of. Who knows?
One option might be to see if you have a hackerspace/makerspace in your area, and offer to donate some or all of your superfluous books to them or their members. I think that's the kind of crowd where you might find somebody who actually wants a copy of Undocumented DOS or Inside AppleTalk.
Another option might be to try selling them on Amazon.com.
If you work in a large enough office, that could be a good way to give a decent number of other programmers the option to grab some free books, and anything nobody wants gets recycled.
I used to try and sell them if I thought they had some value but it was just too much hassle.
The simplest answer is just to mirror the math courses of a Berkeley / MIT / Stanford CS degree, although that will likely be a little overkill, especially if you intend to limit yourself to a strict subset of TYCS. For example, databases and networking generally require very different math prereqs than computer graphics or machine learning.
You will need a high school level of math (grammar school math, algebra, trigonometry, basic stats) to be able to program most things.
Discrete math is used heavily in many parts of CS (it is integral to understanding how to accurately negate programming expressions).
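As a tiny illustration of that parenthetical, here's a sketch of negating a compound condition with De Morgan's laws (the functions and the retry scenario are made up for the example):

```python
from itertools import product

def keep_retrying(connected, attempts):
    # Original condition: retry while not connected and under the limit.
    return (not connected) and attempts < 5

def stop_retrying(connected, attempts):
    # De Morgan: not (A and B) == (not A) or (not B),
    # so the accurate negation flips *both* operands and the connective.
    return connected or attempts >= 5

# Exhaustively confirm one is exactly the negation of the other.
for connected, attempts in product([True, False], [0, 4, 5, 9]):
    assert stop_retrying(connected, attempts) == (not keep_retrying(connected, attempts))
```

The common bug discrete math trains you out of is negating only one side (e.g. `connected or attempts < 5`), which is not the complement.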
You should probably understand calculus at a high level, although my experience with actual calculus usage in my career is zero.
Probabilities are used heavily in concepts like caching / performance, which will touch OS, arch, data structures, and likely others. For this, you should find a "statistics for engineers" type of course / book for undergrads, which may or may not make use of calculus to prove some of the statistical concepts.
Linear algebra is used heavily wherever graphics cards are used, so graphics, video, machine learning, etc. Linear algebra will likely have calculus as a prerequisite.
Modular arithmetic is used heavily in cryptography and in some data structures (hash tables). An undergrad will get a few days or weeks of this, probably not an entire course.
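Both uses fit in a few lines; a minimal sketch (the `bucket_index` helper is mine, not from any particular library):

```python
def bucket_index(key: str, capacity: int) -> int:
    # Hash tables use "mod" to fold an arbitrary hash value into
    # a valid slot index in the range [0, capacity).
    return hash(key) % capacity

CAPACITY = 8
assert 0 <= bucket_index("some key", CAPACITY) < CAPACITY

# Cryptography leans on modular exponentiation; Python's three-argument
# pow computes (base ** exp) % mod without the huge intermediate value.
assert pow(7, 13, 11) == (7 ** 13) % 11
```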
Set theory and graph theory are used sporadically. Networking, distributed systems, etc will make use of them.
In block-level encryption, each sector is encrypted below the file system. Doing the naive thing of encrypting each sector directly with the encryption key is fundamentally insecure. This is called the ECB mode of operation. There's a nice picture of a penguin on Wikipedia encrypted with ECB which demonstrates this.
Secure modes of operation generally try to propagate the result of previously encrypted blocks into the next ones. But this approach is not really suitable for mass-storage devices: you cannot re-encrypt all the sectors after the one you just changed. That's just impractical, since writing to sector #0 would amount to rewriting the entire disk.
So in practice schemes like AES-XTS are used. They work by having some kind of way of "tweaking" the encryption, so that it is different for each block (avoiding the pitfalls of ECB), but in a way which allows random access to sectors (i.e. in a way that is predictable). AES-XTS is a tradeoff for this special use case but it is not as robust as more classical modes of operations which would typically be used in an encrypted filesystem.
Details about AES-XTS issues:https://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/
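The difference between "same key everywhere" and "per-sector tweak" can be shown with a deliberately toy cipher (a hash-derived XOR keystream; this is NOT real AES-XTS and is not secure, it only illustrates why the tweak matters and why random access still works):

```python
import hashlib

KEY = b"disk-encryption-key"

def encrypt_ecb_style(key: bytes, data: bytes) -> bytes:
    # ECB-style: same key + same plaintext -> same ciphertext, always.
    keystream = hashlib.sha256(key).digest()
    return bytes(p ^ k for p, k in zip(data, keystream))

def encrypt_tweaked(key: bytes, sector: int, data: bytes) -> bytes:
    # XTS-like idea: mix the sector number into the keystream, so identical
    # plaintext in different sectors encrypts differently, yet any sector
    # can still be decrypted independently (predictable random access).
    keystream = hashlib.sha256(key + sector.to_bytes(8, "big")).digest()
    return bytes(p ^ k for p, k in zip(data, keystream))

plaintext = b"an all-zero-ish sector.."

# ECB leaks structure: two identical sectors give identical ciphertext.
assert encrypt_ecb_style(KEY, plaintext) == encrypt_ecb_style(KEY, plaintext)

# The tweak hides it: sectors 0 and 1 differ despite equal plaintext.
assert encrypt_tweaked(KEY, 0, plaintext) != encrypt_tweaked(KEY, 1, plaintext)
```

The point of the tweak is exactly the trade-off described above: each sector's encryption is independent (so you can rewrite sector #0 alone), but no two sectors share a ciphertext pattern.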
The encryption standards it uses are pretty good, but that is not where blanket whole-disk encryption (which I assume you're talking about) fails. For example, hackers could analyze the preboot environment of an encrypted Mac and sniff out the password using a variety of methods. Simply put, whole-disk encryption is too complicated and bug-prone a process to really trust to closed-source software.
As for single-file encryption, which is relatively neat and simple, Disk Utility would probably do a pretty good job.
I have also recently switched from the Adobe Creative Photography plan to Zoner Photo Studio, for about 1/4 of the price (it has the same functionality as Lightroom and I wasn't using Photoshop anyway).
Amazon Prime (trying Google Express right now)
$5 VPS on DO for small projects
Google Apps for a couple of domains, mainly for email
Straight Talk, 2 lines
Sam's Club (no Costco up here)
Beach parking permit
Children's Museum (go with the ACM Passport level membership if your local one is part of this association, you get benefits when visiting member museums in other cities)
Disney Vacation Club
Citi Prestige Mastercard (has some benefits going away, but a great card for travel benefits and insurance)
Also I have never heard of Lapham's Quarterly - any reviews on that?
For the former, the source of Django, Django Rest Framework, Requests and Flask (as well as most things by Kenneth Reitz and Armin Ronacher) are all great codebases to look at.
A few other good resources are the blog PyMOTW 3  and Brett Slatkin's book Effective Python .
I haven't read it yet but have heard good things.
Yes, followers matter, because with more followers you get in front of more people.
* Use a patent that was placed in public domain (either it is old or the user did not pay the annual fee).
* Use a patent that a big company offers for free (like Tesla)
* Use an external service that may use patents, if it gets sued it is its problem, not yours.
Anyway long story short, about 90% of what was submitted was objectively terrible, to the point that it made the entire site feel dumpy. Much more so than your standard terrible content because it had the added negative of trying to sell something. I don't know if PH started that way or not, but if we had continued it was clear that a high level of curation was needed.
Never found PH or the stuff that showed up on it real compelling myself.
All of your devices need a folder where you'd store data; those devices run a process called a chunkserver (mfschunkserver) and simply add their storage space to the pool.
Mounting the pool is done using mfsmount (uses FUSE).
You'd need one of the devices (or another dedicated computer) to be the master (mfsmaster) that needs to be always online.
The restrictions are that all the machines must see each other over the network - either local lan or routed network (or a vpn where you set up routes).
MooseFS uses the concepts of goals and storage classes. You can simply set your files or folders to, say, a goal of 3 and it will attempt to store 3 copies across however many devices you have (minimum 3 to meet this goal). Storage classes are more complex and allow you to control specifically WHICH chunkservers store certain files/folders.
I'm using it across 4 machines, where 1 server is physically in Italy (Milan), another in Los Angeles (in a datacenter) and two more at home (also L.A.). One of my servers at home has 4TB drive so this one always stores everything, plus 2 copies across remaining 3 servers for redundancy.
All I need to do to have my data available to me is to VPN in to my network and run mfsmount on my laptop.
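The whole workflow above boils down to a handful of commands; roughly, and hedged (hostnames and paths are illustrative, and exact flags vary between MooseFS versions and configs):

```shell
# On each storage device: the chunkserver offers a local folder
# (configured in mfschunkserver.cfg / mfshdd.cfg) to the pool.
mfschunkserver start

# On the one always-online master node:
mfsmaster start

# On a client (e.g. my laptop, after VPNing in): mount the pool via FUSE.
mkdir -p /mnt/mfs
mfsmount /mnt/mfs -H mfsmaster.example.com

# Ask for 3 copies of everything under a folder (the "goal"), recursively,
# then verify it.
mfssetgoal -r 3 /mnt/mfs/photos
mfsgetgoal /mnt/mfs/photos
```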
It's simplistic and doesn't have all the features that MailChimp offers, but it is way more intuitive and cheaper once your audience scales.
Of course it means you need to set up Sendy. I was going to set up a Sendy-as-a-Service business, but most of the money would be paying license costs for software I could easily build with enough time, so I canned the idea and focused on other projects.