If you are exceptionally unlucky, you decide not to work at this major tech company and then your friend who pitched the startup can't come through. You are doubly in trouble (you can't go back and you don't have a job), not to mention stuck with a partially transferred visa.
The smart thing to do is to take the job at the major tech corp, spend a couple of years saving some money and learning what you can, and then join your friend at the startup if it still looks like a viable offer. That strategy maximizes your future value: you gain additional work experience, your friend knows that you will stick to your word even when you might not want to, and you will have a better sense of how well your friend can actually put together a startup.
New boss was pissed, but it was the right call for me (and I couldn't have done it any earlier).
In your case, I'd give careful thought to the visa issue; I wouldn't sweat it on behalf of the major tech company at all. They've no doubt had worse behavior from candidates and they'll survive the loss. You probably won't be blackballed at that company, but do expect a question about it if you do decide to apply later.
- A sentence indicating you're withdrawing your acceptance of the offer.
- One or two sentences explaining you have an unforeseen unique opportunity. No need for great detail.
- One or two sentences thanking them for their consideration, and hope for future opportunities together.
That's really all there is to it. A professional company that values its people will understand that these things happen in life and will not hold anything against you.
Let's go over the points...
Your friend pitched a startup idea. Based on your wording, it sounds like it is more an idea than an actual company. Either way, unless your friend's startup has the legal resources to sponsor an H1B visa and the financial capability to pay you a prevailing wage, you run the risk of not being able to transfer your visa. The government can reject your visa transfer as well.
By declining the offer, you are putting yourself in jeopardy of losing your ability to work in the country.
I assume you are not married, but for the sake of this response, let's assume you are. What would your wife think about this? Don't be selfish; consider how others would be affected.
You are assuming that you will be able to work at the startup legally (H1B transfer). I don't believe you can assume this. Does your friend even know what is involved in sponsoring an H1B? Major tech companies have dedicated departments for managing the H1B process for their employees.
The more time and money they spend on you before you tell them, the angrier they'll be. They may already have grounds to sue you, although I've never heard of a company doing that in such a situation.
Visa issues were hard even pre-Trump.
Does your friend have funding?
Does your friend have market validation?
What will happen to you personally if the startup fails - like so many do?
It's a general trivia email. Every day I send a fun fact along with the true story behind it.
I've been writing it for seven years. I'm not going to disclose how much I make a month, but I'd describe it as not quite full-time-job money, but a lot more than beer money.
I've got 445 subscribers that pay $5/m or $50/yr for it. No ads, no tracking, but I do insert affiliate links - primarily eBay.
Costs me about $1,000/yr to run (Mailchimp, web hosting for a WordPress blog and a Discourse forum, and Zapier, mainly).
This financial year (July 1st 2016 to June 30th 2017), revenue sits at about AUD$18,000. I expect around $30,000 next year even with 0% paid-subscriber growth and flat affiliate link revenue.
I make money from a variety of sources on it, including affiliate links, sponsorships (I recently had some success with http://upstart.me on this front), donations from readers (both Patreon and PayPal, because folks want options), banner ads, and fees for syndicating the content to outlets like Vice's Motherboard.
The pieces are written more like stories than link roundups, giving them an evergreen appeal. This week I wrote about the history of the 911 system; last week I wrote about CGA graphics and Windex. It actually has a smaller profile than my last project, ShortFormBlog, but it's more sustainable from a financial and work-life balance perspective.
Last month, I did a T-shirt sale with the help of a vendor (Vacord Screen Printing, http://vacord.com) and made a few hundred dollars through that.
All of this together is not enough to stop me from working a day job, but the mixture of sources and the fact that I syndicate content helps build exposure and ensures that if one source is weaker than another on a different month, the whole machine doesn't fall apart.
If you want to run a profitable newsletter, be willing to rely on more than one revenue stream.
(though the landing page is SFW, the newsletter is not)
I use the newsletter to market T-shirts. I sell through about 100 of each design over the course of 2 months. It pays for hosting and funds the next shirt.
List growth has been slow and steady, and I'm looking to increase my shirt order on the next design.
- Stratechery (Ben Thompson) - $100k/month (conservative estimate) via subscriptions (https://www.stratechery.com)
- WTF Just Happened Today (Matt Kiser) - $8k/month via Patreon (https://www.patreon.com/wtfjht)
My most popular newsletters are about Python (https://python.libhunt.com/newsletter/58), Go (https://go.libhunt.com/newsletter/58) & Ruby (https://ruby.libhunt.com/newsletter/58)
P.S. it took me around 13 months to reach the 5k subs...
We have 289,000 on the newsletter which represents a large group of VC, tech M&A, corporate strategy and startup folks interested in data-driven discussion of technology trends. It's the primary way we sell subscriptions to our SaaS platform.
We messed around with ads in the newsletter but they don't monetize nearly as well as the "house ads" to our data/product or to our events.
It's our company's golden goose.
Interview here: https://www.indiehackers.com/businesses/scotts-cheap-flights
I should probably spend more time finding sponsorship.
I have a sign up form at the bottom of my site and use double opt-in to make sure people really want to subscribe. I also periodically trim the list down by removing people who don't open or click anything. I figure 1000 subscribers and a 50% open rate is better than 5000 and a 10% open rate as the list is more highly engaged. Plus on MailChimp you're wasting $$ sending to people who don't engage.
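The trimming step described above can be sketched in a few lines. The record shape here is hypothetical; a real ESP export (e.g. a MailChimp CSV) will use different field names:

```python
# Hypothetical subscriber records; a real ESP export (e.g. MailChimp)
# uses different field names.
def prune_inactive(subscribers, min_opens=1):
    """Keep only subscribers who opened at least `min_opens`
    of the recent campaigns."""
    return [s for s in subscribers if s["recent_opens"] >= min_opens]

subs = [
    {"email": "a@example.com", "recent_opens": 7},
    {"email": "b@example.com", "recent_opens": 0},
]
print([s["email"] for s in prune_inactive(subs)])  # → ['a@example.com']
```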
I write cron.weekly, a weekly newsletter on Linux & open source. When I started, around 2 years ago, there wasn't much competition.
I'm at 6k subscribers now and making roughly 1k (eur) a month via sponsorships. Members don't pay, it's entirely sponsor driven.
It took a little over a year of hard work for free before the first sponsor landed, right now it's pretty good value for the time I put in. As the subscribers grow, that ratio should only get better.
The newsletter fills probably 3/4 of the placements and then I use the other placements to promote conferences that I love. This is usually in exchange for a ticket (which I give away if I can't attend) and media sponsorship which gives the newsletter some extra exposure.
I only introduced advertising after the subscriber count passed 5,000 and the Mailchimp costs became a little too much; it's now sitting just over 29k subscribers. It's a great side project that I'd love to invest more time in, but at the moment it makes enough to cover Mailchimp, servers, Cloudflare, SpeedCurve, and lets me be a patron of a couple of other newsletters that I love.
In the indie book publishing world, I subscribe to a paid newsletter called The Hot Sheet (http://hotsheetpub.com/). I pay $60/year or thereabouts, which I think is reasonable considering the insights, analysis, and tips I get every 2 weeks. I doubt it's a full-time income for the two writers, but on the other hand I don't think it's a full-time writing gig, either.
It's worth noting that in many of these niches there is very little in the way of established trade pubs as print magazines covering the industry have folded or become shadows of their former selves when they moved online. It doesn't surprise me that some of the more talented or insightful writers have decided to launch their own brands and build their own audiences.
Another one I know of that is successfully getting a good following is www.thesizzle.com.au
At this stage, my pitiful MRR covers costs, etc., but not all the time I put in. As soon as I get to 10k+ subscribers, at a $25 CPM it starts adding up to $1k a month, which is a little less painful to look at. :)
It was around 5 years of just editing and making no money. Now it's making money - but just a side income.
1. Content is king!
2. Start building a community (if you link to someone in your newsletter, ping them on Twitter).
3. Go to community events.
Doing ads on Facebook and Twitter actually didn't work that well.
That said, email is just another medium. There's no one way to make money just like there's no one way to make money with an app.
I'm guessing it's not the kind of newsletter you meant, but we make millions selling ads in B2B email newsletters.
NtK pioneered some things that got taken up elsewhere: dohgifs highlighted terrible algorithmic placement of online ads next to news stories. Private Eye now does this as Malgorithms.
A newsletter like this would fare better now we have things like Patreon.
It lists artist studios, coworking spaces, and apartment sublets, mostly in NYC, and probably does about $500k in revenue a year.
Weekly newsletters have been averaging about 300-325 listings @ $30 per week. There are also sponsored emails that I'm sure are in the thousands of dollars per email.
Exactly about news classification with DL.
If you have hundreds of classes and a training dataset with about 500+ examples per class, you can also try fastText, Vowpal Wabbit or even Naive Bayes. If you want to use neural nets, there are some 1D CNNs floating around on GitHub, but they don't work all that well compared to simpler classifiers or simple dot product between vectors. Hundreds of classes usually make classifiers sluggish and accuracy is not so great compared to the binary case (spam/not spam). I wouldn't try to do that to predict the best subreddit for an article for example, because there are too many subreddits, but with vectors it's still OK.
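As a concrete baseline for the "even Naive Bayes" option, here is a tiny multinomial Naive Bayes from scratch (toy two-class data only; as the comment notes, a real dataset wants ~500+ examples per class, and libraries like fastText or scikit-learn would be the practical choice):

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Tiny multinomial Naive Bayes with add-one smoothing."""
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for label, n_docs in self.class_counts.items():
            total = sum(self.word_counts[label].values())
            lp = math.log(n_docs / total_docs)  # class prior
            for w in words:
                # add-one smoothing over the shared vocabulary
                lp += math.log((self.word_counts[label][w] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes().fit(
    ["stock markets rallied today", "the team won the final match"],
    ["business", "sports"])
print(clf.predict("markets fell today"))  # → business
```

With hundreds of classes this stays fast to train, which is part of why it remains a sane baseline before reaching for 1D CNNs.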
Edit: You haven't heard from them because they are aiming very high so it will take years before any of their work hits the general public.
Edit 2: From my understanding, they are still working on their Universal Basic Income Research as well and have chosen Oakland as the testbed: http://basicincome.org/news/2017/04/httpswww-youtube-comwatc....
For those who like the outdoors, just get your off-road vehicle and face indomitable, untouched nature. No paved roads, no concrete, nothing outside these habitable malls interconnected by hyperloops. Of course there will be supply roads for trucks, but they will be just like highways interconnecting mega farms to mega malls.
Nah, scratch that, there is nothing like a house in the suburbs with a huge yard and a barbecue.
AFAIK, Ben Huh is still in charge of the project.
http://www.subtext-lang.org/AboutMe.htm
https://twitter.com/jonathoda/status/871784998113882118
1. Resizing with imagemagick: https://bash.rocks/Gxlg31/3
2. Resizing and convert to webp: https://bash.rocks/7J1jgB/1
After creating the snippet, you could either use GET https://bash.rocks/0Be95B (query parameters become environment variables) or POST https://bash.rocks/jJggWJ (the request body becomes stdin).
It's not hard to roll your own backend like this for private usage (simply exec from node). I'm also working on an open source release.
The only tool I ever found which does this job reliably, even for huge images, is http://www.vips.ecs.soton.ac.uk .
It compresses and optimizes png, gif, and jpeg, creates webp for browsers that support it, inlines small images into your html, longcaches images, and even creates srcsets.
Images are complicated and important enough that I don't see that changing any time soon.
It works really well for UGC as an on-demand optimizer, but you can easily make some URL calls to include it at build time as well.
Be especially careful with these utilities when running them on UGC. PNG / JPEG bombs can easily cause OOM or CPU DoS conditions etc.
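One cheap defence, as a sketch: read the claimed dimensions from the file header and reject oversized images before any decoder touches them. PNG is shown here; the pixel limit is an arbitrary example, not a recommendation:

```python
import struct

MAX_PIXELS = 50_000_000  # arbitrary cap; reject before decoding

def png_dimensions(data: bytes):
    """Read width/height from a PNG header without decoding the image."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG")
    # IHDR chunk starts at byte 8: length(4) + type(4), then width(4) + height(4)
    width, height = struct.unpack(">II", data[16:24])
    return width, height

def safe_to_decode(data: bytes) -> bool:
    w, h = png_dimensions(data)
    return w * h <= MAX_PIXELS

# A PNG claiming to be 100000 x 100000 gets rejected up front.
bomb = b"\x89PNG\r\n\x1a\n" + b"\x00\x00\x00\rIHDR" + struct.pack(">II", 100000, 100000)
print(safe_to_decode(bomb))  # → False
```

Running the actual optimizer in a memory- and CPU-limited subprocess or container is still the stronger protection, since the header can lie about compressed payload size.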
As for metadata, today I decided to add it back in.
For ecommerce it will eventually help to have product data, e.g. brand, product name, etc. embedded in the image.
My other tip: if you go the ImageMagick/PageSpeed route, you can use 4:2:2 chroma subsampling and ditch half the bits used for chroma.
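Building that ImageMagick invocation might look like the sketch below. The paths and quality value are placeholders; `-sampling-factor 4:2:2` is the chroma subsampling option referred to:

```python
import subprocess

def jpeg_subsample_cmd(src, dst, quality=82):
    """Build an ImageMagick command applying 4:2:2 chroma subsampling,
    which halves the samples spent on colour information."""
    return ["convert", src,
            "-sampling-factor", "4:2:2",
            "-quality", str(quality),
            "-strip",  # drop metadata while we're at it
            dst]

cmd = jpeg_subsample_cmd("in.jpg", "out.jpg")
# subprocess.run(cmd, check=True)  # requires ImageMagick to be installed
print(" ".join(cmd))
```

The human eye is less sensitive to chroma than to luma, so this usually costs little visible quality on photos (text and flat UI screenshots can show fringing, though).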
Tbh the UGC side is just triggering the "build process side" as the upload occurs.
As far as best,
I'd suggest you look there for some decent examples of how to go about it. They may be defunct, but I use a similar approach (slightly different knob tweaks with the same binaries) and it works fine. May not be 100% optimal, but it's good enough, IMO.
A lot of developers are using what is essentially a "portable desktop", this card fits in with that ethos.
1. Took too long to get something working. The common use case of hooking up a Lambda function to an HTTP endpoint is surprisingly fiddly and manual.
2. Very painful logging/monitoring.
3. The Node.js version of Lambda has a weird and ugly API that feels like it was designed by a committee with little knowledge of Node.js idioms.
4. The Serverless framework produces a huge bundle unless you spend a lot of effort optimising it. It's also very slow to deploy incremental changes (edit: this is not only due to the large bundle size but also due to having to redeploy the whole generated CloudFormation stack for most updates).
5. It was worth it in the end for making a useful little service that will exist forever with ultra-low running costs, but the developer experience could have been miles better, and I wouldn't want to have to work on that codebase again.
Edit: here's the code: https://github.com/Financial-Times/ig-images-backend
To address point 3 above, I wrote a wrapper function (in src/index.js) so I could write each HTTP Lambda endpoint as a straight async function that simply receives a single argument (the request event) and asynchronously returns the complete HTTP response. This wouldn't be good if you were returning a large response though; you'd probably be better streaming it.
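A rough Python analogue of that wrapper idea (a hypothetical helper for illustration, not the linked repo's actual code, which is JavaScript):

```python
import json
import traceback

def http_handler(fn):
    """Wrap a plain function so it returns the API Gateway
    proxy-integration response shape (hypothetical helper)."""
    def wrapper(event, context):
        try:
            body = fn(event)
            return {"statusCode": 200,
                    "headers": {"Content-Type": "application/json"},
                    "body": json.dumps(body)}
        except Exception:
            traceback.print_exc()  # lands in CloudWatch Logs
            return {"statusCode": 500, "body": "internal error"}
    return wrapper

@http_handler
def hello(event):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"message": f"hello {name}"}

print(hello({"queryStringParameters": {"name": "lambda"}}, None))
```

Each endpoint then stays a plain testable function, and the boilerplate of serializing the body and catching errors lives in one place.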
My #1 concern with it went away a while back when Amazon finally added support for Python 3 (3.6).
It behaved as advertised: Allowed us to scale without worrying about scaling. After a year of using it however I'm really not a big fan of the technology.
It's opaque. Pulling logs, crashes, and metrics out of it is like pulling teeth. There are a lot of bells and whistles which are just missing. And the weirdest thing to me is how people keep using it to create "serverless websites" when that is really not its strength -- its strength is in distributed processing; in other words, long-running CPU-bound apps.
The dev experience is poor. We had to build our own system to deploy our builds to Lambda. Build our own canary/rollback system, etc. With Zappa it's better nowadays although for the longest time it didn't really support non-website-like Lambda apps.
It's expensive. You pay for invocations, you pay for running speed, and all of this is super hard to read on the bill (which function costs me the most and when? Gotta do your own advanced bill graphing for that). And if you want more CPU, you have to also increase memory; so right now our apps are paying for hundreds of MBs of memory we're not using just because it makes sense to pay for the extra CPU. (2x your CPU to 2x your speed is a net-neutral cost, if you're CPU-bound).
But the kicker in all this is that the entire system is proprietary and it's really hard to reproduce a test environment for it. The LambCI people have done it, but even so, it's a hell of a system to mock and has a pretty strong lock-in.
We're currently moving some S3-bound queue stuff into SQS, and dropping Lambda at the same time could make sense.
I certainly recommend trying Lambda as a tech project, but I would not recommend going out of your way to use it just so you can be "serverless". Consider your use case carefully.
The strategy Lambda seems to suggest you implement for testing/development is pretty laborious. There's no real clear way for you to mock operations on your local system and that's a real bummer.
A lot of things you run into in Python lambda functions are also fairly unclear. Python often will compile C-extensions... I could never figure out if there was really a stable ABI or what I could do to pre-compile things for Lambda.
All of those complaints aside - once you deploy your app, it will probably keep running until the day you die. So that's a huge upside. Once you rake through the muck of terrible developer experience (which I admit, could be unique to me), the service simply works.
So, if you have a relatively trivial application which does not need to be upgraded often and needs very good up-time.. it's a very nice service.
Lambdas have a lot of benefits - for occasional tasks they are essentially free, the simple programming model makes them easy to understand in teams, you get Amazon's scaling and there's decent integration with caching and logging.
However, especially since I had to use them for a whole solution, I ran into a ton of limitations. Since they are so simple, you have to pull in a lot of dependencies, which negates a lot of the ease of understanding I mentioned before. The dependencies are things like Amazon's API Gateway, AWS Step Functions, and the AWS CLI itself, which is pretty low-level. So the application logic is pretty easy, but now you are dealing with a lot of integration devops. API Gateway is pretty clunky and surprisingly slow. Lambdas shut themselves down, and restarting is slow. The Step Functions have a relatively small payload limit that needs to be worked around. Etc. So use them sparingly!
One thing to note. API Gateway is super picky about your response. When you first get started you may have a Lambda that runs your test just fine but fails on deployment. Make sure you troubleshoot your response rather than diving into your code.
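A quick sanity check for the proxy-integration shape, as a sketch. The key gotcha is that `statusCode` must be an integer and `body` must already be a serialized string, not a dict:

```python
import json

def valid_proxy_response(resp):
    """Rough check that a Lambda result matches what API Gateway's
    proxy integration expects (sketch, not an exhaustive validator)."""
    if not isinstance(resp, dict) or "statusCode" not in resp:
        return False
    if not isinstance(resp["statusCode"], int):
        return False
    # body must already be serialized to a string
    return isinstance(resp.get("body", ""), str)

print(valid_proxy_response({"statusCode": 200, "body": json.dumps({"ok": True})}))  # → True
print(valid_proxy_response({"statusCode": 200, "body": {"ok": True}}))  # → False
```

Running a check like this in your tests catches the "works locally, fails behind API Gateway" failure mode before deployment.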
I saw some people complaining about using an archaic version of Node. This is no longer true. Lambdas support Node V6 which, while not bang up to date, is an excellent version.
Anyway, I can attest it is production ready and at least in our usage an order of magnitude cheaper.
Claudia.js also has an API layer that makes it look very similar to express.js versus the weird API that Amazon provides. I would not use lambda + JS without claudia.
For usage scenarios, one endpoint is used for a "contact us" form on a static website, another we use to transform requests to fetch and store artifacts on S3. I can't speak toward latency or high volume but since I've set them up I've been able to pretty much forget about them and they work as intended.
- CPU power also scales with memory; you might need to increase it to get better response times
- Ability to attach many streams (Kinesis, Dynamo) is very helpful, and it scales easily without explicitly managing servers
- There can be an overhead: your function gets paused (if no data is incoming) or can be killed nondeterministically (even if it runs all the time or every hour), which causes a cold start, and cold starts are very bad for Java
- You need to keep your JARs small (50MB limit); you cannot just embed anything you like without careful consideration
Development can be tricky. There are a lot of all-in-one solutions like the Serverless framework; we use the Apex CLI tool for deploying and Terraform for infra. These tools offer a nice workflow for most developers.
Logging is annoying; it's all CloudWatch, but we use a Lambda to send all our CloudWatch logs to Sumo Logic. We use CloudWatch for metrics, but we have a Grafana dashboard for actually looking at those metrics. For exceptions we use Sentry.
Resources have bitten us the most: suddenly running out of memory because of the payload from a download. I wish Lambda allowed scaling up resources on a second attempt so that you could bump them after a failure; this is something to consider carefully.
Encryption of environment variables is still not a solved issue. If everyone has access to the AWS console, everyone can view your env vars, so if you want to store a DB password somewhere, it will have to be KMS. That's not a bad thing - it's usually pretty quick - but it does add overhead to the execution time.
Terrible deploy process, especially if your package is over 50MB (then you need to get S3 involved). Debugging and local testing are a nightmare. CloudWatch Logs aren't that bad (you can easily search for terms).
We have been using Lambdas in production for about a year and a half now, for 5 or so tasks, ranging from indexing items in Elasticsearch to small cron clean-up jobs.
One big gripe around Lambdas and integration with API Gateway is that they totally changed the way it works. It used to be really simple to hook up a Lambda to a public-facing URL so you could trigger it with a REST call. Now you have to do this extra dance of configuring API Gateway per HTTP resource, complicating the Lambda code side of things. Sure, with more customization comes more complexity, but the barrier to entry was significantly increased.
* Games are developed as command line tools which use JSON for input and output. They're pure so the game state is passed in as part of the request. An example is my implementation of Lost Cities
* Games are automatically bundled up with a NodeJS runner and deployed to Lambda using Travis CI
* I use API Gateway to point to the Lambda function, one endpoint per game, and I version the endpoints if the game data structures ever change.
* I have a central API server which I run on Elastic Beanstalk and RDS. Games are registered inside the database and whenever players make plays, Lambda functions are called to process the play.
I'm also planning to run bots as Lambda functions similar to how games are implemented, but am yet to get it fully operational.
Apart from stumbling a lot setting it up, I'm really happy with how it's all working together. If I ever get more traction, it'll be interesting to see how it scales up.
I was initially attracted to it as a low-cost tool to run a database (RDS) powered service side project.
- Zappa is a great tool. They added async task support, which replaced the need for Celery or RQ. Setting up HTTPS with Let's Encrypt takes less than 15 minutes. They added Python 3 support quickly after it was announced. Setting up a test environment is pretty trivial. I set up a separate staging site which helps to debug a bunch of the orchestration settings. I also built a small CLI to help set environment variables (Heroku-esque) via S3, which works well. Overall, the tooling feels solid. I can't imagine using raw Lambda without a tool like Zappa.
- While Lambda itself is not too expensive, AWS can sneak in some additional costs. For example, allowing Lambda to reach out to other services in the VPC (RDS) or to the Internet requires a bunch of route tables, subnets, and a NAT gateway. For this side project, that currently costs way more than running and invoking Lambda itself.
- Debugging can be a pain. Things like Sentry make it better for runtime issues, but orchestration issues are still very trial and error.
- There can be overhead if your function goes "cold" (i.e. infrequent usage). Zappa lets you keep sites warm (additional cost), but a cold start adds a couple of seconds to the first-page load for that user. This applies more to low volume traffic sites.
Overall: It's definitely overkill for a side project like this, but I could see the economies of scale kicking in for multiple or high-volume apps.
- No straightforward way to prevent retries. (Retries can crazily increase your bill if something goes wrong.)
- API gateway to Lambda can be better. (For one, Multipart form-data support for API gateway is a mess)
- (For Node.js) I don't see why the node_modules folder should be uploaded. (Google Cloud Functions downloads the modules from package.json.)
Anyway, I'd recommend starting by learning the tools without using a framework first. You can find two coding sessions I published on YouTube.
One thing to be careful of: if you're targeting input into DynamoDB table(s), it's really easy to flood your writes. Same goes for SQS writes. You might be better off with a data pipeline and slower progress. It really just depends on your use case and needs. You may also want to look at running tasks on ECS; depending on your needs, that may go better.
For some jobs the 5-minute limit is a bottleneck; for others it's the 1.5GB memory. It just depends on exactly what you're trying to do. If your jobs fit in Lambda's constraints, and your cold start time isn't too bad for your needs, go for it.
- works as advertised, we haven't had any reliability issues with it
- responding to Cloudwatch Events including cron-like schedules and other resource lifecycle hooks in your AWS account (and also DynamoDB/Kinesis streams, though I haven't used these) is awesome.
- 5-minute timeout. There have been a couple of times when I thought this would be fine, but then I hit it and it was a huge pain. If the task is interruptible you can have the Lambda function re-trigger itself, which I've done and which actually works pretty well once you set up the right IAM policy, but it's extra complexity you really don't want to have to worry about in every script.
- The logging permissions are annoying; it's easy for it to silently fail logging to CloudWatch Logs if you haven't set up the IAM permissions right. I like that it follows the usual IAM framework, but AWS should really expose these errors somewhere.
- Haven't found a good development/release flow for it. There's no built-in way to re-use helper scripts or anything. There are a bunch of serverless app frameworks, but they don't feel like they quite fit, because I don't have an "app" in Lambda; I just have a bunch of miscellaneous triggers and glue tasks that mostly don't have any relation to each other. It's very possible I should be using one of them anyway and it would change how I feel about this point.
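The re-trigger trick mentioned for the 5-minute timeout can be sketched like this. The chunked work and safety margin are placeholders, and the Lambda client is injected so the logic can be exercised without AWS:

```python
import json
# In real use, pass boto3.client("lambda") as `lambda_client`.

SAFETY_MS = 30_000  # re-invoke well before the 5-minute cap

def handler(event, context, lambda_client=None):
    """Process resumable chunks; hand leftovers to a fresh invocation."""
    cursor = event.get("cursor", 0)
    while cursor < event["total"]:
        # ... do one resumable chunk of work here ...
        cursor += 1
        if context.get_remaining_time_in_millis() < SAFETY_MS:
            lambda_client.invoke(
                FunctionName=context.function_name,
                InvocationType="Event",  # async fire-and-forget
                Payload=json.dumps({**event, "cursor": cursor}))
            return {"resumed_at": cursor}
    return {"done": True}
```

The IAM policy for the function needs `lambda:InvokeFunction` on itself, which is the setup step alluded to above.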
We use Terraform for most AWS resources, but it's particularly bad for Lambda because there's a compile step of creating a zip archive that terraform doesn't have a great way to do in-band.
Overall, Lambda is great as a super-simple shim if you only need to do one simple, predictable thing in response to an event. For example, the kind of thing that AWS really could add as a small feature but hasn't: sending an SNS notification to a Slack channel, or tagging an EC2 instance with certain parameters when it launches into an autoscaling group.
For many kinds of background processing tasks in your app, or moderately complex glue scripts, it will be the wrong tool for the job.
A few years back, the mantra was "hardware is cheap, developer time isn't". When did this prevailing wisdom change? Why would people spend hours/days/weeks wrestling with a system to save money, when it may take weeks, months, or even years to see an ROI?
- You can't trigger Lambda off SQS. The best you can do is set up a scheduled Lambda and check the queue when it's kicked off.
- Only one Lambda invocation can occur per Kinesis shard. This makes efficiency and performance of that lambda function very important.
- The triggering of Lambda off Kinesis can sometimes lag behind the actual kinesis pipeline. This is just something that happens, and the best you can do is contact Amazon.
- Python - if you use a package that is namespaced, you'll need to do some magic with the 'site' module to get that package imported.
- Short execution timeouts means you have to go to some ridiculous ends to process long running tasks. Step functions are a hack, not a feature IMO.
- It's already been said, but the API Gateway is shit. Worth repeating.
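The scheduled-poll workaround for SQS mentioned above looks roughly like the sketch below. The queue URL and batch limits are placeholders, and the client is injected so the loop can be exercised without AWS:

```python
def drain_queue(sqs, queue_url, process, max_batches=10):
    """Pull up to max_batches of 10 messages, process and delete each.
    Intended to run from a scheduled (cron-like) Lambda."""
    handled = 0
    for _ in range(max_batches):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue looks empty; stop until the next scheduled run
        for msg in messages:
            process(msg["Body"])
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
            handled += 1
    return handled

# def handler(event, context):
#     import boto3
#     return drain_queue(boto3.client("sqs"), QUEUE_URL, do_work)
```

Capping `max_batches` keeps each invocation inside the execution time limit; whatever is left waits for the next scheduled tick.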
Long story short, my own personal preference is to simply set up a number of processes running in a group of containers (ECS tasks/services, as one example). You get more control and visibility, at the cost of managing your own VMs and the setup complexity associated with that.
Then we implemented a RESTful API with API Gateway and Lambda. The Lambdas are straightforward to implement. API Gateway, unfortunately, does not have a great user experience. It feels very clunky to use, and some things are hard to find and understand. (Hint: request body passthrough and transformations.)
Some pitfalls we encountered:
With Java you need to consider the warmup time and memory needed for the JVM. Don't allocate less than 512MB.
Latency can be hard to predict. A cold start can take seconds, but if you call your Lambda often enough (often looks like minutes) things run smoothly.
Failure handling is not convenient. For example, if your Lambda is triggered from a Scheduled Event and fails for some reason, it gets triggered again and again, up to three times.
So at the moment we have around 30 Lambdas doing their job. Would say it is an 8/10 experience.
Here are my recommendations:
1) Use Serverless Framework to manage Functions, API-Gateway config, and other AWS Resources
2) CloudWatch Logs are terrible. Auto-stream CloudWatch Logs to Elasticsearch Service and use Kibana for log management
3) If using Java or other JVM languages, cold starts can be an issue. Implement a health check that is triggered on schedule to keep functions used in real-time APIs warm
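The keep-warm check is just an early return in the handler. The `{"warmup": true}` marker below is a convention you define in the scheduled event payload, not an AWS feature:

```python
def handler(event, context=None):
    # Scheduled ping? Return immediately so the container (and a JVM,
    # for Java functions) stays resident without running real work.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}
    # ... real request handling below ...
    return {"statusCode": 200, "body": "real response"}
```

A CloudWatch Events rule firing every few minutes with that payload is the usual way to drive it.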
Here's a sample build project I use: https://github.com/bytekast/serverless-demo
For more information, tips & tricks: https://www.rowellbelen.com/microservices-with-aws-lambda-an...
Since then I've been using Serverless for all my projects and it's the best thing I've tried thus far. It's not perfect, but now I'm able to abstract everything away as you configure pretty much everything from a .yml file.
With that said, there are still some rough spots with Lambda:
1) Working with env vars. Default is to store them in plain text in the Lambda config. Fine for basic stuff, but I didn't want that for DB creds. You can store them encrypted, but then you have to setup logic to decrypt in the function. Kind of a pain.
2) Working within a subnet to access private resources incurs an extra delay. There is already a cold start time for Lambda functions, but to access the subnet adds more time... Apparently AWS is aware and is exploring a fix.
3) Monitoring could be better. Cloudwatch is not the most user friendly tool for trying to find something specific.
With that said, as a whole Lambda is pretty awesome. We don't have to worry about setting up EC2 instances, load balancing, auto scaling, etc. for a new API. We can just focus on the logic, and we're able to roll out new stuff so much faster. And our costs are pretty much nothing.
I think a lot of people try to use the "serverless" stuff for unsuitable workloads and get frustrated. We are running a kubernetes cluster for the main stuff but have been looking for areas suitable for lambda and try to move those.
I'm not allowed to give you any numbers; here's an old blog post about Sketch Cloud: https://awkward.co/blog/building-sketch-cloud-without-server... (however, this isn't accurate anymore). For this use case, concurrent executions for image uploads is a big deal (a regular Sketch document can easily consist of 100 images). But basically the complete API runs on Lambda.
Running other languages on Lambda can be easily done and can be pretty fast, because you simply use node to spawn a process (Serverless has lots of examples of that).
Let me know if you have any specific questions :-)
Hope this helps.
A few pointers (from relatively short experience):
- The best use case for Lambda seems to be stream processing, where latency due to start-up times is not an issue
- For user/application-facing logic, the major issue seems to be start-up times (especially JVM startup times when doing Java, or when your API gets called very rarely) and API Gateway configuration management using infrastructure-as-code tools (I'd be interested in good hints about this, especially concerning interface changes)
- The programming model is very simple and nice, but it seems to make the most sense to split each API over multiple Lambdas to keep them as small as possible, or to use some serverless framework to make managing the whole app easier
- This goes without saying, but be sure to use CI and do not deploy local builds (native binary deps)
I do remember logging being a confusing mess when I was trying to get this started. I feel better about the trouble I had now that I see it wasn't just me. But for a side project that's very simple to use, Lambdas have been a blessing. I get this functionality without having to manage any servers or create my own API with something like Python+Flask. Having IAM and authentication built in for me made the pain from the initial set-up so worth it.
Lambda doesn't like big app binaries/JARs, and Amazon's API client libs are bloated: Clojure + Amazonica easily goes over the limit if you don't manually exclude some of Amazon's API SDK JARs from the package.
On the plus side, you can test all the APIs from your dev box using the cli or boto3 before doing it from the lambda.
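To make that concrete: the same S3 call a Lambda handler will make can be exercised from a dev box first. A small sketch, where the client is passed in so the identical code path runs locally with your `aws configure` credentials (bucket/key names are placeholders):

```python
def save_result(s3_client, bucket, key, body):
    """The exact call a Lambda handler would make; test it locally first."""
    s3_client.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))

if __name__ == "__main__":
    import boto3  # only needed when actually talking to AWS
    # "my-test-bucket" is a placeholder; point this at a real bucket.
    save_result(boto3.client("s3"), "my-test-bucket", "out/result.txt", "hello")
```

Once that works from the laptop, wiring the same helper into the Lambda handler removes one whole class of "works locally, fails deployed" surprises.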
Would probably look into third party things like Serverless next time.
The worst part about it by far is CloudWatch, which is truly useless.
Check out https://github.com/motdotla/node-lambda for running it locally for testing btw - saved us hours!
1. Installing your own Linux packages isn't trivial (we had to install the bpg encoder). They use a strange version of the Amazon Linux AMI.
2. Lambda can listen to events from S3 (creation, deletion, ...) but can't seem to listen to SQS events. WTF? It seems like Amazon could fix this really easily.
3. Deployment is wonky. To upload a new Lambda zip file, you need to delete the current one. This can take up to 40 seconds (during which you have total downtime).
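For what it's worth, the Lambda API does expose an in-place code swap via `update_function_code`, which avoids the delete-then-upload dance described above. A sketch with boto3 (the function name is hypothetical, and the client is injectable so the zip-building part can be checked without AWS):

```python
import io
import zipfile

def make_zip(files):
    """Build an in-memory deployment zip from {archive name: source text}."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, text in files.items():
            zf.writestr(name, text)
    return buf.getvalue()

def deploy(lambda_client, function_name, files):
    """Replace an existing function's code without deleting it first."""
    return lambda_client.update_function_code(
        FunctionName=function_name,
        ZipFile=make_zip(files),
    )
```

There may still be a brief window while the new code propagates, but it's not the full delete/re-create downtime.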
- Runs fast, unless your function was frozen for not enough usage or the like
- Easy to deploy and/or "misuse"
- Debugging doesn't really work
All in all, probably the least painful thing I've used on AWS. But that doesn't necessarily mean much.
For logging, we pipe all of our logs out of CloudWatch to LogEntries with a custom Lambda, although looking at CloudWatch logs works fine most of the time.
Building reactive systems with AWS Lambda: https://vimeo.com/189519556
We also use it to perform scheduled tasks (e.g. every hour) which is good as it means you don't have to have an EC2 instance just to run cron like jobs.
The main downside is CloudWatch Logs: if you have a Lambda that runs very frequently (e.g. 100,000+ invocations a day), the logs become painful to search through, and you end up having to export them to S3 or Elasticsearch.
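The hourly-task setup mentioned above boils down to one CloudWatch Events rule pointing at the function. Roughly, with boto3 (names/ARNs are placeholders, and note the function also needs a resource policy allowing events.amazonaws.com to invoke it, which is omitted here):

```python
def schedule_function(events_client, rule_name, schedule, function_arn):
    """Fire a Lambda on a schedule via a CloudWatch Events rule.

    `schedule` uses rate or cron syntax, e.g. "rate(1 hour)" or
    "cron(0 * * * ? *)" for the top of every hour.
    """
    events_client.put_rule(Name=rule_name, ScheduleExpression=schedule)
    events_client.put_targets(
        Rule=rule_name,
        Targets=[{"Id": rule_name, "Arn": function_arn}],
    )
```

That's the whole "cron without an EC2 instance" trick: one rule, one target.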
I need to say that you should use gordon (https://github.com/jorgebastida/gordon) to manage it; Gordon makes the process easier.
API Gateway is a little rougher, but slowly getting there.
- For serverless APIs querying the S3 data produced by the above workload
Difficulties faced with Lambda (so far):
1. No way to do CD for Lambda functions. [Not yet using SAM]
2. Lambda launches in its own VPC. Is there a way to make AWS launch my Lambda in my own VPC? [Not sure.]
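On point 2: Lambda can in fact be attached to your own VPC through the `VpcConfig` setting (at the extra cold-start cost others in this thread mention). A sketch with boto3, where the subnet and security group IDs are placeholders:

```python
def vpc_config(subnet_ids, security_group_ids):
    """VpcConfig block as accepted by update_function_configuration."""
    return {"SubnetIds": list(subnet_ids),
            "SecurityGroupIds": list(security_group_ids)}

def attach_to_vpc(lambda_client, function_name, subnet_ids, sg_ids):
    """Move an existing function into your own VPC."""
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        VpcConfig=vpc_config(subnet_ids, sg_ids),
    )
```

The same block can be passed to `create_function` at creation time instead.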
It fails once in a while and the experience is bad, but that's mostly due to our tooling around failure states instead of the platform itself.
The only negatives are:
- cold start is slow, especially from within a VPC
- debugging/logging can be a pain
- giving a function more memory (~1GB) always seems to be better (I'm guessing because of the extra CPU)
Would be really great to have this configurable along with CPU/memory.
Additionally, being able to mount an EFS volume would be very useful!
- The CPU power available seems to be really weak. Simple loops in NodeJS run significantly slower on Lambda than on a 1.1 GHz MacBook, even with the memory scaled up to near 512 MB.
- Certain elements, such as DNS lookups, take a very long time.
- The CloudWatch logging is a bit frustrating. If you have a cron job, it will sometimes lump time periods into a single log file and other times keep them separate. If you run a lot of them, it's hard to manage.
- It's impossible to terminate a running script.
- The 5 minute timeout is 'hard', if you process cron jobs or so, there isn't flexibility for say 6 minutes. It feels like 5 minutes is arbitrarily short. For comparison Google Cloud Functions let you work 9 minutes which is more flexible.
- The environment variable encryption/decryption is a bit clunky; they don't manage it for you, so you have to actually decrypt it yourself.
- There is a 'cold' start where once in a while your Lambda functions will take a significant amount of time to start up, about 2 seconds or so, which ends up being passed to a user.
- Versions of the environment are updated very slowly. Only last month (May) did AWS add support for Node v6.10, after having a very buggy version of Node v4 (a lot of TLS bugs were in the implementation)
- There is a version of Node that can run on AWS Cloudfront as a CDN tool. I have been waiting quite literally 3 weeks for AWS to get back to me on enabling it for my account. They have kept up to date with me and passed it on to the relevant team in further contact and so forth. It just seems an overly long time to get access to something advertised as working.
- If you don't pass an error result in the callback, the function will run multiple times; it won't just display the error in the logs. But there is no clarity on how many times or when it will re-run.
- There aren't good ways to manage parallel Lambda tasks, e.g. to see whether two Lambda functions executed at the exact same time are doing the same thing.
- You can create cron jobs using an AWS CloudWatch rule, which is a bit of an odd implementation: CloudWatch can create timing triggers to run Lambda functions despite CloudWatch being a logging tool. Overall there are many ways to trigger a Lambda function, which is quite appealing.
The big issue is speed & latency. Basically it feels like Amazon is falling right into what they're incentivised to do: make it slower (since it's charged per 100ms).
PS: If anyone has a good model/provider for 'serverless SQL databases', kindly let me know. The RDS design is quite pricey, since you pay for constantly running DBs.
- Use environment variables
- Use step functions to create state machines
- Deploy using cloudformation templates and serverless framework
You don't need to use the API gateway.
Just talk direct to Lambda.
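Invoking a function directly is one boto3 call. A minimal sketch (function name and payload are illustrative):

```python
import json

def call_lambda(lambda_client, function_name, payload):
    """Invoke a Lambda directly, no API Gateway in front."""
    resp = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",  # "Event" = async fire-and-forget
        Payload=json.dumps(payload).encode("utf-8"),
    )
    # The response Payload is a file-like stream of the function's result.
    return json.load(resp["Payload"])
```

The trade-off is that callers need AWS credentials and the `lambda:InvokeFunction` permission, which is exactly what API Gateway exists to paper over for public clients.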
#2 - Gain maximizer. Sell only half of your position on a winning trade when you reach your target point. You're covered in case it goes down (you still need a stop loss). Then if it goes up, it's all good :) Make sure to set up a second price target to exit. Always stick to rule #1. If you hang on to your stocks with no target price - like your average Joe - you'll go negative.
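As a toy sketch, the rule above can be written as a single decision function (the numbers in the usage are illustrative, and none of this is advice):

```python
def manage_position(shares, price, stop, target1, target2, half_sold=False):
    """One tick of the 'gain maximizer' rule: sell half at the first
    target, the rest at the second, and always honor the stop loss.

    Returns (action, quantity).
    """
    if price <= stop:
        return ("sell", shares)           # rule #1: the stop loss comes first
    if price >= target2:
        return ("sell", shares)           # second target: exit the rest
    if price >= target1 and not half_sold:
        return ("sell", shares // 2)      # first target: take half off
    return ("hold", 0)
```

Writing the rule down like this also makes the "always stick to rule #1" part mechanical: the stop-loss branch is checked before anything else.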
Welcome to the market.
Trading is a zero-sum game. It's the same as playing in a casino. The only ones who make money consistently from trading are:
- your broker
- people holding illegal privileged info not yet released in the media
- big funds, tricking people like you
In the long term, the value of stocks follows the profits of the underlying companies. You'll do far better if you manage to learn how to do value investing, slowly and calmly, never really selling the companies you commit yourself to, while focusing on your work (which usually pays way more than what you can earn in the stock market).
I think people are generally addicted to losing, to the thrill of winning or losing something, when actually, if you want to get rich and consistently become wealthier, there's no thrill; it's a quite boring road. It's up to you which one you take. The internet is full of forums with crazy people who pretend to be traders and winners, but they are all actually addicted to the thrill and to losing. Just check how crazy people are about failed businesses and penny stocks. I wouldn't bet my money, or, worse, something more important - my peace of mind - on this crap.
I want you to answer a question: how anxious are you? I think a lot. You even came to HN to talk about your loss. I think you are one of the people who will never win at this game, but can actually win in life. You just need to learn how to control it.
I've been managing to grow my assets slowly for years; I've spent enough time in the market. If you want to know more, shoot me an e-mail. I won't tell you what stock to buy, but I'll guide you so you can make your own choices. I could already live a few years without working, but I'm so satisfied with my life that I don't have to. I travel every year and so on... and I don't even make that crazy software dev salary some people in the US can.
Also... practice sports. The longer you live, the more money you can make.
I used to be very sick about trading and so on. Now I'm healthier, wealthier, less anxious - everything.
From my "fun" portfolio personally, which is mostly tech stocks, there are many small losers but a few massive winners that outweigh the losses. When I was trading on a daily / weekly basis I never saw this, but after holding for quarters it's become my new approach. Instead of checking daily, now I check the numbers every few weeks.
If it's any consolation, I got started with investing in the 3 months before the housing crisis in 2008 and watched 50% of what I invested in relatively stable funds disappear basically instantly. That was depressing, but in time all of them came back. For the long term, you've really gotta trust dollar-cost averaging and just wait out the bumps. After surviving that rough introduction, everything else feels minor.
Trading is a really hard way to make money.
The best way to bounce back from a loss is to make sure you will never ever place yourself in the position to make the kind of loss from which you can't recover.
You did this, now never ever do it again. There is no sense in trading unless you have a provable long term edge.
Most people would say good luck at this point. I would instead say, re-evaluate if trading makes any sense. Why do it?
Are you willing to spend 20 years "mastering it", only to accept that one almost never truly masters the strangeness of financial markets, but perhaps develops a bit of character and picks up some sound principles and a few unique chance insights for which there are no shortcuts?
Here's how it works for a casino: the outcome of any given roll of the dice (say) is random. But over a thousand rolls of the dice, the odds of the game are not random. For a casino, obviously the games are set up so the odds favor the house, and "in the end the house always wins." For a trader, your "odds" against the market are determined by your skill. Every individual trade is still essentially random, but if you're good at what you do - if your market analysis, entry timing, risk/money management, and adherence to your trading rules is solid - over a large enough sample size the outcome will be net positive. It's a strange duality for some people to grasp.
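The casino analogy can be made concrete with a toy simulation: a trader who wins only 45% of the time, but whose winners run twice as far as their losers, has a positive expectancy, and that edge only becomes visible over a large sample of individually random trades (all numbers here are illustrative):

```python
import random

def expectancy(win_rate, avg_win, avg_loss):
    """Average profit per trade, in risk units."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

def simulate(n_trades, win_rate, avg_win, avg_loss, seed=42):
    """Each trade is random; the edge only shows over many trades."""
    rng = random.Random(seed)
    return sum(avg_win if rng.random() < win_rate else -avg_loss
               for _ in range(n_trades))

# 45% win rate, 2:1 reward/risk -> +0.35 units per trade on average,
# even though most individual trades can cluster into losing streaks.
```

Any single run of a few trades can easily be negative; it's the thousand-roll view where the house (or the skilled trader) wins.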
A good first read: High Probability Trading (Marcel Link)
The Bible: Reminiscences of a Stock Operator (Edwin Lefevre)
The basics: Technical Analysis of the Financial Markets (John Murphy), Japanese Candlestick Charting Techniques (Steve Nison - skip the first chapter), Trading and Exchanges (Larry Harris)
Good follow ups: All the Market Wizards books (Jack Schwager)
Psychology: Trading in the Zone (Mark Douglas), The Nature of Risk (Justin Mamis)
Less theory, more practical: Mastering the Trade (John Carter)
Putting it all together: How to Buy (out of print) and When to Sell (both Justin and Robert Mamis; slightly outdated but 100% worth reading.)
Some of these books are "expensive." But even if you only learn one thing from a book, it'll have paid for itself 10 times over. Take copious notes. Find the commonalities. Filter out the bullshit. Watch the markets; see what "clicks" for you. The trading exercise towards the end of Trading in the Zone is a great way to test yourself without risking a whole lot. (On that note, a theme you'll find: if losing 2 or 3 thousand is going to be an issue for your current well-being, it's perhaps better to wait to put money up until such a cost is somewhat closer to pocket change...)
If that all sounds like too much effort, then perhaps the more typical buy index funds and hold until you retire answer you'll often get around here is more your speed. No shame in it.
It's a Bluetooth device that allows controlling the mouse cursor with body movement (head or finger, etc.). It's cheaper. Coupled with free dwell-clicking software, it should work!
2. Eye tracker - there are a lot of options; visit reddit.com/r/eyetracking and reddit.com/r/ALS and ask them for advice. These devices let you control a PC with your eyes and are especially designed for people who have ALS. The ones that work really well cost money, but most insurance companies cover them in full. Avoid Tobii; they are not reliable and are more marketing than anything. MyGaze, LC Technologies, EyeTech Digital, SMI Vision - these are all companies you can trust. All should offer free trial periods and should have a rep who can come and visit your dad to do an evaluation. If they don't offer at minimum a 2-week trial, they're not a trusted company. Secondly, you can contact your local city's AT clinic; they have donated equipment for situations like this.
I hope this helps!
Quadriplegic just means all four limbs are impaired. The degree of impairment can vary substantially. One of these men had use of his arms, but did not have full use of his hands. He drove himself to work, had a full time job, wife and kid. He broke his neck in a pool accident in his teens. He used a manual wheelchair. He was able to use a manual wheelchair because he had use of his arms. He chose it over an automatic wheelchair to get in regular exercise.
The other was substantially more impaired. He broke his neck in a riding accident later in life. He had been a brilliant surgeon. He used an automated wheelchair. I think he had partial use of one arm and maybe a couple of fingers, which allowed him to navigate a smartphone with that hand. He came in once a week for a few hours to review surgical reports for the company. When ordinary claims processors (like I was) could not figure out if the surgery was covered and their boss with more training couldn't either, we printed off the entire file and hand delivered the paper version to this man on Friday afternoon. I had one claim go to him and hand walked my papers to the meeting.
I also attended an educational talk given by the two of them. This is how I know how they each broke their neck and other details.
Since your father was a consultant, he may be able to return to doing consulting work at some point in the future. The specialized knowledge in his head does not stop being valuable just because of his physical limitations. I am mentioning this because new quadriplegics are often suicidal. They feel that life is simply over. It's not. He was a professor and consultant, like this former surgeon, his knowledge and expertise still has value. Even though the former surgeon could no longer work as a surgeon, his knowledge of surgery was valuable and he had a unique very part time job at a world class company.
Depending on the exact details of your father's limitations, he may also benefit from the use of ordinary things like smart phones with apps. There are also a lot of non-tech assistive devices, like chairs to help them shower and spoons that can be strapped to their hand so they can feed themselves if they have arm movement but limited hand control.
It occasionally has to be reset by hand if the voice recognition locks up, which is the only barrier. But I'm fairly certain it's the best option available for people in your father's situation.
First, if your dad can still move his head you can use Apple's assistive tech to "tab" through the items on the screen with a turn one way, and "clicking" on an item by turning his face the other.
Second, MS Windows' voice control is actually really decent. You can browse, search, send emails, etc. all with your voice. It takes some training (both for the user and the machine) but my dad has gotten pretty quick with his.
Lastly, there's a bunch of eye trackers out there now, and you can use them for a lot of things. I set up CameraMouse (http://www.cameramouse.org/) for when voice wasn't quite cutting it (or my dad got tired of talking).
Unfortunately, there's no perfect solution, and all require time to adjust.
Source: https://www.twitch.tv/nohandsken quadriplegic streamer who plays Diablo/Path of Exile, Heroes of the Storm, World of Warcraft, etc. (I encourage Amazon Prime subscribers to give him ~$2.50 every 30 days via their free Twitch sub! https://help.twitch.tv/customer/portal/articles/2574674-how-... )
slightly related/helpful discussion: https://github.com/melling/ErgonomicNotes
I remember hearing about this project some time ago: https://github.com/OptiKey/OptiKey
It might be helpful as it's an open-source project and if extra features are needed you might be able to add them yourself if you are a programmer.
I mentally bookmarked it because I felt it would be a good "make the world a better place" type project to contribute to if I ever had some spare time.
Thanks for bringing this request here to allow the community the chance to contribute!
- SmartNav (if Mac, you need to buy via a 3rd party, but it includes the software)
Fairly expensive; there are other variants that cost less/more, and some gaming devices like TrackIR might work as well. It's possible that health insurance would pay for these types of devices?
I personally use Smartnav about 50% of the time I am programming, along with Dragon/Voicecode due to RSI issues.
Smartnav + Dragon might be enough for using laptop/desktop, not so much for mobile devices. If he actually programs I would recommend voicecode.
All of these technologies have a massive learning curve.
You might want to checkout the voicecode forum and slack channel, I know there are some quadriplegic programmers in that community who would have better insight than I.
First, a voice setup with Alexa or similar can really help.
With regards to phone use, some of our users have an attachment to put the phone close to their head and use their nose to "click/select" (they can move their head).
Eye tracking technology is really impressive these days (can be as fast as using a mouse). I've recently demoed a system with a Tobii sensor (https://www.tobii.com/) that was hooked up to a laptop, very impressive when combined with appropriate software (it handles scrolling, keyboard shortcuts, etc in a custom interface). I'm not sure with regards to phone/tablet use how well they integrate.
Ping me on Linkedin if you'd like to talk more.
I'm truly sorry about your dad. That's a scary situation for him to be thrust into.
I have tried most of the commercial solutions available, and I think the best headmouse for your dad would be the Zono mouse (http://www.quha.com/products-2/zono/). It is very easy to use and as accurate as a normal table-top mouse.
Tecla is great; you should give it a try. Depending on his comfort and ability, a head-tracking mouse from Orin is pricey but works really well with a laptop/desktop setup. Dragon NaturallySpeaking is useful too.
Also he should make an appointment with a local assistive technology practitioner soon to get a run down of all the options, both low and high tech. You can find these ATP folks at most all rehab hospitals.
I think they've created software that can bypass captchas and will work with you to develop software that can help your dad.
Sepsis now dominates the hospital ICU. It is what kills most AIDS patients too. Antibiotic resistance is driving costs. The ICU is now 40% of US hospital budgets. This is bankrupting state and federal budgets. This is why Medicare, Medicaid, and Obamacare are bankrupting US government (federal and state) budgets. In 2013, health dominated state budgets.
State spending on health care now exceeds education spending. Look at NM's past budgets: http://www.usgovernmentspending.com/compare_state_spending_2...
Today 1/4 of US VA and Indian Health patients are diabetic. US Defense Dept. funding must now compete with Medicare. Today 40% of hospital costs are for growing ICUs and chronic disease. Half of US Medicare costs = chronic disease from diabetes.
NM ICUs are dominated by chronic disease. http://www.amazon.com/Where-Night-Is-Day-Politics/dp/0801451...
40% of US hospital budgets now pay ICU/chronic disease costs, and this cost is going up annually. http://money.cnn.com/video/technology/2013/07/24/fortune-tra...
Can MinION help pre-ICU patients better control diet and sepsis infection? http://www.bloomberg.com/news/articles/2015-06-03/deadly-inf... A complete bacterial genome with MinION: http://www.nature.com/nmeth/journal/v12/n8/abs/nmeth.3444.ht...
MinION can find septic bacteria fast: https://genomemedicine.biomedcentral.com/articles/10.1186/s1...
A friend of mine studied really hard for three months while working a full time job to get into one of the big four.
He got the job purely because he studied coding interview questions. So he moved to the other side of the world with his gf to work there.
A year later he was put on a PIP; he got off the PIP but then had so much anxiety he couldn't concentrate on his work. Now he has to move back home with little to show for it.
I know this is hard to hear but you most likely will fail until you change your mentality. You're spending your time being very hard on yourself rather than constructively asking why you want this and what is the next step I can take.
Also, based on my experience with interviewing and being interviewed, the questions are difficult on purpose. If everyone got the solution, you couldn't differentiate between applicants, but if they are very tough, you can get a sense of a person's problem-solving skills, how determined they are, what types of questions they ask to proceed through the problem, etc. It shows a lot about a person's thought process and how likely they are to be successful when a tough problem is thrown at them. Often, getting to the solution isn't the point of the question. Seeing how someone solves something they haven't encountered before is much more informative (especially for companies and startups working on things that are game changers).
It's pretty in depth and extremely popular (44k stars). Hope it helps.
I wouldn't worry about dynamic programming or network flow per se. Everyone finds those hard and looks up the algorithms.
These are at the harder end. Recursion is maybe somewhere in the middle.
That said the competition level could be very high at these companies and ask yourself if you really want to work in such an environment. Also ask if you want to work on what they work on.
Interview questions are the same as the word problems we see in math class. You have to map the problem into CS terms, then apply what you know. Forget finding something optimal at first. Once you have the toolbox of algorithms/data structures, you will find success.
Don't worry so much. Keep preparing by focusing on small bite size chunks you can master. Then attempt some interviews.
Which is why the interview process is broken. We already go through that painful exercise in college. Companies should stop being lazy and look for a better solution to double check that you really went to college. They could focus on a lot of other things in 45 minutes, projects, behavior, culture fit, etc.
Once you're more confident, consider using something like Triplebyte or interviewing.io to do your tech challenges and hopefully skip past some of the earlier tech challenges.
For what it's worth, as a hiring manager I would also say that, generally speaking, I'm not interested in whether you got the "correct" answer in 45 minutes, and I'm certainly not interested in perfection in 45 minutes. Don't worry about being perfect. Just worry about being competent. I'm far more interested in the bigger picture. Things like:
1) Can you write code in the first place? The basics should be easy for you.
2) Are you familiar with the language you're writing? You shouldn't have to look up how to sort an array or how to declare a function, for example, and your code should be clean and readable.
3) Can you properly assess the problem and begin working on a solution? Take a moment to think about it. Ask follow-up questions if necessary. I always try to repeat the challenge back in my own words, just to make sure I understand what's expected.
4) Is your solution heading in the right direction? If not, you either don't know what you're doing or I didn't explain the challenge well enough.
5) Can you identify and fix edge cases? Usually the problem I give you will have some reasonably obvious edge-cases, like the popular FizzBuzz test has.
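To make point 5 concrete: even FizzBuzz has an edge case - numbers divisible by both 3 and 5 - that trips people up when the branches are ordered wrong. A typical pass:

```python
def fizzbuzz(n):
    """Classic interview warm-up; the combined 15-case must come first,
    otherwise the n % 3 branch swallows multiples of 15."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Spotting and handling that ordering issue, unprompted, is exactly the kind of signal the interviewer is looking for.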
Trust me. As an interviewer, I know you're nervous. I don't expect perfection. I expect thoughtfulness, progress, and adaptability. I expect you to think through a problem, make progress toward a solution, and be able to make changes as necessary to fix edge cases and unexpected problems. That's all. (ha.)
IMO, if your interviewer is fixated on whether it's "perfect" or whether you got the exact right answer, they're not good interviewers and perhaps you shouldn't want to work with them in the first place. In fact, an ideal programming challenge should have multiple solutions (e.g., there are many ways to sort a list). The challenge should be more about figuring out how you think than whether you can add 2+2.
Also, keep in mind that the interview works both ways: you should be interviewing them just as much as they are interviewing you. Every question you're being asked is a question you can ask them. If they don't think you're doing a good job, they won't hire you, and if you don't think they're doing a good job, you don't have to work there. Keep in mind that the bare minimum for a programming job is... programming. If that's all they focus on during an interview, they're missing out on everything else you can bring to the table. Imagine if the only question they asked truck drivers was "Can you drive?" or the only question they ask a journalist is "Can you write?"
Try LeetCode if you haven't already. Start with topic-wise easy questions, then medium, then difficult, and participate in the message board discussions about problems. You need time and discipline; that's all.
1. Time became my most valuable asset. Everything was filtered through the lens of "does this save me time?" and so I optimized everything: The gym (worked out at home), shopping (got delivered), dating (used fleshl...joking! :).
In the words of Joel Spolsky, "Every day that we spent not improving our products was a wasted day".
2. I worked harder than ever before. My job was tough but output ebbed and flowed with meetings, management, plus the usual office time wasters. The startup workday is more straightforward: wake up, coffee, write code, listen to users, coffee, learn how to add value to the market, coffee.
3. Every two months or so I look back and shake my head at how lame the product was, how little I knew, and how inefficient my workflow was. Which is to say, I continue to learn at an incredible clip yet realize I still don't know a thing. I expect this trend to continue - if it doesn't, I'm not growing.
So, yeah, overall it's been An Incredible Journey. My only regret is that I didn't start sooner.
It's actually a gift how easy it is to go from idea to product to business. To paraphrase Murakami, 'If you're young and talented (or can code), it's like you have wings'.
We're living in the best of times.
Anyway, I started contracting last year for this exact reason, at around a 300-400 day rate, and now I've saved enough to quit and follow my 'dream'; my last day is July 7th. I have enough savings to last me 2 years while sustaining my current social life. No frugality required.
When you're building a Nights and Weekends side project, you get used to stealing whatever free hours you can to work on the product. But you also necessarily build things so that they don't take up much of your time every day. If they did, it would interfere with your day job and that just wouldn't work.
So when you remove the day job, you find that suddenly you have this successful business that runs itself in the background and you can do pretty much whatever you want with your day.
Most people in this situation will immediately fill that time up with work on the product, and I did to some extent. But I also made sure to take a bunch of that time just to enjoy with my family. I eventually settled on 2-3 days a week where I was "at work", with the rest devoted to other pursuits. My wife and I are both rock climbers, which is an "other pursuit" that will happily expand to fill the time available. We're also parents, so ditto there.
I also make a point of downing tools for a while from time to time. Again, because I can.
I took the kids out of school and dragged them off backpacking around Southeast Asia for a few months the first year. We did a couple more medium-sized trips this year, and I took the entire fall and spring off because those are the best times for bouldering in the forest here. Again, work is happy to ramp up or down to accommodate, because I never shifted it out of that ability to run on nights and weekends.
So now, I burst for a few weeks at a time on work stuff (with possibly a more relaxed definition of Full Time than most would use), then slow down and relax for a bit.
It's actually not so bad.
Since I've been working as a mobile developer, and also management consultant (my other career), it's always been extremely easy for me to find a new job whenever I needed, so there has been very little risk involved.
Still, it did require some savings, since our startup is very research intensive and will take several years before we see any revenue. We secured some basic funding now though, and things are looking good for the next stage too so I will only have needed 6 months or so of buffer.
In summary, my view is that if you're a reasonably skilled engineer or have some other attractive occupation, there is nothing to fear. The worst that can happen is really that your startup doesn't work, you'll go back to what you did before with a few months of missed income but with plenty of useful experience.
I don't think there are many situations or cultures where a failed technology startup attempt on your résumé would count against you in any way; in most places, quite the opposite.
The second time, I created this plugin: http://plugins.netbeans.org/plugin/61050/pleasure-play-frame... I tried to make a living off it, but I only sold 5 licenses at 25 dollars per year, and developing the plugin took me two and a half months of hard work.
Didn't have a problem getting a new job both times.
Having the luxury to focus on one thing, rather than juggling several, is much like having an office that is neat, tidy, and uncluttered. It feels good in the same way. At least by quitting a job and focusing on a startup, you have the option to focus 100% on it. Actually focusing 100% on one thing is a difficult skill in itself, even with the right circumstances; however, it's completely impossible (at least for me) with two fulltime jobs at once, especially jobs like teaching (which involve lots of public speaking at scheduled times) or running a website with paying customers (which demands, e.g., responding to DDOS attacks).
I dream of working for myself but I've never taken the plunge. My income from side projects is about 1/3 of the way to my minimum number to quit and go full time.
I do a lot of thinking about this; my number is the same as my financial independence / early retirement number.
One of the biggest things that holds me back is medical insurance for a family of 5. Having an employer offsets this cost a LOT.
I reached a point after about 4 months where I realised the journey to make the business profitable would most likely be a five year slog, and while the opportunity was there it wasn't a cause I felt I could devote 5 years of my life to.
So I gave the software away for free to the people who were helping with beta testing and went back to my job. The most positive thing was how it propelled my career at my current employer: I got a better role, they seem to have more respect for me, and I operate more independently now.
So I suppose if you can build some sort of safety net before quitting that helps.
I didn't take the plunge until quite late on, waiting until it was making enough money to comfortably cover my personal expenses. No regrets there - growth was slow in the early days and if I hadn't had the luxury of a monthly pay packet, I probably would have given up before I got the chance to properly validate the business.
Transitioning to full time brought more stress than I expected, but the experience is priceless. In the past few months alone I've learnt more than I did in 3 years of employment.
Realistically, what's the worst case scenario? I'm a reasonably skilled dev in a strong market so there's not much to lose. If it all goes wrong I'll get another job with a load of experience (and stories!) under my belt.
It was scary as hell (no revenue coming from the startup), fun as hell, challenging as hell. Had my savings all planned out to help support the adventure, but still had that daily stress of knowing every dollar I spent was not coming back anytime soon. That part wasn't fun. But I didn't have kids or a mortgage and knew this was my chance to do something of the sort.
10/10 would do again in a similar situation, though knowing what I know now, I might have launched a business instead of chasing a cool idea.
I'm back working for a startup again now so I guess I'm just going back and forth.
I've worked for a few startups and none of them has had an exit yet, but one that I have shares in is doing relatively well.
Doing contracting work is a smarter decision in general. You can actually plan to make a sizeable amount of money and then watch it happen without taking much risk - it's all within your control. With startups, you might often feel that things are outside of your control, especially if you're not a co-founder.
I have a previous coworker who'd love to help me, but I don't want to babysit his work, and I feel he's not valuable enough to the business. I would like another cofounder, but it doesn't bother me that I'm doing it all on my own; I have spent the last 10 years getting ready for this, so I'm more than ready. I am doing more than okay on my business alone, but I wish I had some expertise for a second opinion. I am really, really thinking about going into an accelerator program or seeking angel investment, but I'm apprehensive about taking cash at this (or any) stage. My biggest fear is actually having to get a real job again; I will do anything to prevent that from happening, since that would mean my startup is dead.
As with everything there are pros and cons. The pros are obviously that you get to spend your time doing something you enjoy (hopefully), and can work whenever and wherever you feel like (this can also be a con!). The cons are that you will always be worrying about things like churn, whether servers will go down whilst you're away on holiday, how you're going to grow enough to support a family, and so on.
The long, slow SaaS ramp of death really is a thing, and there are no silver bullets in terms of growth/marketing - just many small things that all contribute. I also always used to think 'if only I could just get to $x MRR then everything would be so much better and I'd be much more comfortable and relaxed', but when you do eventually break through that barrier you realise you're just more worried about how you are going to reach the next one, so it's kinda never-ending!
I also agree with other posts here that if you're already a decent developer in a good market, then what is the worst that can happen, really? Try doing some fear-setting. I'm sure you could always find another job if your thing doesn't work out, but you do need to give these things time. I also failed a bunch of times with other startup ideas, one of which was also YC backed.
There is this assumption that one must build a minimum viable product that has to be released as quickly as possible, so much so that it's become startup mantra. It's no surprise that a lot of these products seem technically shallow; everyone is reaching for low-hanging fruit.
I feel rather alone trying to do something that I think hasn't been done before, or if it had, wasn't executed well. I don't think I could possibly commit to it without having strong motivation, which I struggled with while having a full-time job.
The biggest technical/social challenge I have is to make something that a non-technical user could easily get right away and make something with it. I think the automation of web dev is an inevitability, and frameworks were just a historical blip on this path. The same thing is happening to web design. http://hypereum.com
Let's just say that my mistake was that I was too afraid to hurt my co-founder's feelings. If we parted ways when we should have, I might have actually gotten somewhere. (Then again, I might have gotten nowhere either!)
For somebody like me, and probably a lot of HN readers, it's _actually_ a fairly low-risk proposition because qualified, experienced software engineers are so sought after. Whatever you are doing, you will always be able to pick up a $1000-$1500 a day gig when you need to bootstrap your actual project.
My old boss has contacted me a few times to see if I want to come back- definitely do not want to.
You talk about "fear", and you talk about a "successful" startup. Here's the thing: You never know if a startup will be successful, and you just have to give it a go for the love of it, rather than any expectation of success. Don't be afraid- there are plenty of worse things in this world than a failed company.
Have learned a lot about bookkeeping.
I still have the original client 3 years later and the company grosses about $3,500 per month and I net $1,250. It pretty much runs itself, requires maybe 2 hours of work every 2 - 3 months. I spent a little over a year trying to grow it from the initial customer with no luck.
Landed a job as a full stack engineer afterwards and I really like it. I am actively looking to start a new project but I will keep my main job while doing it. I had the benefit of a wife who makes a good salary to support me during that prior adventure (Still do :) )
So if you are planning to leave a job and you have a good product that is earning you even half of the money you need, leaving your job will only increase the chances of success. Hanging on to the job while working on a product, however, is going to be much harder.
Previously I contracted as a full stack developer bringing in other developers on projects as and when the project timescales wouldn't have been achievable with just me. Running a software consultancy alone, dealing with all of the usual rigmarole of a business and performing proper client outreach was stressful, but financially and personally very rewarding (especially when you close a big deal completely on your own).
In order to get involved in my current startup, which at the outset was comprised of a designer, biz dev (CEO) and myself as CTO I had to cut off ties with my previous clients and dedicate all of my available time to the new startup. I had leveraged myself quite a bit running the previous consultancy as billings were growing year on year, so my VAT/Corporation tax accounts were generally paid out of job fees towards the end of the year rather than set aside throughout the year, leaving me in a negative cash flow position when stopping work for existing clients. Luckily there were some ongoing payments that didn't require development resource, so the small admin time required to invoice and chase up was all that was required, and enabled me to setup payment arrangements with HMRC to settle these liabilities over a period of time, out of this cashflow. Setting up these arrangements was very stressful, and I would strongly advise anyone coming into a startup to fully evaluate their financial situation before committing even if the opportunity seems huge.
Initial salaries in the new start-up were minimal (approximately 1,000 per month), and it took a solid three years, extremely long working days, almost unmanageable personal stress, and around 0.5m of funding before we got to where we are now: an above-average salary, 1m ARR, a team of 15, and strong growth projected for the coming year.
Success is a subjective term, and occasionally I have to refocus to see the light at the end of the tunnel, but with enough grit, luck, and determination, it's possible to tip the balance to a point where success is more likely than not.
The first time I was two years out of school with $12k in the bank, had a partner with a ton of experience, and a decent idea. We crunched for six months, launched, failed, and then tried to pivot. I ran out of cash a few months before the iPhone launched and had I had a longer runway we could have ported our app to the iPhone and potentially seen success.
A year ago, and nine years after that attempt, I started a small video game company with another friend (justintimegame.com). Despite my life situation being more complicated and expensive to maintain, the prior nine years' success combined with my wife's income basically lets me try and fail until I get sick of it, instead of until the money runs out. Obviously I'm aiming for success, but the massively reduced stress from barely worrying about money lets me be much more open to experimentation while also being resilient to failure.
I don't regret starting and failing my first company however. It set me up for having a higher risk threshold and an interest in startups that ended up working out quite well for me.
Expecting to get it right is the mistake we all make at some point (even when we say out loud "this might not work out", we still somehow expect it to work). Expecting failure to lead to something positive is the long game I'd urge you to wait for. It's hard to remain in a good mental state at times while you're working hard and feeling underappreciated, but that is sadly just what it's like.
I guess the "quit your job" problem only exists if you have major responsibilities, like a family, or paying debt back. Otherwise, it makes no real sense to consider it, the opportunity is too big.
2. A 6-month financial backup is usually not enough. I have heard many stories where people try going independent for 6 months, run out of money, and start looking for a job. What happens is that entrepreneurship gets into you in that time, and if you go back to a job, I can bet you'll feel even more frustrated. You need 1.5 years of backup, or 2-3 years of "frugal living" backup. I struck positive cash flow in about 5 months, but it wasn't good enough; I distinctly remember thinking "Maybe I should have done this part time." Then I struck a mini gold mine at 8 months. Having a good backup will help you persist longer. I did not have a growth strategy that worked, but I focused on working and doing the right thing. Keep it rolling.
3. The biggest worry I had when starting was about providing "enough" for my family and any emergencies for the next 1.5-3 years at any point in time. Unlike many stories, I promised myself not to wait until I went bankrupt or got into a lot of debt - nearing that is a huge red flag, at which point I would exit and take a regular job. However, taking a job is the last thing I want to do. That kept me money-oriented for a while and made me work on stuff that generated positive cash flow.
4. Would it have been possible to return to your old job? Maybe, but I would not want to. I waited too long to jump ship. In fact, my experience of multiple "good" jobs is what is keeping me away from them. Once you taste entrepreneurship, it's hard to go back.
5. I do not consider myself successful. Maybe semi-successful; some people see it as success. But I have come a long way from fearing failure. Success may or may not last long. I enjoy the process and the tremendous personal growth it results in. I ensure my financial backup now gives me 5-6 years minimum to start afresh - if I have to. Do not undervalue the role of money - it definitely makes things easier.
6. This is my favorite quote about Karma. I heard it many years back (and thought it was impractical). Especially useful when I feel I did everything right but nothing works: "Karm karo, fal ki chinta mat karo" (Do your duty without thinking about results).
P.S.: I don't know about others, but I have restricted myself to writing fewer HN comments because it takes quite a bit of time/energy. This one is an act of impulse. How do other entrepreneurs feel about this?
1. Give your employer your 2 weeks/1 month notice (depending on locale). Taking this step immediately is critical because the urgency and shock of the change will force you into being fast and practical about all the subsequent steps.
2. Create a monthly budget for yourself which assumes no income that you are not 100% sure about. So if you have interest from investments or a freelance contract that's an absolute guarantee, you can include it. For most people the income side of this budget is going to be low or nothing. Your goal with this budget is to stretch your funds out for 6-12 months. The good news is that in 2017, the principle of geoarbitrage allows you to live on virtually any budget. If you live in the Bay Area, your next step is going to be to move somewhere cheaper. On the cheapest end of the spectrum (I'll use Thailand as an example because I live here), you can get a basic apartment in the suburbs of Chiang Mai or Bangkok for $100-$500/mo, your initial arrival can be visa-free, and you'll live on delicious Thai food from a restaurant down the road for a few dollars a day. Network heavily with people in your intended destination before you even arrive, because it'll make everything 100 times easier.
3. Now create a business plan for your new entity. The business plan should include a description of the product or service which you're going to market, how you're going to market it, what you're going to charge (start high), and any and all costs of development and operation including your own time. It should include monthly profit/loss projections (you're not allowed to use these projections in your budget, they are goals, not guarantees). The most important thing about your business isn't what product or service you initially offer. Once you have assets and control you can try anything you want. Until then the goal of your business is to make enough income that your assets are growing, no matter what that entails.
If you're leaving the country as a part of this process I would advise forming an LLC and opening a bank account before you go, as these things can be difficult from overseas. You'll be very busy trying to make money and living your dream so you don't want to have to deal with paperwork.
Prepare yourself mentally to work very hard for at least the next 6 months and do whatever you need to do to make enough cash. You will become practical and decisive, and you'll learn many realities about business, such as cash flow is king, very quickly. I got my start being nickel-and-dimed by agencies in India over Elance. It sucked and it was hard and it was 100% worth it.
There are many objections to this strategy which typically stem from risk aversion, or a desire to not worry about money. I would submit that if one objects to the risk, this plan is a personal growth opportunity: it will teach them how to handle stress, plan for contingencies, and so on. If the objection is that they don't want to worry about money, I would point out that money is just a way for people to quantify your value to them, and since no man is an island, there are great personal and financial rewards to be reaped from confronting this objection and discovering what other people truly value about you.
Doing step 1 first and now is the key. If your path brings you through Bangkok let me know and we'll grab a beer! I've seen many people succeed at this and a few fail. Your odds are better than you think.
If you want the value of a currency to be stable, you need to be in a position to economically support the stable price - if lots of people want to sell, the stabilizer must be prepared to buy a lot of the currency at the stable price. If a lot of people want to buy, the stabilizer must be prepared to sell a lot of the currency at the stable price. This is potentially a very, very expensive - and perhaps impossible - undertaking.
See https://en.wikipedia.org/wiki/Monetary_policy for more.
Ultimately, value isn't something that's designed, it's the overall effect of a lot of individuals' preferences and guesses about the future.
One cool way to do this that I've been thinking about is to tie the mining reward rate to the exchange rate somehow. This has the effect of more coins being created when prices are low, and fewer created when prices are high. In theory this should stabilize the price. The problem with this method is that you need a way to measure the "price" in a decentralized way, separate from any other currency.
One way to do THAT is to aim for a certain velocity of money (https://en.wikipedia.org/wiki/Velocity_of_money). Theoretically it should correlate to inflation: if people are hoarding, the *coin will gradually start to decrease in value until more people are spending. If people are spending like crazy, the value will gradually increase. Not sure if that is a good solution, just one that I thought up.
I think a currency could be stabilized by automatically adding coins to the total when people want to hoard them, such as during economic panics, and reducing them when people want to spend freely, such as when the economy is booming.
It has to be automatic, but finding a way to do that is the big question. What and how? Use an index, maybe, but it has to be impartial and give a true view of how people feel about the economy at the time.
For the US dollar the Federal Reserve is responsible for increasing and decreasing the money supply but they have truckloads of people telling them how the economy is doing.
A cryptocurrency has potential here, since in theory you can figure out exactly how people are using it.
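The feedback loop sketched in these comments can be written down as a toy issuance rule: target a velocity of money, mint more coins when observed velocity is below target (hoarding), and mint fewer when it is above (a spending spree). Everything here is hypothetical - the constants, the `issuance_rate` name, and above all the assumption that on-chain transaction volume gives an honest velocity reading, which is exactly the unsolved measurement problem the comments point at:

```python
# Toy velocity-targeting issuance rule. All numbers are invented for
# illustration; this is not any real protocol's algorithm.

TARGET_VELOCITY = 4.0   # desired turnovers of the total supply per period
BASE_ISSUANCE = 0.01    # baseline supply growth per period (1%)
SENSITIVITY = 0.5       # how hard issuance leans against the velocity gap

def issuance_rate(tx_volume: float, supply: float) -> float:
    """Supply growth rate for the next period, based on observed velocity."""
    velocity = tx_volume / supply
    gap = (TARGET_VELOCITY - velocity) / TARGET_VELOCITY
    # Below-target velocity (hoarding) -> positive gap -> more issuance.
    rate = BASE_ISSUANCE * (1.0 + SENSITIVITY * gap)
    return max(rate, 0.0)   # this simple version never destroys coins

# Hypothetical periods of falling transaction volume (growing hoarding):
supply = 1_000_000.0
for tx_volume in (5_000_000.0, 3_000_000.0, 2_000_000.0):
    supply *= 1.0 + issuance_rate(tx_volume, supply)
```

At exactly the target velocity the rule reduces to the 1% baseline; the open question, as noted above, is obtaining a `tx_volume` figure that an attacker can't inflate with wash trading.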
Additionally, the supply has to increase as more people use it to create economic goods and services. If you don't, there won't be enough currency to go around and it will limit the potential products/services that people can create and consume.
This becomes the question: can the system be designed to respond dynamically to demand in a way that mimics the actions of a central bank, without there actually being a central bank? If the goal was to make it so that it was reasonably constant in terms of purchasing power, there's probably a number of metrics one could choose to measure that, but how to achieve it, that's above my pay grade.
Option (ii): replicate the fiat system. That is, the spendable currency is mostly credit with a future repayment obligation (and thus actual future demand to hold the currency), and the imbalances between rates of borrowing and repayment which drive the currency value up and down are kept in check by algorithmic adjustments to the "interest rates" creditors face when borrowing to meet margin calls and the "taxes" which make debtors buy even more coin to meet payment obligations. (This still isn't going to work unless credit creators are regulated, though...)
whether it's still a cryptocurrency after all that is another question....
Because there's a limited amount of it, it's deflationary in nature. If a currency becomes deflationary, then it's not going to be a currency for very long. People will hoard it instead of using it in their everyday transactions.
Cryptocurrency gets most of its value from speculation. In the case of Bitcoin, at least, that speculation is spurred by some people's belief that it has importance as a potential future currency. It doesn't matter that they're wrong. Their beliefs are still enough to boost the price of the speculative asset. Then you have this cycle where its price will crash every so often. If Bitcoin's price ever stabilizes, it will be because most people have given up on it as a currency.
Once the hype dies down, the question is whether it can do what gold did and become a valuable asset with a relatively steady price. Will there be fringe Bitcoin bugs like there are gold bugs who create enough demand for it to keep the price from going to zero? I guess we'll see.
At its core, Bitcoin is gold. And if you want an answer to why we'll never be using Bitcoin as our currency, ask yourself why gold is never again going to be our currency. It's the same answer. Without a central authority, you can't keep the price stable.
Now we could certainly construct a credit structure on top of Bitcoin. So instead of trading Bitcoin directly, we trade IOUs for Bitcoin. That's what we did with the gold standard. If they're "just as good" as Bitcoin, then it effectively increases the circulation of Bitcoin-denominated assets and can help keep prices stable.
You'd of course need some kind of trusted authority to tweak various macroeconomic variables to ensure that the value of Bitcoin (or Bitcoin IOUs) remains stable. But it's possible at least for a while. You just have to give up one of the core principles of cryptocurrency.
Eventually you're going to have so much money circulating and so little actual Bitcoin to back it up that the system is going to become too brittle and you'll have to leave the Bitcoin standard.
Hey look. You just bootstrapped another fiat currency.
Bitcoin is gold. It can be manipulated by governments just as much as gold can. And it has just as much potential to be a real currency as gold does. For a long time there, we were lucky and we were mining new gold at roughly the rate required to keep prices stable. No such luck with the cryptocurrencies. And even if you manage to hit on the right rate of currency minting today, the future looks different.
- In Our Time. Legendary BBC Radio 4 show in which four experts discuss a topic (e.g. 'enzymes', 'The Egyptian Book of the Dead', 'The Paleocene-Eocene Thermal Maximum') in terms a layperson can understand for about an hour, guided by a host who asks all the dumb questions for the listener.
- Norm Macdonald Live. Former SNL cast member spends a couple of hours interviewing guests (e.g. Billy Bob Thornton, Adam Sandler). One of the most consistently funny and off-key shows I've ever heard.
- How I Built This - Interviews successful entrepreneurs on their background, motivations, challenges, etc. in building their businesses.
- Revolutions - Podcast on some of the biggest political revolutions in history. I am going through season 2 (the American Revolution against the British Empire).
- War Stories - "Traces the evolution of warfare through the eyes of those who lived it". Season 1 focused on armoured warfare (a.k.a. tanks). Waiting on Season 2.
- Science Vs - Researches fads/opinions (organic food, meditation, ghosts, etc.) to figure out if they are based on science.
http://tmsidk.com/
http://www.npr.org/podcasts/510313/how-i-built-this
http://www.revolutionspodcast.com/
https://angrystaffofficer.com/war-stories-podcast/
https://gimletmedia.com/science-vs/
Both are great in different ways.
Linux Action News (Jupiter Broadcasting) : 30 min overview of news from the Linux world.
No Agenda: For a healthy news diet.
TWIT: Loving the over-friendliness and forced extravertedness less and less, and missing Dvorak, but still a nice tech overview.
Story Grid: (From time to time) In depth analysis of books from the perspective of a writer and editor. Very insightful.
http://linuxactionnews.com/
http://www.jupiterbroadcasting.com/115911/halls-of-endless-l...
http://www.noagendashow.com/
Exponent -- Ben Thompson of Stratechery -- very insightful commentary on business and technology.
NPR Planet Money -- Economics is a second love of mine.
Startup -- by Gimlet Media -- Stories about the startup culture
Science Vs -- Researches fads and compares them to the actual science.
Acquired -- discusses technology acquisitions
Internet History Podcast -- just what it says it is.
Freakonomics -- Because it's Freakonomics, should be required listening for anyone who wants to talk about economics.
Political Gabfest -- definitely liberal leaning political commentary.
Career Tools/Manager Tools -- I suggest these two podcasts to anyone who is working. Binge on them from the beginning and skip the ones that aren't relevant to you.
The Talk Show w/John Gruber -- required listening for Apple nerds.
Accidental Tech Podcast -- same as above.
Slate Money -- Did I mention I'm an economics nerd?
Conversations with Tyler, for the same reason as EconTalk. https://itunes.apple.com/us/podcast/conversations-with-tyler...
Bodega Boyz, because nothing makes me laugh like Desus and Mero. https://soundcloud.com/bodega-sushi
It's great if you're interested in continuous delivery, startups, fundraising, product development, best practices etc. from two founders who have been and continue to be successful at their roles.
These are for listening pleasure. CBC & BBC both have comedy of the week podcasts. Because News on CBC is hilarious. Drama of the Week on BBC is good though sometimes off the wall.
I could listen to Larry Kudlow for business reasons but lately I cling tight to my comedies. I need the escape.
Many of my other favorites have already been mentioned, but I also listen to:
Twenty Thousand Hertz ("stories behind the world's most recognizable and interesting sounds") https://www.20k.org
and have started listening to this new NPR podcast:
Wow in the World ("a new way for families to connect, look up and discover the wonders in the world around them. Every episode, hosts Mindy and Guy guide curious kids and their grown-ups away from their screens and on a journey. Through a combination of careful scientific research and fun, we'll go inside our brains, out into space, and deep into the coolest new stories in science and technology") http://www.npr.org/podcasts/510321/wow-in-the-world
Planet Money -- my favorite
Marketplace with Kai Ryssdal
Six Feats Under
My wife is more into Sunday School Dropouts than I am, but I listen to it occasionally. She also listens to some other history podcasts but I don't recall what they are.
Internet History Podcast:
http://exponent.fm/ Exponent by Ben Thompson (of Stratechery) and James Allworth is great for analysis of big tech issues and news.
https://trackchanges.postlight.com/ Track Changes by Paul Ford and Rich Ziade can be quite light, but they have some interesting guests and have lived on the web since it started.
Funny stuff - if you like Football (Soccer) then The Football Ramble is essential. http://www.thefootballramble.com
For British nonsense humour, two of them have just started a spin-off. Humour is subjective though, so YMMV and don't judge me! http://stakhanovindustries.com/lukeandpeteshow
(edited out all of my beautifully crafted markdown links because I forgot HN can't do that)
1. "How I Built This" with Guy Raz https://www.npr.org/rss/podcast.php?id=510313
2. "Startup" by Gimlet media http://feeds.gimletmedia.com/hearstartup
3. Stanford's DFJ ETL: https://web.stanford.edu/group/edcorner/uploads/podcast/Educ...
4. "This week in startups" http://feeds.feedburner.com/twist-audio
Stuff to Blow Your Mind -- has some great in-depth analyses of science and more
TechStuff -- loved the series on the whole history of Sony, Nintendo, Samsung...
The Bike Shed -- two very technical guys, very funny
The Changelog -- great interviews
Software Engineering Daily -- a guest discussing a technical topic every day.
It is short and quick, and it's very interesting to see people's creativity in generating some side income.
The Economist (Paid for but worth every cent, 8 hours of news)
The Economist asks
No such thing as a fish
All songs considered
- Reply All
- Planet Money
Reply All is about the internet and planet money is about money, but in both cases it's as much about people and the interesting things that we do.
Hanselminutes is Scott Hanselman interviewing interesting guests about aspects of software development. It has a laid back and friendly pace. Scott is always well prepared and a very nice host.
Risky Business, Pod Save America, Lawfare Podcast, Chat 10 Looks 3, The Dollop, Bombshell (War on the Rocks), FiveThirtyEight Politics
The Pitch - Shark Tank on a podcast essentially. Somewhat deeper. The more recent episodes are way better than the first ones so just skip to the end.
Waking Up - so refreshing to hear someone as thoughtful as Sam Harris on a regular basis. I love that he is so calmly rational that he can have productive conversations with everyone from left to right, atheist to Muslim.
If I had to recommend one from the list, it would be the TED Radio Hour.
- 99% Invisible
- Hanselminutes
NPR's podcasts (and How I Built This especially) are of incredible quality - they even write music for each episode.
Why? Because I love horse racing and it is funny.
Security Now - Steve Gibson basically reviewing the week in software and hardware security.
Rationally Speaking - Intellectual stuff
No such thing as a fish - fun trivia from the people behind the QI tv show.
- Obsessed with Joseph Scrimshaw
Indie Hackers - Insightful 1:1 interviews with founders of smaller 'lifestyle' businesses. https://www.indiehackers.com/podcast
In Our Time - Wonderful history podcast from the BBC http://www.bbc.co.uk/programmes/b006qykl
Lovett or Leave It - Insightful weekly political podcast from Jon Lovett, a former speechwriter for Barack Obama who was once called "the funniest man in Washington." https://getcrookedmedia.com/lovett-or-leave-it-6077c7aca95c
The Perceptive Photographer - 10-15 minute podcast released every Monday from my favorite photography teacher. Insightful and brief snippets about a variety of topics of interest to fine art photographers. https://www.danieljgregory.com/perceptivephotographerpodcast...
Pod Save America - Twice-weekly podcast from four guys who used to be in the Obama White House. Super-insightful political commentary. Lots of coarse language. https://getcrookedmedia.com/here-have-a-podcast-78ee56b5a323
Pod Save the People - DeRay McKesson's weekly podcast on social justice and activism. Even if you don't know DeRay's name, you'd probably recognize him based on his blue Patagonia vest. https://getcrookedmedia.com/pod-save-the-people-56bc42af53d
S-Town - A co-production from Serial and This American Life. It starts off as a murder mystery and then goes off into left field. A beautiful, sort of American Gothic look at our country. The ending left me feeling a bit...empty maybe? Still, an incredibly worthwhile way to spend seven hours. https://stownpodcast.org
2. Acquired - Podcast about Tech Acquisitions + IPOs
3. Recode Decode by Kara Swisher
The West Wing Weekly
The Adventure Zone
This American Life
In Our Time with Melvyn Bragg
The Tobolowsky Files
Coffee Break German
Bill Burr - Monday Morning Podcast
Joe Rogan - PowerfulJRE (not every episode)
Why Oh Why
All Songs Considered
The Dinner Party Download
The Splendid Table
You are not so Smart
AI: Talking Machines
Misc: Waking Up with Sam Harris, a16z
Throws out a new perspective on what motivates people.
It is really boosting my understanding of the French language, and giving me more confidence to speak it.
It's a simple story that's easy to follow, especially having read the book in English and seen the film a couple of times. And really, how lost can you get? If you can't follow a paragraph or two, chances are he'll still be stuck on Mars for a while and you won't have missed much.
It's written in an informal, conversational style, using language that real people might use. I find myself reading a phrase that translates back to a saying I've used in English. Ah, looks like they use that in French too. I'll add it to the repertoire.
I can pick it up after a while off and quickly get back into it without explanation. Hmm... this looks like the part where the guy is stuck on Mars...
And as a bonus, it's kinda hard work to read in a foreign language, so if I pick it up in bed it's guaranteed to put me to sleep inside of half an hour.
Engineering a Safer World, https://mitpress.mit.edu/books/engineering-safer-world
Software Specification Methods, https://www.amazon.com/Software-Specification-Methods-Henri-... (also available through Safari Books Online, at least at my office)
Read most of the third one this week, a useful comparison of the various approaches. My objective is to understand how to better produce formal (or more formal) specifications. Either for whole systems or just for significant or critical portions of them.
It is a wonderfully written memoir that perfectly details the grad school experience and also includes some helpful notes from the author. I'll be graduating next year (bachelor's in CS), and my dad asked me if I wanted to enter grad school. The book sure did add some fuel to the fire.
Here are the books I've read and want to read: https://booknshelf.com/@tigran/shelves
Just started this book last night. The story begins as the founder of Clif Bar walks away from selling his company and a $40M personal payout. Big idea so far: your business is an ultimate form of self-expression. > https://www.goodreads.com/book/show/29691.Raising_the_Bar
Here's my (unfinished) reviews of the books I've read so far this year: https://github.com/bcbrown/bookreviews/tree/master/2017. At the end of the year I'll flesh them out a little more.
It's a history of where all this - startup culture, silicon valley, computers, internet, hackers - came from. Should be essential reading for anyone working in IT.
Highly recommended for anyone interested in an "outside the box" perspective on mental health and society at large.
Harold Coyle, Team Yankee - WW3 in Europe in the 1980s from the perspective of a tank company commander. Poorly written, in my opinion, but the accurate (or so I hope) descriptions of the military tactics and equipment almost make up for it.
James Gleick, The Information: A History, A Theory, A Flood - excellent book about the history of information.
It goes into detail about the Mount Everest disaster in the 90s.
The Dark Tower II: The Drawing of the Three
Wanted to read the first one before the movie came out, now I am hooked...
Seveneves, Neal Stephenson
Astrophysics for people in a hurry, Neil De Grasse Tyson
First and foremost, an investor wants to know whether your algorithm works. Have you backtested it with price history, and then what are the performance metrics?
And if it works, then the obvious question is: why do you need to sell your service?
The other thing that I'd worry about as an investor is that oftentimes value stocks are cheap for a reason. It's one of the last places I'd want to algo trade.
* Use metrics to demonstrate capability.
* Hit real investors and get real feedback.
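For what it's worth, the "performance metrics" an investor will ask about can all be computed from the backtest's daily return series. A minimal illustrative sketch (my own simplifications, not any standard fund methodology: risk-free rate assumed zero, 252 trading days per year):

```python
import math

def performance_metrics(daily_returns, periods_per_year=252):
    """Summarize a backtest's daily returns with the metrics
    investors tend to ask about first."""
    n = len(daily_returns)
    # Equity curve: growth of $1 compounded through the returns
    equity = [1.0]
    for r in daily_returns:
        equity.append(equity[-1] * (1.0 + r))
    total_return = equity[-1] - 1.0
    # Annualized (geometric) return
    cagr = equity[-1] ** (periods_per_year / n) - 1.0
    # Sharpe ratio, risk-free rate assumed 0
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    sharpe = 0.0 if var == 0 else mean / math.sqrt(var) * math.sqrt(periods_per_year)
    # Maximum drawdown: worst peak-to-trough fall of the equity curve
    peak, max_dd = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        max_dd = max(max_dd, (peak - v) / peak)
    return {"total_return": total_return, "cagr": cagr,
            "sharpe": sharpe, "max_drawdown": max_dd}
```

If you can't produce at least these four numbers from real price history, most investors won't take the pitch further.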
1) Large order sent to market
2) Exchanges with a serious lack of liquidity
3) Stop loss orders making things worse.
Everyone has their personal pet peeves; mine is stop loss orders. They're one of the three things that amateurs tend to use without any understanding of markets. The other two are use of margin, which probably doesn't need any explanation, and trading currencies/currency pairs.
In today's markets, stop loss orders are effectively market orders. 99.99% of the time, only people who don't know what they are doing use them.
And circuit breakers.
You can even program JScript on the server side with ASP, or execute it standalone with ActiveScript, and even control native GUIs, like customizing your folders; the browser can be morphed into the file explorer. You can make apps with a few KB of JScript, unlike a 55 MB Electron install bundle.
The Windows help files (CHM) are a thousand times better than the macOS counterparts and Linux man files. CHM was the de facto ebook format back then, and it works really well, with features like indexable topics and full-text search. We now have to use devdocs.io or Dash.
Yes, it has its quirks and warts, but it was way ahead of its time.
3D Printing - This is going to be the main way to manufacture things in the future. There's a lab that is 3D printing houses with concrete, which makes me terrified for home values going forward. It will likely shift all the value into the land; the house will just become something you tear down and reprint every 10 years.
CRISPR - s/shitty gene sequence/perfect gene sequence/g That's insane. It's like an anti-virus product for the body (irony intended). We're going to live a very long time and be practically disease free pretty soon. I'm planning on living until 150 (27 now). It's placing a big bet on medical science, but I feel like we're on the edge of some huge things.
Neuralink - Develops high bandwidth and safe brain-machine interfaces. (https://neuralink.com/)
Magic Leap - Mixed Reality (https://magicleap.com)
Crispr-Cas9 - A unique technology that enables geneticists and medical researchers to edit parts of the genome by removing, adding or altering sections of the DNA sequence. (https://en.wikipedia.org/wiki/CRISPR#Cas9)
This is a great question. The acceleration of technology has made it important for entrepreneurs to look further ahead than ever when deciding where they want to make their impact in the world. Tomorrow's successful leaders in business will be the ones that peered into the most obscure places of the future to find its problems and its solutions.
What was 20 years ahead of its time then? What would you have looked at and thought "That'll be massive in 20 years"?
About the only thing I can think of is VR, which Sega tried to launch in the early 90s, and which is only now selling over a million units.
I want to believe.
* No accounts, no passwords, just secret keys (capabilities)
* Instead of messy and complex role-based tables, capabilities always know exactly what they are capable of doing
* No more confused deputies
* Fine-grained trust
It's a clock. A physical clock. Designed and built to run, accurately, for 10,000 years without human intervention.
People can do it, but they prefer living the way they do, which is what is causing the problems: knowing in principle that they should change their behavior, but not actually doing so.
Miami flooding more and more is not enough of a burning platform yet. Nature will provide it if we don't choose to change ourselves.
His Digital Monetary Trust: https://en.wikipedia.org/wiki/Digital_Monetary_Trust
The End of Ordinary Money: https://www.memresearch.org/grabbe/money1.htm
Cyc, an artificial intelligence project that attempts to assemble a comprehensive ontology and knowledge base of everyday common-sense knowledge, with the goal of enabling AI applications to perform human-like reasoning.
The project was started in 1984 by Douglas Lenat at MCC and is developed by the Cycorp company. Parts of the project are released as OpenCyc, which provides an API, RDF endpoint, and data dump under an open source license.
Prolog, backward chaining, forward chaining, opportunistic reasoning.
1: VR, self-driving vehicles, nuclear fusion, artificial photosynthesis, quantum computers, robots that can manipulate things like humans can, wave energy harvesting, colonizing Mars, curing cancer, curing Alzheimer's disease.
2: no idea!
3: drones, deepmind, blue led, electric sports cars, flyboards, voice activated assistants, smart wearables...
Cryptocurrencies. 3d printers.
I am being sarcastic. But it's very hard to see, today, any technology that could make my life significantly better (other than fixing climate change).
I could see this happening within 20 years, but not in the confines of the current project.
Does that count?
I believe controlling a massive number of nodes in the network via infection techniques like the ones WannaCry used would open the door for many actual and hypothetical attacks. Please see the Bitcoin Wiki page titled "Weaknesses" for more details about attacks involving the control of network resources.
More realistically, a simpler attack would be to go for control of the wallets, if you have that kind of access to the infected hosts. However, if an actor had an interest in devaluing Bitcoin (to buy after a crash and sell after recovery, perhaps) or just destabilizing users' trust and destroying it (states?), then there could be a lot of profit in it, I believe. Bitcoin has many competitors and enemies; is this something we should worry about?
Once you have the offer letter in hand, CONSULT A LAWYER on your own dime to prepare the following. (Even if the employer's lawyers prepared the agreement and modified Clerky agreements, you would have to get your own lawyer to review any agreements before you sign them.)
Step 1: Write down your verbal agreement in an email. For example: "Thank you for the offer letter. I would like to take this opportunity to document in writing what we discussed over the phone. Specifically:
-- As part of my employment agreement, you will provide me with a blue pony within 2 business days of my landing in SFO.
-- Within 90 days, you will also provide me with a visa and bear all costs for the same.
If this is also your understanding of our discussion, please send me an email back specifically stating that."
Step 2:
-- Strike out any clause in the standard agreement that says that there shall be no other verbal or written agreements (your lawyer can help identify these sneaky clauses). Initial all pages. Sign it.
-- Attach and send it back to him/her.
-- Ask him to make 2 copies of that set, initial all pages AND any struck-out clauses, and send one copy back to you.
This way he/she gets to use Clerky and you get good written documentation of any agreement.
If he refuses to do that, don't take the job.
NOTE OF CAUTION: It's not up to your employer to give you a green card or a visa; the US Government has to give you one. Your employer will have to pay "thousands of dollars to hire a lawyer" for a visa or green card anyway.
> he has to pay thousands of dollars to hire a lawyer to include the terms
This is utter bullshit. Call 5-10 local lawyers, tell them what is needed, and find out how much it would cost. Present the employer with the names of those lawyers and their prices.
> Is my request unreasonable?
ABSOLUTELY NOT!!! Always get everything in writing, no matter how much you trust someone. For all you know, the person who made you the promises will be at a different company or dead in a few years.
It's completely normal and reasonable to get something in writing, and refusing to provide it is either naive or malicious.
After running my own email server for 15 years I gave up a couple of years ago and paid for someone else to solve the nightmare of dealing with the big email gatekeepers.
The problem there is that as they moved out of beta, you need to pay them $8/mo to get catch-all email and updates.
SMTP isn't a secure transport.
Having your email stored on someone else's computers (ie the cloud) is not necessarily 'secure'.
Having a well-constructed and well-managed host somewhere you physically control seems to me the most 'secure' arrangement, which is what I have always had. Currently for the cost of a Raspberry Pi and occasional 'apt-get update' etc.
That said, there are some things you should be aware of when running a mail server:
1. You need to make sure that the IP address and domain name that SMTP is bound to is not on a blacklist. You also need to consider the trustworthiness of your host because you could very well get caught in the cross-fire if one of their other customers gets them range banned. Certain cloud providers that make it very easy to change IP will more than likely have all of their addresses on some blacklist or another.
2. You also need to make sure you have matching forward (A record) and reverse (PTR record) DNS records for that IP address. This is called Forward-confirmed reverse DNS, aka FCrDNS. Many mail servers will reject email from servers that do not have or have mismatching records for FCrDNS.
3. You must set up SPF and DKIM. Many mail servers will either reject mail from servers without these, or at least weight heavily against it.
4. You probably want to make sure TLS is set up properly, otherwise your mail is going to travel the internet in plaintext.
5. The IP address you're sending from is going to start off with no reputation. The volume, type of mail, and how many people mark your mail as spam is going to decide whether other mail servers start filtering you or not. You may have no problems here. If you're unlucky, you will need to try to reach out to whichever major mail provider is filtering your mail. Many of them have a ticketing system for this, but you'll be at the mercy of whomever is working that ticket. There are also various whitelists that might be worth trying your server on. They're usually very selective and will probably reject your request.
6. You really, really need to make sure you've got your policies set up correctly because you do not want to accidentally set up an open relay that will be used to spam other people.
7. Greylisting is a very, very effective means of spam filtering. The downside is that mail from new servers won't be delivered instantaneously and will instead be delivered whenever their mail server tries to deliver it again. Other than that, most spam is malformed in some way, so some basic DNS checks will filter a ton of it. There are also free RBL and DNSBL lists that will pick up the slack.
http://www.iredmail.org/
https://mailinabox.email/
https://mxtoolbox.com/blacklists.aspx
https://en.wikipedia.org/wiki/Open_mail_relay
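The FCrDNS requirement in point 2 above is easy to sanity-check yourself. A rough sketch with the DNS lookups passed in as plain functions so it runs offline; in real use you would plug in Python's `socket.gethostbyaddr` and `socket.gethostbyname_ex`:

```python
def check_fcrdns(ip, ptr_lookup, a_lookup):
    """Forward-confirmed reverse DNS: the PTR record for the IP must
    name a host whose forward (A) records include that same IP."""
    try:
        hostname = ptr_lookup(ip)         # reverse: IP -> host name
        forward_ips = a_lookup(hostname)  # forward: host name -> IPs
    except OSError:
        return False                      # missing records fail the check
    return ip in forward_ips

# Online, you would wire in the stdlib resolvers, e.g.:
#   import socket
#   check_fcrdns("203.0.113.25",
#                lambda ip: socket.gethostbyaddr(ip)[0],
#                lambda name: socket.gethostbyname_ex(name)[2])
```

Many receiving servers run essentially this test before accepting mail, which is why a mismatched or missing PTR record gets you rejected.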
I'd be interested in looking into any examples of searches where the results aren't good enough or where it seems to have gotten worse recently.
As far as I know there haven't been any changes over the past few weeks that would have made things worse.
It's not an easy or memorable name at the moment, and branding matters.
I used DDG for a while when it was introduced, but returned to Google since the results were not as good.
But recently I felt Google's results had gotten a lot worse and gave DDG another try.
Big difference! Like Google vs. Yahoo back in the day.
Now DDG is my default search.
I guess my estimated worth to Google must be fairly low because I don't click on many ads and I often use a work VPN.
I'm in the UK and have noticed I often seem to be getting US-centric results, and have to fall back to Google more often.
Edit: ddg has been my default for 2 years.
I was hoping to build an extension for DDG a few months back, but things seemed to have changed in the forum.
This could explain why we are seeing changes.
phireal@pc ~$ ls -1
Box/       - work nextcloud
Cloud/     - personal nextcloud
Code/      - source code I'm working on
Data@      - data sources (I'm a scientist)
Desktop/   - ...
Documents/ - anything I've written (presentations, papers, reports)
Local@     - symlink to my internal spinning hard drive and SSD
Maildir/   - mutt Mail directory
Models/    - I do hydrodynamic modelling, so this is where all that lives
Remote/    - sshfs mounts, mostly
Scratch/   - space for stuff I don't need to keep
Software/  - installed software (models, utilities etc.)
phireal@server store$ ls -1
archive    - archived backups of old machines
audiobooks - audio books
bin        - scripts, binaries, programs I've written/used
books      - eBooks
docs       - docs (personal, mostly)
films      - films
kids       - kids films
misc       - mostly old images I keep but for no particular reason
music      - music
pictures   - photos, organised YYYY/MM-$month/YYYY-MM-DD
radio      - podcasts and BBC radio episodes
src        - source code for things I use
tmp        - stuff that can be deleted and probably should
tv_shows   - TV episodes, organised show/series #
urbackup   - UrBackup storage directory
web        - backups of websites
work       - stuff related to work (software, data, outputs etc.)
Currently reconstructing the entire thing to production spec, as an AWS AMI, perhaps later polished into a personal knowledge-base SaaS where the cleaned and sorted content is publicly accessible with a REST/CMIS API.
This project has single-handedly eaten almost a third of my life.
- bin :: quick place to put simple scripts and have available everywhere
- build :: download projects for inspection and building, not for actively working on them
- work-for :: where to put all projects; all project folders are available to me in zsh like ~proj-1/ so getting to them is quick despite depth.
  - me :: private projects for my use only
    - proj-1
  - all :: open source
    - proj-2
  - client :: for clients
    - client-1
      - proj-3
- org :: org mode files
  - diary :: notes relating to the day
    - 2017-06-21.org :: navigated with binding `C-c d` defaulting to today
  - work-for :: notes for project with directory structure reflecting that of ~/work-for
    - client
      - client-1
        - proj-3.org
- know :: things to learn from: txt's, books, papers, and other interesting documents
- mail :: maildirs for each account
  - addr-1
- downloads :: random downloads from the internet
- media :: entertainment
  - music
  - vids
  - pics
  - wallpaper
- t :: for random ad-hoc tests requiring directories/files; e.g. trying things with git
- repo :: where to put bare git repositories for private projects (i.e. ~work-for/me/)
- .password-store :: (for `pass` password manager)
  - type-1 :: ssh, web, mail (for smtp and imap), etc.
    - host-1 :: news.ycombinator.com, etc.
      - account-1 :: jol, jolmg, etc.
.
Desktop
Downloads
Google Drive  // my de facto Documents folder
  legal
  library     // ebooks and anything else I read
  ...
Downloads
Sandbox       // all my repositories or software projects go here
Porn          // useful when I was a teen, now just contains a text file with lyrics to "Never Gonna Give You Up"
- music: Musicbrainz Picard to get the metadata right. I've been favoring RPis running mpd as a front-end to my music lately.
- movies/TV: MediaElch + Kodi
I don't have a good solution for managing pictures and personal videos that doesn't involve handing all of it to some awful, spying "cloud" service. Frankly most of this stuff is sitting in Dropbox (last few years' worth) or, for older files, in a bunch of scattered "files/old_desktop_hd_3_backup/desktop/photos"-type directories waiting for my wife and I to go through them and do something with them. Which is increasingly less likely to happen; sometimes I think the natural limitations of physical media were a kind of blessing, since one was liberated from the possibility of recording and retaining so much. Without some kind of automatic facial recognition and tagging, and saving of the results in some future-proof way (ideally in the photos/videos themselves), this project is likely doomed.
My primary unresolved problem is finding some sort of way to preserve integrity and provide multi-site backup that doesn't waste a ton of my time and money on set-up and maintenance. When private networks finally land in IPFS I might look at that, though I think I'll have to add a lot of tooling on top to make things automatic and allow additions/modifications without constant manual intervention, especially to collections (adding one thing at a time, all separately, comes with its own problems, like having to enumerate all of those hashes when you want something to access a category of things, like, say, all your pictures). Probably I'll have to add an out-of-band indexing system of some sort, likely over HTTP for simplicity/accessibility. For now I'm just embedding a hash (CRC32 for length reasons and because I mostly need to protect against bit-rot, not deliberate tampering) at the end of filenames, which is, shockingly, still the best cross-platform way to assert a content's identity, and synchronizing backups with rsync. ZFS is great and all, but it doesn't preserve useful hash info if a copy of a file is on a non-ZFS filesystem, plus I need basically zero of its features aside from periodically checking file integrity.
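The embed-a-CRC32-in-the-filename trick described above can be sketched in a few lines. The exact tag format here (a ".crc32-XXXXXXXX" chunk before the extension) is my assumption for illustration, not necessarily the parent commenter's scheme:

```python
import os
import zlib

def crc32_hex(data: bytes) -> str:
    """CRC32 of the content as 8 lowercase hex digits."""
    return format(zlib.crc32(data) & 0xFFFFFFFF, "08x")

def tag_filename(path: str, data: bytes) -> str:
    """Embed the content's CRC32 in the file name,
    e.g. photo.jpg -> photo.crc32-1a2b3c4d.jpg"""
    root, ext = os.path.splitext(path)
    return f"{root}.crc32-{crc32_hex(data)}{ext}"

def verify_tagged(path: str, data: bytes) -> bool:
    """Re-hash the content and compare it to the embedded tag;
    a mismatch means the file rotted since it was tagged."""
    root, _ = os.path.splitext(path)
    if ".crc32-" not in root:
        return False  # untagged: nothing to verify against
    expected = root.rsplit(".crc32-", 1)[1]
    return crc32_hex(data) == expected
```

As the parent notes, CRC32 only catches accidental corruption; anyone who can tamper with the file can recompute the tag, so this is a bit-rot detector, not a security measure.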
Other things are better sorted by category or topic. For tools or programming languages I'm researching I might have a directory with items "01_some-language", "02_setup", "10_type-system", "20_ecosystem", etc.
~/dev for any personal project work
~/$COMPANY for any professional work I do for $COMPANY
~/teaching for teaching stuff
~/research for academic research (it's a big mess unfortunately)
~/icl for school related projects (where "icl" is Imperial College London)
For my PDFs I use Mendeley to organize them and have them available everywhere along with my annotations.
I store my books in iBooks and on Google Drive in a scheme roughly like: /books/$topic/$subtopic
Organizing your files usually just takes commitment; move files out of ~/Downloads as soon as you can :-)
~/$MAJOR_TOPIC
|
|--- ./$MORE_SPECIFIC
|    |--- ./$MORE_SPECIFIC
|    |--- ./general-file.type
|    |    ./general-file.type
|
|--- ./$MORE_SPECIFIC
|    |--- ./general-file.type
As you find yourself collecting more general files under a directory that can be logically grouped, create a new directory and move them to it.
Also keep all your directories in the same naming convention (idk maybe I'm just OCD)
Web sites are:

sitename/
  info - login data for site, domains, etc.
  site - what gets pushed to the server
  work - other stuff not pushed to server
As for all "working" documents, they're local to my machine under a documents or project folder. The documents folder is synced to all my devices and looks the same everywhere with a similar organization structure as my external drive. My projects folder is only local to my machine, which is a portable, and contains all the documents needed for that project.
TL;DR Shallow folder structure with dates at the beginning of files essentially.
Outside of that scope, my files reside randomly somewhere in the ~/Documents folder (I use a mac) and I rely on spotlight to find the item I need. It's not super great but is workable often enough.
It's not a silly question!
edit: I've been trying to find a multi-disk solution and haven't had much success with an easy enough to use tool. I use git-annex for this and it helps to some extent. I've also tried Camlistore, which is promising, but has a long way to go.
- /x/src contains all Git repos that are pushed somewhere. Structure is the same as wanted by Go (i.e., GOPATH=/x/). I have a helper script and accompanying shell function `cg` (cd to git repo) where I give a Git repo URL and it puts me in the repo directory below /x/src, possibly cloning the repo from that URL if I don't have it locally yet.
$ pwd
/home/username
$ cg gh:foo/bar  # understands Git URL aliases, too
$ pwd
/x/src/github.com/foo/bar
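The `cg` helper itself isn't shown; here is a rough sketch of the URL-to-path mapping it implies (minus the clone-on-miss step), with the `gh:` alias table as an assumed example:

```python
def repo_dir(url, root="/x/src", aliases=None):
    """Map a Git repo URL (or a short alias like gh:) to a checkout
    directory under root, mirroring Go's GOPATH/src layout."""
    if aliases is None:
        aliases = {"gh:": "github.com/"}  # assumed alias table
    for alias, expansion in aliases.items():
        if url.startswith(alias):
            url = expansion + url[len(alias):]
            break
    else:
        # Full URL: strip the scheme and any trailing .git
        for scheme in ("https://", "http://", "git://"):
            if url.startswith(scheme):
                url = url[len(scheme):]
                break
        if url.endswith(".git"):
            url = url[:-len(".git")]
    return f"{root}/{url}"
```

The real script would then `git clone` into that directory if it doesn't exist yet, and `cd` into it either way.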
- /x/bin is $GOBIN, i.e. where `go install` puts things, and thus also in my PATH. Similar role to /usr/local/bin, but user-writable.
- /x/steam has my Steam library.
- /x/build is a location where CMake can put build artifacts when it does an out-of-source build. It mimics the structure of the filesystem, but with /x/build prefixed. For example, if I have a source tree that uses CMake checked out at /home/username/foo/bar, then the build directory will be at /x/build/home/username/foo/bar. I have a `cd` hook that sets $B to the build directory for $PWD, and $S to the source directory for $PWD whenever I change directories, so I can flip between source and build directory with `cd $B` and `cd $S`.
- /x/scratch contains random junk that programs expect to be in my $HOME, but which I don't want to backup. For example, many programs use ~/.cache, but I don't want to backup that, so ~/.cache is a symlink to the directory /x/scratch/.cache here.
Non-Golang code goes to ~/code, sometimes ~/code/company-name, but I also have a couple of ad hoc codebases spread around in different places on my filesystem.
So it is a bit disorganized. However last few years I have rarely ever needed to cd outside of ~/code/go.
Some legacy codebases I worked on (and still need to contribute to from time to time) are in the most random places, as it took some effort and time to configure the local environment of some of these beasts to work properly (and they depend on stuff like Apache vhosts), so I am too afraid to move those to ~/code as I might break my local environment.
So, no organization (the OCD part of me hates this), but I always find my files in an instant, no matter where I left them.
Filename preserved, ordered by date or grouped in arbitrary functional folders
YYYY.AlbumName (keeps albums in date order)
  AlbumName Track# Title.mp3 (truncates sensibly on a car stereo)
YYYY-MM-DD.Event Description (DD is optional)
scripts - reusable across clients
source code documents
I use Beyond Compare as my primary file manager at home and work. Folder comparison is the easiest way to know if a file copy fully completed. Multi-threaded move/copy is nice too.
This is a directory that can be emptied at any moment without the fear of losing anything important, and which helps me keep the rest of my fs clean. Basically `/tmp` for the user.
For photos folders per device/year/month.
For Office documents, pre-pending the date using the ISO date format (2017-06-21 or 170621) works great for sharing with others over various channels like mail/chat/file server/cloud/etc.
I also recommend calibre for e-books, but I never got to the "document store" stage that I think some people have.
- Language/technology-specific research case
Edit: Also you might want to make a small title edit s/files/ebooks unless you are inquiring about other types of files as well.
When reading for pleasure I typically read paper, try to limit the screen time if possible.
~/github - just cloned repos
~/fork - everything forked
~/pdf - all science papers
'pjt' is my tag for projects
'sfw' is my tag for software and computer science
'doToo' is the name of this software project
'cmm' is my tag for interpersonal communications
Projects (tagged with 'pjt') is one of my five broad categories of files, with the others being Personal ('prs'), Recreation ('rcn'), Study ('sdg'), and Work ('wrk'). All files fall into one of these categories, and thus all file names begin with one of the five tags mentioned. After that tag, I use the '>' symbol to indicate the following tag(s) is/are subcategories.
Any tags other than those for the main categories might follow, as 'sfw' did in the example above. This same tag 'sfw' is also used for files in the Personal category, for files related to software that I use personally--for example:
Here, NameMangler is the name of the Mac application I use to batch-modify file names when I'm applying tags to new files. '@nts' is my tag for files containing notes. I also have many files whose names begin with 'sdg>sfw', and these are computer science or programming-related materials that I'm studying or studied previously and wanted to archive.
A weakness of hierarchical organization is that it makes it difficult to handle files that could be reasonably placed in two or more positions in the hierarchy. I handle this scenario through the use of tag suffixes. These are just '|'-delimited lists of tags that do not appear in the prefix identifier, but that are still necessary to convey the content of the file adequately. So for example, say I have a PDF of George Orwell's essay "Politics and the English Language":
The suffix of tags begins with '=' to separate it from the rest of the file name. A couple of other features are shown in this file name. I use '_' to separate the prefix tags from the original name of the file ('orwell9' in this case) if it came from an outside source. I'm an English teacher and use this essay in class, and that's why the tags 'wrk' for Work and 'tfl' for 'Teaching English as a Foreign Language' appear. 'wrt' is my tag for 'writing', since Orwell's essay is also about writing. The tag 'georgeOrwell' is not strictly necessary since searching for "George Orwell" will pick up the name in the text content of the PDF, but I still like to add a tag to signal that the file is related to a person or subject that I'm particularly interested in. Adding a camel-cased tag like this also has the advantage that I can specifically search for the tag while excluding files that happen to contain the words 'George' and 'Orwell' without being particularly about or by him.
That last file name example also illustrates what I find to be a big advantage of this system: it reduces some of the mental overhead of classifying the file. I could have called the file 'wrk>tfl>politicsAndTheEnglishLanguage=sdg|wrt|lng|georgeOrwell', but instead of having to think about whether it should go in the "English teaching work-related stuff" slot or the "stuff about language that I can learn about" slot, I can just choose one more or less arbitrarily, and then add the tags that would have made up the tag prefix that I didn't choose as a suffix.
There's actually a lot more to the system, but those are the basics. Hope you find it helpful in some way.
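The naming scheme described above is mechanical enough to parse. A minimal sketch under my own reading of the conventions ('>'-separated prefix tags, an optional '_' before an original outside-source name, an optional '='-introduced, '|'-delimited suffix list); it ignores file extensions, and the example names below are illustrative, not the commenter's actual files:

```python
def parse_tagged_name(name: str):
    """Split a name like 'wrk>tfl>essay_orig9=sdg|wrt' into
    prefix tags, an optional original name, and suffix tags."""
    # Peel off the '='-introduced, '|'-delimited suffix tags, if any
    if "=" in name:
        head, suffix = name.split("=", 1)
        suffix_tags = suffix.split("|")
    else:
        head, suffix_tags = name, []
    # '_' separates the tag prefix from an outside-source original name
    if "_" in head:
        head, original = head.split("_", 1)
    else:
        original = None
    return {"prefix_tags": head.split(">"),
            "original": original,
            "suffix_tags": suffix_tags}
```

A parser like this is what makes the scheme future-proof: since everything lives in the file name itself, any tool that can read strings can index, search, or re-tag the collection without a separate database.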
in my main collection of files for my startup, computing, applied math, etc.
All those files are well enough organized.
Here's how I do it and how I do related work more generally (I've used the techniques for years, and they are all well tested).
(1) Principle 1: For the relevant file names, information, indices, pointers, abstracts, keywords, etc., to the greatest extent possible, stay with the old 8-bit ASCII character set, in simple text files easy to read by both humans and simple software.
(2) Principle 2: Generally use the hierarchy of the hierarchical file system, e.g., Microsoft's Windows HPFS (high performance file system), as the basis (framework) for a taxonomic hierarchy of the topics, subjects, etc. of the contents of the files.
(3) To the greatest extent possible, I do all reading and writing of the files using just my favorite programmable text editor KEdit, a PC version of the editor XEDIT written by an IBM guy in Paris for the IBM VM/CMS system. The macro language is Rexx from Mike Cowlishaw of IBM in England. Rexx is an especially well designed language for string manipulation as needed in scripting and editing.
(4) For more, at times make crucial use of Open Object Rexx, especially its function to generate a list of directory names, with standard details on each directory, of all the names in one directory subtree.
(5) For each directory x, have in that directory a file x.DOC that has whatever notes are appropriate for good descriptions of the files, e.g., abstracts and keywords of the content, the source of the file, e.g., a URL, etc. Here the file type of an x.DOC file is just simple ASCII text and is not a Microsoft Word document.
There are some obvious, minor exceptions, that is, directories with no file named x.DOC from me. E.g., directories created just for the files used by a Web page when downloading a Web page are exceptions and have no x.DOC file.
(6) Use Open Object Rexx for scripts formore on the contents of the file system.E.g., I have a script that for a currentdirectory x displays a list of the(immediate) subdirectories of x and thesize of all the files in the subtreerooted at that subdirectory. So, for allthe space used by the subtree rooted at x,I get a list of where that space is usedby the immediate subdirectories of x.
(7) For file copying, I use Rexx scripts that call the Windows commands COPY or XCOPY, called with carefully selected options. E.g., I do full and incremental backups of my work using scripts based on XCOPY.
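For illustration, a sketch of how such an XCOPY-based backup could be driven from a script. This is Python rather than Rexx and the wrapper names are invented, but the XCOPY options shown (/E, /C, /Y, /I, /D) are standard Windows ones:

```python
import subprocess

def xcopy_backup_cmd(src, dst, incremental=True):
    """Build an XCOPY command line for a backup.
    /E  copy subdirectories, including empty ones
    /C  continue even if errors occur
    /Y  overwrite existing files without prompting
    /I  treat the destination as a directory
    /D  (incremental) copy only files newer than the destination copy
    """
    cmd = ["xcopy", src, dst, "/E", "/C", "/Y", "/I"]
    if incremental:
        cmd.append("/D")
    return cmd

def run_backup(src, dst, incremental=True):
    """Execute the backup (Windows only); returns the XCOPY exit code."""
    return subprocess.call(xcopy_backup_cmd(src, dst, incremental))
```

With /D, rerunning the same command only copies what changed since the last run, which is the whole trick behind the incremental backup.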
For backup or restore of the files on a bootable partition, I use the Windows program NTBACKUP, which can back up a bootable partition while it is running.
(8) When looking at or manipulating the files in a directory, I make heavy use of the DIR (directory) command of KEdit. The resulting list is terrific, and common operations on such files can be done with commands to KEdit (e.g., sort the list), selecting lines from the list (say, all files x.HTM), deleting lines from the list, copying lines from the list to another file, or using short macros written in Kexx (the KEdit version of Rexx), often from just a single keystroke to KEdit, to do other common tasks, e.g., run Adobe's Acrobat on an x.PDF file or have Firefox display an x.HTM file.
More generally, with one keystroke, have Firefox display a Web page where the URL is the current line in KEdit, etc.
I wrote my own e-mail client software. Then given the date header line of an e-mail message, one keystroke displays the e-mail message (or warns that the date line is not unique, but it always has been).
So, I get to use e-mail message date lines as 'links' in other files. So, if some file T1 has some notes about some subject and some e-mail message is relevant, then, sure, in file T1 just have the date line as a link.
This little system worked great until I converted to Microsoft's Outlook 2003. If I could find the format of the files Outlook writes, I'd implement the feature again.
(9) For writing software, I type only into KEdit.
Once I tried Microsoft's Visual Studio, and for a first project, before I'd typed anything particular to the project, I got 50 MB or so of files nearly none of which I understood. That meant that whenever anything went wrong, for a solution I'd have to do mud wrestling with at least 50 MB of files I didn't understand; moreover, understanding the files would likely have been a long side project. No thanks.
E.g., my startup needs some software, and I designed and wrote that software. Since I wrote the software in Microsoft's Visual Basic .NET, the software is in just simple ASCII files with file type VB.
There are 24,000 programming language statements.
So, there are about 76,000 lines of comments for documentation, which is IMPORTANT.
So, all the typing was done into KEdit, and there are several KEdit macros that help with the typing.
In particular, for documentation of the software I'm using -- VB.NET, ASP.NET, ADO.NET, SQL Server, IIS, etc. -- I have 5000+ Web pages of documentation, from Microsoft's MSDN, my own notes, and elsewhere.
So, at some point in the code where some documentation is needed for clarity for the code, I have links to my documentation collection, each link with the title of the documentation. Then one keystroke in KEdit will follow the link, typically having Firefox open the file of the MSDN HTML documentation.
The documentation is in four directories, one for each of VB, ASP, SQL, and Windows. Each directory has a file that describes each of the files of documentation in that directory. Each description has the title of the documentation, the URL of the source (if from the Internet, which is the usual case), the tree name of the documentation in my file system, an abstract of the documentation, relevant keywords, and sometimes some notes of mine. KEdit keyword searches on this file (one for each of the four directories) are quite effective.
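Such description files lend themselves to simple keyword search. A sketch in Python, with a record layout invented for illustration (the actual files' layout is not specified beyond the fields listed above):

```python
def load_descriptions(text):
    """Parse a description file into records.  The layout assumed here
    is blank-line-separated blocks of 'Field: value' lines
    (Title, URL, Tree, Abstract, Keywords) -- an illustration only."""
    records = []
    for block in text.strip().split("\n\n"):
        rec = {}
        for line in block.splitlines():
            if ":" in line:
                field, _, value = line.partition(":")
                rec[field.strip().lower()] = value.strip()
        if rec:
            records.append(rec)
    return records

def search(records, keyword):
    """Return the records whose keywords or abstract mention keyword."""
    kw = keyword.lower()
    return [r for r in records
            if kw in r.get("keywords", "").lower()
            or kw in r.get("abstract", "").lower()]
```

This mirrors what a KEdit keyword search over the same file does, just as a standalone function.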
(10) Environment Variables
I use Windows environment variables and the Windows system clipboard to make a lot of common tasks easier.
E.g., the collection of my files of documentation of Visual Basic is in my directory
Okay, on the command line of a console window, I can type
and then have that directory current.
Here 'G' abbreviates 'go to'!
So, to command G, argument 'VB' acts like a short nickname for directory
Actually that means that I have -- established when the system boots -- a Windows environment variable MARK.VB with value
I have about 40 such MARK.x environment variables.
So, sure, I could use the usual Windows tree walking commands to navigate to directory
is a lot faster. So, such nicknames are justified for frequently used directories fairly deep in the directory tree.
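The nickname scheme is small enough to sketch. Here it is in Python rather than as the command scripts described above, with the MARK.x lookup convention taken from the text (the function name `goto` is invented):

```python
import os

def goto(nickname):
    """Look up the MARK.<nickname> environment variable and change
    the current directory to its value -- a sketch of a 'G'-style
    go-to command built on directory nicknames."""
    var = "MARK." + nickname.upper()
    target = os.environ.get(var)
    if target is None:
        raise KeyError("no such nickname: " + nickname)
    os.chdir(target)
    return target
```

Setting the MARK.x variables once at boot means the nickname lookup is just a single environment read.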
are used by some other programs, especially my scripts that call COPY and XCOPY.
So, to copy from directory A to directory B, I navigate to directory A and type
which sets environment variable
to the directory tree name of directory A. Similarly for directory B.
Then my script
takes as argument the file name and does the copy.
A second script takes two arguments, the file name of the source and the file name to be used for the copy.
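A sketch of the copy-script idea in Python. The environment-variable names FROMDIR and TODIR are invented here, since the text does not give the actual names:

```python
import os
import shutil

# FROMDIR/TODIR are illustrative names; the original scripts'
# variable names are not given in the text.

def mark_from(path):
    """Remember the source directory in an environment variable."""
    os.environ["FROMDIR"] = os.path.abspath(path)

def mark_to(path):
    """Remember the target directory in an environment variable."""
    os.environ["TODIR"] = os.path.abspath(path)

def copyf(filename):
    """Copy filename from the remembered source directory to the
    remembered target directory, preserving timestamps."""
    src = os.path.join(os.environ["FROMDIR"], filename)
    dst = os.path.join(os.environ["TODIR"], filename)
    shutil.copy2(src, dst)
    return dst
```

The point of the environment variables is that, once set, any later command can copy between the two directories by file name alone.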
I have about 200 KEdit macros and about 200 Rexx scripts. They are crucial tools for me.
(11) About 12 years ago I started a file FACTS.DAT. The file now has 74,317 lines, is

2,268,607

bytes long, and has 4,017 facts.
Each such fact is just a short note, sure, on average

2,268,607 / 4,017 = 565

bytes long and

74,317 / 4,017 = 18.5

lines long.
And that is about

12 * 365 / 4,017 = 1.09

days per new fact, that is, an average of right at one new fact a day.
Each new fact has its time and date, a list of keywords, and is entered at the end of the file.
The file is easily used via KEdit and a few simple macros.
I have a little Rexx script to run KEdit on the file FACTS.DAT. If KEdit is already running on that file, then the script notices that and just brings to the top of the Z-order that existing instance of KEdit editing the file -- this way I get single-threaded access to the file.
So, such facts include phone numbers, mailing addresses, e-mail addresses, user IDs, passwords, details for multi-factor authentication, TODO list items, and other little facts about whatever I want help remembering.
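The FACTS.DAT scheme can be sketched briefly. The record layout below (timestamp line, keyword line, note, blank separator) is a guess at the spirit of the format, not the real one:

```python
import time

def add_fact(path, keywords, text):
    """Append a fact: a timestamp line, a keyword line, the note
    itself, and a blank separator line.  Illustrative layout only."""
    with open(path, "a") as f:
        f.write(time.strftime("%Y-%m-%d %H:%M:%S") + "\n")
        f.write("keywords: " + ", ".join(keywords) + "\n")
        f.write(text.rstrip() + "\n\n")

def find_facts(path, keyword):
    """Return the fact blocks whose keyword line mentions keyword."""
    with open(path) as f:
        blocks = f.read().strip().split("\n\n")
    kw = keyword.lower()
    return [b for b in blocks
            if any(line.lower().startswith("keywords:") and kw in line.lower()
                   for line in b.splitlines())]
```

Appending at the end keeps the file in chronological order for free, which is why the date line can double as a unique key.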
No, I don't need special software to help me manage user IDs and passwords.
Well, there is a problem with the taxonomic hierarchy: for some files, it might be ambiguous which directory they should be in. Yes, some hierarchical file systems permit a file to be listed in more than one directory, but AFAIK the Microsoft HPFS file system does not.
So, when it appears that there is some ambiguity in what directory a new file should go, I use the x.DOC files for those directories to enter relevant notes.
Also my file FACTS.DAT may have such notes.
Well, (1)-(11) is how I do it!
Grasping the fundamentals means that when it comes to policy decisions (e.g. in the management of certificates) you can see what the consequences of a particular decision are, rather than just hoping that whoever proposed that policy knew what they were doing.
For example, I think a lot of people today use Certificate Signing Request (CSR) files without understanding them at all. But once you have a grounding in the underlying elements you can see at once what the CSR does, and why it's necessary without needing to have that spelled out separately.
Or another example: understanding what was and was not risky as a result of the known weakness of SHA-1. I saw a lot of scare-mongering by security people who saw the SHA-1 weakness as somehow meaning impossible things were now likely, but it only affected an important but quite narrow type of usage; people who understood that could make better, more careful decisions without putting anybody at risk.
1) https://www.ssllabs.com/ssltest/ - try to get an A+. It's not important in most cases in practice, but you'll learn a lot getting there. Their rating guide is also handy: https://github.com/ssllabs/research/wiki/SSL-Server-Rating-G...
2) MITM yourself. I've done this using Charles; you can do it with any HTTP proxy that lets you rewrite requests on the fly - I hear Fiddler is popular. MITM yourself and try changing the page for an HTTP site. Then try doing it on a website that is part HTTP, part HTTPS (e.g. HTTPS for the login page) and "steal your password". Try again on a website that redirects from HTTP to HTTPS using a 301 but does not have HSTS. Finally try on a site with HSTS (nb: you won't manage this one). Congratulations, you now truly understand why HSTS is important and what it does better than most people!
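After the MITM exercises, it is easy to confirm what HSTS policy a site actually sends. A small Python sketch using only the standard library (the parsing helper and function names are illustrative):

```python
import urllib.request

def get_hsts(url):
    """Fetch url and return its Strict-Transport-Security header,
    or None if the site does not send one."""
    with urllib.request.urlopen(url) as resp:
        return resp.headers.get("Strict-Transport-Security")

def parse_hsts(value):
    """Parse an HSTS header value into a dict, e.g.
    'max-age=31536000; includeSubDomains' becomes
    {'max-age': 31536000, 'includesubdomains': True}."""
    out = {}
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            key, _, val = part.partition("=")
            out[key.strip().lower()] = int(val) if val.isdigit() else val
        else:
            out[part.lower()] = True
    return out
```

A long max-age plus includeSubDomains is what defeats the 301-stripping attack in the exercise above.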
3) Set up HTTPS on a website. You've probably already done this. In which case maybe do it with LetsEncrypt for an extra challenge?
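Once HTTPS is set up, the Python standard library is enough to pull and sanity-check the certificate your server actually serves. A hedged sketch (the function names are invented; the `ssl` calls are standard):

```python
import socket
import ssl
import time

def get_server_cert(host, port=443):
    """Connect to host and return its certificate as a parsed dict."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

def days_until_expiry(cert, now=None):
    """Whole days remaining before the certificate's notAfter date."""
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    if now is None:
        now = time.time()
    return int((expires - now) // 86400)
```

Handy with Let's Encrypt in particular, since its certificates are short-lived and a renewal that silently failed shows up here immediately.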
It doesn't hold your hand at all, but it gives you a nice "task" to accomplish. Reading up on all the terminology and exactly how and why it works was really fun.
There was also a nice web page presenting all kinds of PKI concepts that I came across a few years ago but haven't been able to find since then. :-(
I track open positions from time to time on the following website: https://blockchain.works-hub.com/.
You would have a canonical lossless image stored in S3. When a user makes a request to your CDN, it calls an origin server (assuming a cache miss) that transforms the canonical images into an optimized form.
Any basic WSGI, FCGI, CGI application behind NGINX will probably be sufficient.
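A minimal sketch of such an origin as a WSGI app. The S3 fetch and the image transform are stubbed out as injected callables, since the actual storage client and codec are deployment-specific:

```python
def make_origin(fetch_canonical, transform):
    """Build a minimal WSGI origin app.  fetch_canonical(path) returns
    the canonical image bytes (e.g. from S3 -- stubbed here), and
    transform(data, fmt) returns the optimized variant.  Both are
    injected so the sketch stays storage- and codec-agnostic."""
    def app(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        fmt = environ.get("QUERY_STRING", "") or "webp"
        try:
            data = transform(fetch_canonical(path), fmt)
        except KeyError:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        start_response("200 OK", [
            ("Content-Type", "image/" + fmt),
            # long max-age: the CDN, not this origin, absorbs repeat traffic
            ("Cache-Control", "public, max-age=31536000"),
        ])
        return [data]
    return app
```

The long Cache-Control lifetime is the important part: the origin only does the expensive transform on a CDN cache miss, exactly as described above.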