hacker news with inline top comments    28 Feb 2016   Ask
HN Office Hours with Jared Friedman and Trevor Blackwell
85 points by snowmaker  1 day ago   143 comments top 35
yurivish 1 day ago 6 replies      
Hi Trevor and Jared,

http://weavesilk.com has been a side project of mine for many years. I put out a brand-new version of the iOS app this month and am thinking about developing it further, but I find it very hard to decide what direction to go in.

The website is popular, and people love Silk, but for different reasons: some find it relaxing and meditative, others like that it closes the gap between their artistic ability and taste.

It's been used as an inspirational sketching tool for artists (http://bit.ly/1Qlm1kA), to make album art, and has been on exhibit at the Children's Creativity Museum. Some teachers use it to teach kids about symmetry.

I've made something compelling but don't know what to do next, or how to figure it out.

alantrrs 1 day ago 2 replies      
Hi! I'm building a platform for scientists to run and share their experiments (anything computable), including their whole research environment. I want to make scientific research easily discoverable and replicable.

First, what are your thoughts on this market?

Also, some advice: I'm currently building the prototype based on my own experience, as I'm my own user. That's the only thing I've been focusing on; I haven't looked for external feedback yet and haven't spent much time looking for people to join me. I figured both of those things would be much easier once I have the prototype ready. Am I on the right track, or should I be doing everything at once?

srikieonline 1 day ago 2 replies      
Hi Jared & Trevor, Thank you for giving us this opportunity to discuss our ideas with you.

I am the founder of http://www.pnyxter.com - a video-only debate and discussion app.

My question is about product-market fit.

A user can create a topic and upload a video selfie talking about the topic, or respond to other topics via video selfie. The app has no provision for text comments at all - only video responses.

I've invited several friends and family to a private beta; it's been 10 days now and only one of my friends has dared to create a video.

I also shared the link on FB and LinkedIn and got several views, but no one created a topic. I also ran a FB ad for 2 days - no luck there either. Is this a clear indication of product-market misfit? Or is it too early to conclude?

Or should I shift my focus to professional and amateur debaters at debate clubs in cities, universities, schools, etc.?

I do understand the privacy issues of posting a video selfie - but I've provided good privacy controls (or at least I think they're good).

Video-based opinions are already being posted by users on Facebook and YouTube - but those sites do not have this consolidated grouping of all video discussions on a topic in one place. YouTube tried a video response feature and closed it in 2013 due to low engagement - but of course YouTube is a very generic video site - not all videos need video responses/discussions/debates.


edibleEnergy 1 day ago 4 replies      
Hey Jared and Trevor,

BugReplay (http://www.bugreplay.com) is a bug reporting tool in the form of a browser extension that captures a screencast of the user's actions, synced with network traffic, JavaScript errors, and other browser data.

What's the best way to grow your early user base? Ideally we'd like people who are going to use BugReplay at their jobs and open source endeavors, but we want to keep it relatively small until we feel like it's ready for the widest possible audience.

tedmiston 1 day ago 1 reply      
My app is a way for sneakerheads to follow, discover, and ultimately purchase new sneakers online from first announcement until they're available for purchase with minimal interaction. Today everyone does this by following blogs that make hundreds of posts per week with tons of repetition and very low signal to noise.

I've built a prototype for myself, but would now like to connect with the right industry people to help guide the product and be trusted early users. In another reply you mentioned similarly finding users to serve as product advisors (https://news.ycombinator.com/item?id=11183572). I'm having a hard time discovering who the "right" people are and reaching them in a compelling way. If I were further along [read: post private beta] I would reach out to someone like Phin Barnes @ First Round, but I feel at this stage it's too early for him to really take an interest.

zeeshanm 1 day ago 2 replies      
I am the founder and CEO of Exivest [http://exivest.com/]. We help startup employees value their equity, match them with direct buyers, or arrange synthetic liquidity.

We've talked to many startup employees and they all have given positive feedback. One concern that comes up is that startups are not OK with changing the cap table for tiny transactions. In that case, we have plans to offer synthetic liquidity solutions via derivative transactions.

We'd like to hear your thoughts on the problem we are solving and our approach so far.

lpaone 1 day ago 1 reply      
Hi Jared and Trevor,

http://www.stroomnews.com is a bootstrapped sharing platform focused on breaking news and events, allowing live streaming and sharing of pre-captured video and images through our mobile app and website. We are also working on an enterprise solution for the news industry that ties into our platform. Our B2C app was released this past fall but has shown little traction, and we have had discussions with a major news media company about our enterprise product. We believe that signing up users for our enterprise solution will also help grow our B2C platform.

We also have an idea in the area of video compression (both founders have experience in this industry) that we believe could be huge in the streaming video industry. Our initial tests have shown very promising results, but we have not had much time to work on this due to focusing on our platform and enterprise product. The success of this tech would not only provide a large advantage for our business, but would also open us up to many other industries and opportunities.

As our ability to bootstrap dwindles (our savings are limited), do we continue along our path and try to get revenue as soon as we can by working on our enterprise product, or do we spend time trying to raise money so we can focus on our tech, which may not produce anything product-worthy for a much longer time (or ever, as it is still in the research stage)?

Thanks for your time!

boxerab 1 day ago 1 reply      
Hi Jared and Trevor,

I have developed a high-speed image encoder that runs on off-the-shelf graphics cards.


I am working on my marketing strategy and need to decide whether to focus on selling to end users or licensing the software to other businesses. The second option requires more money and a sales team, but seems to have more potential for growing the business.

Any advice would be greatly appreciated!


plehoux 1 day ago 1 reply      
Hi Jared and Trevor,

I'm a co-founder at https://www.missiveapp.com, a collaborative email client (Slack meets Gmail). We launched our open beta last January and are actively recruiting beta users.

We're a fully bootstrapped team of 4 working from Quebec City, Canada. We were able to bootstrap Missive with the money we rake in from another project we launched three years ago called ConferenceBadge.com [1].

My question is, do you think it's a mistake to run two businesses in parallel?

Right now 85% of our time is invested in Missive even though it's bringing $0 in revenue.

Our philosophy is that if we were to look for funding, we would have to invest at least 15% of our time in fundraising and investor relationships.

We also believe that looking for investment before product-market fit is a recipe for disaster (and let's not forget that we are not from/in the Valley).

[1] https://medium.com/@plehoux/successfully-bootstrapped-a-prod...

[2] What are chat conversations doing in an email client? Here are few examples of cool possibilities they enable: https://www.youtube.com/watch?v=VcRQhGfT620

dcole2929 1 day ago 1 reply      
Hey Jared & Trevor

I'm working on a few different projects, all of which I think could become viable companies, but I'm having a hard time deciding which to focus on. I've already seen interest from relevant parties in each of the separate projects.

I think the hardest one, but also the one with the most growth potential, would be a project I'm working on to provide management tools (similar to the stuff a CEO might use) to high-level government officials. However, with the way that government contracts are handled, I wonder if this is even a reasonable industry to target.

Secondly, I have two different projects that focus on college students, where I would be selling solutions to the colleges themselves. The first project is an art application that allows users to upload art in any medium and be seen by other students. This would allow them to easily build fan bases by taking advantage of pre-existing school connections.

The other idea is similar but focuses on user-generated events. It tackles the question of how one finds interesting things to do in a new area without knowing anyone, and has certain measures in place to help alleviate the awkwardness of trying to join pre-existing groups.


sharemywin 1 day ago 1 reply      
Hi Jared & Trevor,

Do you think marketplaces for services are too mature a market? I'm working on a site called bid2mow.com to help new lawn care companies find work. It seems like everyone is focused on a "come to my app/website and I'll get you a price" model versus an eBay model. I know TaskRabbit had that model and moved away from it. eBay may not be Amazon, but it's no business to sneeze at either.

cddotdotslash 1 day ago 3 replies      
Hi, thank you both for doing this. I've been working on a project on and off for about two years now, which I finally launched as a beta last year. It's called CloudSploit (https://cloudsploit.com) and is a service designed to allow users to continuously scan their AWS accounts for vulnerabilities (account-level risks that could lead to a compromise). AWS has some security products, and there are certainly competitors in the space, but we've heard from countless customers that our price point and features are ideal for them. My goal is now to move this out of beta and actually advertise it.

My question for you is: what challenges are there around marketing for security-focused products? Of course, trust is a key factor, but are there other things I should consider? I'm thinking of Twitter ads, but I'm sure there are better options between that and cold emailing. Thanks again!

BinaryResult 1 day ago 1 reply      
Hey Guys,

I am one of the Co-founders of Disco Melee (https://beta.discomelee.com/hub). We are a live streaming social network designed around the needs of gamers. You could think of us like if Twitch and Facebook had a baby who liked to party and was eyeing up Reddit for a future fling.

The gamers who find us tend to rave about our overall vision, low latency, and other base features like our IM system, streamer storefronts, and free donate/sub buttons for all. The problem, however, is that (except for our hardcore believers) they don't seem to stick around very long due to the pull of network effects from the established players. What strategies could you recommend for overcoming network effects to the point where we can start generating our own? Thanks!

nopinsight 1 day ago 1 reply      
Hi Trevor and Jared,

My startup is developing technology for natural language understanding (in contrast to NLP). We believe we can approach human-level understanding for standard texts (email, blogs, online articles) in 2.5-4 years.

We are currently self-funded and can comfortably do so for about a year--by the end of which, we believe we can develop a fairly advanced demo that surpasses existing techs in some, but not all, areas.

What kind of demos do you think would impress best-in-class recruits/investors to join our team? The current options we have in mind are:

1) Solving a subset of the Winograd schemas (a commonsense reasoning challenge) with a general approach (i.e., easily extensible with additional investment in knowledge acquisition/data sources). No system known to the public can currently solve all of them (or even a subset in a general way).


2) A conversational agent capable of conversing with humans at the level of a 4- or 5-year-old (without resorting to typical chatbot tricks).

3) Surpassing the state-of-the-art systems in a couple of standard AI conference tasks or on datasets released by leading companies (Google, Facebook, etc).

4) Other tasks we have not thought of...

Because of resource and time limitations, we likely need to focus our initial efforts on one, or at most two, of the options above. (A mature system should be able to do all of those and more, but this plan covers only about one year from now.)

A couple of extra questions if you have the time:

- Given the startup's long-term timescale before monetization and its technical nature, what sorts of investors should we focus on talking with?

- Is there a chance of IP leaking and causing problems with patenting our tech later on? What should we do to prevent that?

Sincerely appreciate your time to answer these questions.

-- Ken Noppadon

aarzee 1 day ago 2 replies      
Hi Trevor and Jared,

I'm in the idea stage of creating a website, and I have no previous experience with startups or other businesses; I'm only a senior in high school. The idea is that there are people who would like to own the same game on multiple platforms, so the website would offer a discount on a game that you already own, for another platform. So, for example, if you have the Xbox One version of Rocket League, the website would offer a discount on the Steam version.

My question is: how should I gauge interest in such an idea? I don't have any real budget to speak of, and I don't know where I should go looking for people to ask.

spicavigo 1 day ago 2 replies      
Hi Jared and Trevor,

I created https://codebunk.com. It's an online interviewing tool for developers. It's the best tool out there: it provides code execution in 23 languages, a collaborative editor, REPL shells, A/V and text chat, teams, question banks, and a lot more.

It's cash-flow positive and has some of the coolest clients.

However, the rate at which it acquires new customers is pretty low (~3/month). I have exhausted (or nearly exhausted) my avenues for generating buzz (PH, HN, Reddit, some tech publications). As a developer without any help, how do I promote CodeBunk further? What's the best way to reach my audience (hiring managers, CTOs)?

braderhart 1 day ago 1 reply      
Hello, and thank you for taking the time to help other entrepreneurs. I am working on creating a new premier cross-platform Linux distribution that uses containers as the core init process. The goal is automatic cross-compilation between multiple targets, using the kernel's built-in sandboxing for applications instead of what XDG App and others are trying to do. I'd also like to target a new window manager built on Wayland, with eventual Vulkan support. I would love some feedback on how to handle the contention that already exists within this space and how to get the funding I need to make something like this successful.
parisi 1 day ago 0 replies      
Hey Trevor and Jared,

I am in the midst of creating a new platform as a service product while working at a large tech company. I am not quite willing to reveal the platform to the world yet, but it is a new take on backend-as-a-service platforms that I think will be very intriguing to developers.

My question is not specific to my product; instead it pertains to the situation of trying to develop a startup while working full time at a large company. I have done zero work on the product on my employer's time and am not concerned with that aspect. I am more interested in your thoughts on when would be the ideal time to leave my current post to work full time on the startup. My biggest concern is leaving the financial stability of my current job when I have no capital lined up to support a startup. I have already launched a closed beta of the platform, am getting feedback from a small set of users, and plan to open the beta up to the world in the coming months. In your experience, have you ever seen VCs or angels be willing to make a deal with a startup founded by a full-time employee of a different company? If I were able to secure funding, I would absolutely be willing to move on and work full time on the startup, but making that leap without funding would be a difficult decision. Any words of wisdom you can offer here would be greatly appreciated!


feedbackhotline 1 day ago 1 reply      
Jared and Trevor,

Feedback Hotline (feedbackhotline.com) is the easiest way for businesses to collect feedback. We provide the hotline for free. We intend to make money through data.

We think we need to prove three things to succeed:

1. Businesses/organizations will join
2. People will send a lot of feedback
3. We can monetize this feedback

We are currently optimizing for 1, focusing on small businesses. We believe that for each step i, everything before i needs to be true before i can be true. We want to talk about this framework for optimization.


publicator 1 day ago 1 reply      
Hello Jared and Trevor,

We're developing a publishing network. Individuals and groups can start magazines. Users can read on a timeline and interact as in a social network. What's new? Now anyone can get publishing infrastructure as good as a well-funded online magazine's. That means lots of unique publications in verticals that well-funded magazines & newspapers cannot cover.

What's the best strategy to attract people to the service? How do we make money?

chejazi 1 day ago 2 replies      
Hi YC partners,

We're a link shortener called Credhot and our business model is to syndicate (potentially sponsored) content on an interstitial page. We also rev-share with our users based on the number of visits. The biggest hurdle to this strategy has been building an interstitial people want to share with their friends. Here is an example of our latest attempt at that, leading to "coinmarketcap.com": https://crd.ht/H7TLMpn.

Our volume is low enough that we haven't tested syndicating sponsored content on that page. Right now we're mostly focused on "building a product that users love" but at some point we will need to strike a deal with a publisher. When should those conversations start happening? We're currently bootstrapping but will want to raise capital in the next few months as we're growing (we've doubled since the last HN office hours ~34 days ago). Should we wait until after securing funding?

abrie 1 day ago 0 replies      

I've developed a custom toolchain while writing an electronic book[0]. The toolchain automates the conversion of markdown and media into a scrollable "app-novel". Initially I'd hoped to earn income from the book itself, but the naivety of that idea is quickly becoming apparent.

As a pivot, I am developing a public interface for the toolchain, with the idea of permitting others to write books in the same style. Unfortunately, it is not ready to be demonstrated. Nonetheless, this feels like an untapped industry to me, and I wonder what your opinions or suggestions might be.

Thank you.


masudhossain 1 day ago 1 reply      
Hey, Trevor and Jared. Thanks a lot for doing this!

WHAT WE DO: www.wiredhere.com

We integrate every social activity (university-created or student-created) happening within a university into a mobile app. Students can attend events and provide feedback through the app; we then take the analytics that are generated and provide them to universities so they can assess and compare themselves to other universities.


Do you think it's more optimal to approach this as SaaS for the university, since we provide them a brand-new web platform to make event creation easier and let them reach students faster? Or should we be non-SaaS and introduce this to the students first, let the universities catch on afterwards, and then work with the university so they can use our web platform and mobile app?

Also, what is your opinion on our concept?

RyM21 1 day ago 1 reply      
Hi Trevor and Jared,

WordBrewery (http://wordbrewery.com) teaches languages by scraping real sentences from news sites around the world, then processing each sentence with an algorithm that estimates (on the basis of word frequency and other variables) how likely the sentence is to be useful to learners at different levels.
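As a rough illustration of the kind of frequency-based scoring described above (this is an invented sketch, not WordBrewery's actual algorithm; the word ranks and penalty value are made up), a sentence can be ranked by the average frequency rank of its words, so sentences built from common vocabulary score as more beginner-friendly:

```python
def sentence_score(sentence, word_rank):
    """Score a sentence for beginner-friendliness.

    word_rank maps a word to its frequency rank (1 = most common).
    A lower average rank means more common words, hence a lower
    (better) score for beginners.
    """
    words = [w.lower().strip(".,!?") for w in sentence.split()]
    words = [w for w in words if w]
    if not words:
        return float("inf")  # empty sentences are never useful
    # Words missing from the frequency table get a large penalty rank.
    return sum(word_rank.get(w, 10000) for w in words) / len(words)

# Toy frequency table (rank 1 = most common word in the corpus).
rank = {"the": 1, "cat": 120, "sat": 300, "ephemeral": 9000, "is": 5}
easy = sentence_score("The cat sat.", rank)
hard = sentence_score("The ephemeral cat is ephemeral.", rank)
assert easy < hard  # the common-vocabulary sentence ranks as easier
```

A real system would presumably fold in the "other variables" the comment mentions (sentence length, grammar complexity, learner level), but the core ranking idea is this simple.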

We are now a member of 1776 and participating in Microsoft BizSpark, so we are on the right track. But I am new to the startup world, and I am funding the website entirely by myself at this point using money from my day-job paycheck. What is the best way to pursue seed funding at this early stage while we are developing core features? Do I need to organize it as an LLC or corporation to get funding?

Thank you,
Ryan

bsims 1 day ago 1 reply      
Hi Trevor and Jared, we have been building financial predictions on data from marketplace lending platforms. We're curious to talk to you about where you see the business models of prediction going, examples of other YC companies who are selling prediction and ML as a service, pricing strategies, etc.
downandout 1 day ago 2 replies      
Hi Jared and Trevor,

I am working on a marketplace through which publishers and journalists can sell, to the highest bidder, the ability to be quoted and linked to in an article. A realtor, for example, might be willing to pay to be quoted and linked to in an article about how a city's real estate market has been accelerating.

My question is how to get started with marketing it. While the growth hacking crowd would say to simply spam a bunch of reporters to get inventory, I have found in other ventures that people absolutely hate receiving any form of unsolicited email with any kind of pitch.

sbashyal 1 day ago 2 replies      
Hi Trevor and Jared, thanks for doing this.

We have been working on http://growthzilla.com - a data-driven growth solution for salons - for a little more than a year now. We have had paying customers since launch and zero cancellations. Our customers love the product for three reasons: (1) ease of use, (2) effectiveness in driving growth, and (3) customer service.

The problem: we are growing slower than we would want. Is there anything you'd like to suggest?

zodiac 1 day ago 1 reply      
Hi Jared and Trevor,

I'm building games and tools for language learning. While talking to users, I asked them what language learning tools they wished existed when they were studying in the past. A lot of them talked about a tool that lets them talk to native speakers of their target language online.

I know I should build something users want, but there are plenty of tools that let you do this (including a YC company, Cambly) and I don't want to build another one. So what do I do with their answer to my question?

ForrestN 1 day ago 3 replies      
I founded a small nonprofit (~$250k budget this year for an underpaid staff of me + 3 art types) that has served an annual audience of 1 million unique visitors for the last few years with hi-res documentation of contemporary art. We are nearly ubiquitous within our field, and academics have hailed our project as transformative, but we suck at fundraising and are tiny.

Problem: in the midst of running everything, I do all the coding slowly by myself, and the urgent coding todo list has exploded while some of our sites age. What should I do?

Thank you!

wootcamp 1 day ago 1 reply      
Hi Jared and Trevor,

I'm a cofounder of a startup in the beauty space. We have a small but active group of users who love creating and consuming a unique type of content only available on our platform. The revenue plan is to eventually sell tangible products, so we would like to make use of all this content to drive purchases.

Is there a way we can test how well conversion could work without shipping tangible products? We don't yet have the capital to stockpile inventory.

thekonqueror 1 day ago 1 reply      
Hi Jared and Trevor,

http://nestify.io founder here. We're improving PHP CMS hosting (WordPress, Drupal) with better scaling, on-page optimizations, and better security. We have paying customers and ~100 die-hard users who will be really sad / lose revenue if we shut down.

Should we focus on building our brand while scaling or switch to whitelabel and API services and partner with hosting providers?

rsdce 1 day ago 1 reply      
Hi Jared/Trevor, my partner and I are working on developing an app for wardrobe cataloging, which to our surprise (or not) has already been implemented by lots of people in the Apple and Android app stores. However, the advanced features that we thought of aren't implemented yet. How should we decide which path to take: implement the app ourselves, or just build the advanced features and license them to the existing apps?
workerdee 1 day ago 1 reply      
Hello, I have a little Etsy shop and was wondering how to get technical people's attention. How do you find gifts for female family and friends (when they don't already have something in mind)?

As for the shop itself, the photographs need improvement; I set up a small lighted space on my kitchen counter and am working on this. Shop: http://zebbles.etsy.com

mariobyn 1 day ago 1 reply      
Hey, I want to present http://grobyk.com, a platform that aims to help teams grow by engaging team members to find/create/share useful articles. In this way they will build a knowledge base and grow as a team. We want some feedback regarding the idea and how we can attract customers.
FraserGreenlee 1 day ago 1 reply      
Hi Trevor and Jared,

I'm the founder of WebArcs (http://webarcs.com), an RSS aggregator for discovering and subscribing to websites. I'm just starting out, and I want to see this become the way people surf the web in the future.

I was wondering which demographics I ought to target to help build a strong user base.

Ask HN: Good Tutorial to Run Django+Nginx+GUnicorn in Docker
4 points by tkd  3 hours ago   discuss
Ask HN: Will JavaScript take over front-end?
8 points by whizzkid  15 hours ago   8 comments top 5
throweway 51 minutes ago 0 replies      
It's cliche, but I think (or hope) JS will become the bytecode of the web, with everyone compiling to JS from their favourite paradigm, be it OO, functional, or just JS with compile-time types.

The resistance to this is that most code is made for $, and the people with $ tend to prefer fungibility. JS devs are easier and cheaper to replace than PureScript ones. Through the lens of developers as a list of buzzwords you can grep for, this makes sense.

Frameworks get trendy, but new languages have a hard time, even though the learning curve may be no different.

I just hope somehow better languages than JS will win.

lollipop25 13 hours ago 1 reply      
> Will JavaScript take over front-end?

Has it not? A more appropriate question would be "When will it step down?"

> it takes 40% more development time to finish a project with AngularJS than without using it.

And by "without using it", do you mean writing vanilla JS? I believe you jumped to conclusions way too early. Working with any technology takes time. It's not the actual writing of code that takes long; it's the learning (if you're a total noob), the setting up (if you don't have scaffolding tools), and the debugging (if you don't have tools). The actual work is just a fraction of what you're doing.

> How was your experience with React or any other

If you mean React alone, then it's like Angular with directives... alone. You'll have to decide on your router, your data-flow library, your server-communication layer, your build tooling, your process. It's all the same thing under the guise of a different syntax.

> Should I invest more time to keep me updated with one of them?

Get to know all the libraries, but never try to use them all. Choose one and get things done.

> I still think, front-end should just present the information, not the whole application logic.

It's like saying JS should have stayed in the browser - but wait! There's Node.js (server), Espruino (hardware), Firefox OS (OS), PhoneGap (mobile), and NW.js (desktop). If everyone thought the way you do, these would never have been invented.

smadge 14 hours ago 0 replies      
> I still think, front-end should just present the information, not the whole application logic.

I think you are absolutely right! Progressive enhancement is a foreign concept to some developers. They've forgotten that hyperlinks are the engine of application state.

I just disabled Javascript in my browser. We'll see how things go.

poof131 8 hours ago 0 replies      
Loading an application and hydrating it with data seems like the ideal architecture, especially when your single application can run on the web, mobile, or desktop. A choice between thin and thick clients seems a little mainframe-versus-PC. The web started as a page-based, text-focused entity, but I don't understand why it needs to stay there or why thin clients are the ideal pattern. Certainly first load & SEO can present challenges, but both of these problems are going away with React/Angular 2 server-side rendering and Google's progress with indexing web apps.

Maybe I'm missing something, but SPAs seem like the future (perhaps compiled to JS or whatever, but still thick clients). The backend should be concerned with clean APIs to data, not with rendering the view. If people want to turn off JavaScript for a text-only web, well, some people still use flip phones; everything depends on what you are building, for whom, and how quickly you can move in either paradigm. I'd suggest studying either the Angular or React ecosystem, but don't expect SPAs to go away.

fiatjaf 14 hours ago 1 reply      
- In my experience, React is much better and faster and easier to use than Angular, but there are things better than React out there (Cycle, for example).

- I don't know.

- It depends on what your application logic is, but I think you are right for the majority of cases. However, presenting the data in a good way is probably more than 50% of any application (if it weren't, everybody would be writing CLI apps); that's why JavaScript matters.

Ask HN: Facebook interview question
4 points by shreyassaxena  14 hours ago   10 comments top 7
codeonfire 4 hours ago 0 replies      
I went to work and didn't have to deal with any weasels. No one tried to bullshit my manager or his manager at my expense. I didn't have to deal with stupid people trying to make themselves look smarter than they are, trying to take credit for other people's work, or trying to fake some stupid shit to get credit for something that execs have tuned into and that will be immediately thrown away. You know what? All those goddamn people got fired. For once, executives were smart, kind, and logical. I just made something immensely valuable that makes me a lot of money so I can take care of my family, and all you shitty people I dealt with over the last couple decades can fuck right off for a day.
Rainymood 4 hours ago 0 replies      
Probably something along the lines of: finished a huge project we'd spent the last X months on ahead of schedule, took the whole team out drinking, and gave them a day off for delivering such great work.
DrNuke 3 hours ago 0 replies      
I was paid extraordinarily well - no less than what my employer makes off of me.
ag_47 10 hours ago 0 replies      
I solved a problem that was right up my alley, using tools I'm very familiar with - a challenge that wasn't too easy and wasn't too hard or complex. It was just right. Everything happened in a "state of flow"; it felt completely effortless. I feel euphoric on my way home... the solution is almost perfect, everything fell right into place.
afarrell 11 hours ago 0 replies      
Someone asked me for help with figuring something out, and we talked through the problem, and they came away with a much better understanding than they had before. Possibly I did as well; possibly I was actually just teaching.
yunyeng 9 hours ago 0 replies      
I solved a problem with information I had recently learned, and it fit perfectly. I understood how things worked, and started digging deeper into the subject.
yarou 13 hours ago 1 reply      
I solved a problem. Probably not in the most elegant way, not using the latest and greatest buzzworthy tool or library, but I solved it.
Ask HN: How does your team write documentation? What tools do you use?
68 points by brwr  4 days ago   90 comments top 44
skewart 4 days ago 1 reply      
I really like these "how do other people do X?" questions on HN. Thanks for asking it!

I work at a small startup with a roughly 10-person eng team.

When we write docs we focus mainly on architecture and processes. The architecture docs often emerge from a "tech spec" that was written for the development of a feature that required a new service, or substantial changes to an existing one. We keep everything in GitHub, in a README or other Markdown files.

We also write API docs for HTTP endpoints. These are written with client developers and their concerns in mind. For Rails apps we use rspec_api_documentation, which is nice, but it can be annoying to have testing and documentation so tightly coupled. We've talked about changing how we do this, but we always have more pressing things to do.

We never write docs for classes or modules within an app/service.

azdle 4 days ago 3 replies      
All of our docs are written in Markdown in a git repo [1]. That then gets built with a custom static site generator that I wrote [2]. Finally, the output gets pushed back to GitHub for hosting on gh-pages [3].

I'm actually pretty proud of the search I put together for this setup, too. It's all done in the browser: the indexes are built at compile time and then downloaded in full for a search, which sounds silly, but it works surprisingly well [4].

[1] https://github.com/exosite/docs/

[2] https://github.com/exosite/docs/blob/master/gulpfile.js

[3] http://docs.exosite.com

[4] http://docs.exosite.com/search.html?q=subscribe
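The compile-time index idea above can be sketched in a few lines (a hypothetical stdlib-only Python sketch; the real implementation is the gulp build linked in [2]): build an inverted index over the Markdown sources at build time, serialize it to JSON, and have the browser download it whole and answer queries locally.

```python
import json
import re
from collections import defaultdict
from pathlib import Path

def build_index(doc_dir):
    """Build a word -> [doc names] inverted index over Markdown docs.

    Runs at build time; the JSON output is what the browser would
    download in full and query client-side.
    """
    index = defaultdict(set)
    for md in Path(doc_dir).rglob("*.md"):
        text = md.read_text(encoding="utf-8").lower()
        for word in set(re.findall(r"[a-z0-9_]+", text)):
            index[word].add(md.name)
    # Sorted lists instead of sets, so the index is JSON-serializable.
    return {word: sorted(names) for word, names in index.items()}

def search(index, query):
    """Client-side lookup: return docs containing every query word."""
    hits = [set(index.get(w, [])) for w in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []
```

The function names are illustrative, not from the linked repo; the point is just that the expensive part (indexing) happens once at compile time, so the browser only does cheap set intersections.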

tvvocold 4 days ago 0 replies      
We use flatdoc and Swagger UI for building docs, like: https://open.coding.net

flatdoc is a small JavaScript file that fetches Markdown files and renders them as full pages: https://github.com/rstacruz/flatdoc

Swagger UI is a dependency-free collection of HTML, JavaScript, and CSS assets that dynamically generate beautiful documentation from a Swagger-compliant API. http://swagger.io

douche 4 days ago 2 replies      
Fucking Word docs. Which are checked into source control, except people (who are nominally developers, or project managers who were supposed to have been developers once) insist on versioning in ye olde rename-and-add-a-number style. With PDFs that are manually generated by exporting said Word documents from Word, and then again checked in, and again checked in in multiple renamed versions. Except sometimes only the PDF is checked in, without the source document...

So we have a doc folder in the repo that is like staring into the maw of Cthulhu, and it takes up 90% of our build time on the CI server, sucking down that mass of garbage for the checkout.

Saner systems have been proposed, but rejected because the powers that be are too averse to change...

chris_engel 4 days ago 0 replies      
Because I was not happy with the existing stuff, I've built an opensource project for creating technical online documentations some time ago, named "docEngine". My goals were:

- Easy editing (namely Markdown files in folders)
- Runs on "cheap" hosting/everywhere (built with PHP)
- Supports multiple languages (so you can create docs in English, German, etc.)
- Can have editable try-on-your-own demos embedded into the documentation
- SEO friendly (clean URLs and navigation structure)
- Themeable (themes are separated and run with the Twig templating engine)
- Works on mobiles out of the box
- Supports plugins/modules for custom content/behaviour
- Formats reference pages for objects/classes/APIs in a nice way
- Supports easy embedding of Disqus for user feedback
- Other stuff I forgot right now

The system powers the knowledge base of my recent app "InSite" for web developers: https://www.insite-feedback.com/en/help

You can see it also in action working - with a different theme - for my javascript UI library "modoJS": http://docs.modojs.com

That page is a bit more complex. It does _not_ use multiple languages there but it makes great use of the reference pages and has many many editable live-demos. It also has some custom modules like a live build script for the javascript library. At one point it even had a complete user-module with payments but I disabled that when modoJS went opensource.

Another instance of docEngine runs for my pet HTML5 game engine: http://wearekiss.com/gamekit
This one uses the default theme, has most pages in two languages, and again incorporates a couple of live demos.

I host a little documentation about the engine itself here, but it's not complete right now: http://wearekiss.com/docEngine
You can also find the GitHub link to the project in the footer of every hosted documentation.

Have fun with it; I give it away for free. Critique and comments welcome! Everything I have linked was built by myself.

imrehg 4 days ago 1 reply      
Word docs converted into PDF for manuals. Some others are hand-crafted Photoshop tables/text/graphics turned into PDFs. Sad, sad stuff, IMHO.

I'm trying to get people onto Sphinx [0], and I use it for some non-sanctioned documentation with good success, but it's unlikely to become official.

I really think version control is important: what changed, who changed it, provisional changes through branches, and removing the bottleneck of "I updated the docs, please everyone check before release and send me your comments". It should be patches, and only patches.

[0]: http://sphinx-doc.org/

ericclemmons 4 days ago 1 reply      
Trying something new on this month's project: "developer first experience".

Besides the README.md to get started, the app defaults to a private portal with a component playground (for React), internal docs (for answering "how do I"), and tools for completely removing the need for doc pages at all.

I believe that documentation has to be part of the workflow, so component documentation should be visible while working on the component, tools for workflow should have introductions and helpful hints rather than being just forms and buttons, etc.

So far, this is proving fruitful.

(Side note: wikis are where docs go to die.)

intrasight 4 days ago 2 replies      
The first software system I worked on was the operator consoles for a nuclear power plant. A two-year-long dev project. We used FrameMaker (1990, before Adobe purchased them). It was an awesome tool for technical documentation. Our documentation, when printed and bound, was three feet wide on a shelf. I think I contributed two inches. It's been all downhill since, both in terms of the tools and the quality of the documentation. Nowadays it's the typical setup: auto-generated from code, plus Markdown for narrative.
angersock 4 days ago 1 reply      
Long ago I learned to love wikis. Mediawiki, Dokuwiki (easy to set up), or Confluence. Hardest part is to keep people from just throwing garbage everywhere--if that happens, people stop referring to the docs, and the system collapses.

The important thing about docs is to keep in mind the audience. This is important because it lets you estimate their mental model and omit things that are redundant: for example, if it's internal documentation for a codebase, there is little need to explicitly list out Doxygen or JSDoc style information, because they have access to the damned source code. External audiences may need certain terms clarified, or some things explained more carefully because they can't just read the source.

I'd say the biggest thing missing in the documentation efforts I've seen fail is an explanation of the overarching vision/cohesive architecture of the work. This is sometimes because there isn't a single vision, or because the person who has the vision gets bogged down in details that aren't helpful without a preexisting schema to hang them on. So, always always always have a high-level document that describes the general engineering problem the project solves, the main business constraints on that solution, and a rough sketch of how the problem is solved.

Ideally, the loss of the codebase should be less of a setback than the loss of the doc.

I will say that, as your documentation improves, you will hate your project more and more; this is the nature of the beast as you drag yourself through the broken shards of your team's engineering.

vacri 4 days ago 3 replies      
We used to use a MediaWiki wiki, which only I would edit. You kind of have to be comfortable with MediaWiki syntax (it does the job for everything but tables, which suck). So we moved to Confluence, which has a WYSIWYG editor, to encourage more people to document things, upload documents, and so on and so forth. Again, I am the only one editing it... so our documentation is "very occasionally write something down, and store it on your laptop or in your private Google Drive, then spend ages searching for it when someone asks".

So whenever a new staffer comes along, I get asked to give them wiki access... but I'm the only one who actually uses it (I'm the only ops staffer). Sure, have some wiki access, for all the good it will do you!

I really don't recommend our model :)

Anyway, this is an important point: documentation is not free. It takes time. Even shitty documentation takes time. If you want good documentation, you need to budget time away from other tasks. When I used to work in support, the field repair engineers would budget 30% of their hours for doing paperwork - not documentation specifically, but it clearly shows that 'writing stuff' is not something that springs as a natural/free parallel to other activity.

kenOfYugen 4 days ago 0 replies      
I enjoy Literate CoffeeScript and that's where I picked up the concept of Literate coding.

I believe that literate style of code writing has many benefits in any language.

Basically mix markdown with the codebase and export the documentation from the same file.
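The "tangle" half of that idea can be sketched in a few lines of Python (a hypothetical sketch, assuming the Literate CoffeeScript convention that four-space-indented lines are code and everything else is narrative Markdown):

```python
def tangle(literate_source):
    """Extract runnable code from a literate Markdown file.

    Lines indented four spaces are code; everything else is prose.
    The prose side needs no extraction: the original file already
    renders as Markdown, with the indented runs shown as code blocks.
    """
    return "\n".join(
        line[4:]
        for line in literate_source.splitlines()
        if line.startswith("    ")
    )
```

Real literate tools add niceties (chunk reordering, multiple output files), but this is the core of "one file, two outputs".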

For a very well executed and interactive example check out


someguydave 4 days ago 1 reply      
Our APIs are documented with comments that Sphinx uses to generate HTML documents. Unfortunately, all of our other documentation is written in Microsoft products because "that's what people use"
mixmastamyk 4 days ago 0 replies      
Sphinx or mkdocs:



Which make it easy to create HTML, PDF, ePub, LaTeX formats, etc.

I like to create a user guide, developer guide, and ops guide for each large project.

buremba 4 days ago 0 replies      
We use the Swagger specification (automatically generated from annotations in Java) and generate Slate documentation from it for API documentation (https://api.rakam.io/). We also use Markdown for general documentation (tutorials, technical notes, etc.) and render the Markdown files fetched from GitHub on the documentation page using JS. Since everything is dynamic, we don't need to worry about updating the documentation page: we just update the README files of the repositories, add documents to our documentation repository, and the documentation page is always up to date. (https://rakam.io/doc/)
NearAP 4 days ago 1 reply      
We have technical writers who work in conjunction with developers to author the documentation. I don't know what tool they use. However, since you say you want to get better at writing docs, let me offer some perspective as a user of documentation.

1) Write to all of your target audience. For example if your product is targeted at both technical and non-technical people, then write the documentation in such a way that non-technical folks can understand it. Don't just write for the technical people.

2) If possible, structure the documentation around several "how do I do XYZ?" tasks. My experience has been that people turn to documentation when they want to execute a specific task, and they search for those phrases.

3) As much as is possible, include examples. This tends to remove ambiguities.

MalcolmDiggs 2 days ago 0 replies      
We tend to thoroughly document our API (the backend behind our mobile apps and website) using ApiDocJs.com or Swagger.io/swagger-ui. Every service and method is explained in detail so the front-end folks have a reference to work from.

The rest of the systems are documented ad-hoc. Some readme files here and there, a large block of comments inside of confusing files, the occasional style guide, etc.

We also have an onboarding guide for new devs (just a PDF) which walks them through our systems, our tools, etc. Nothing fancy, about 10 pages.

nahtnam 4 days ago 0 replies      
Elixir has a great documentation system built in. I use that.
kakwa_ 3 days ago 0 replies      
At work, I've seen a variety of solutions, depending on the teams I work with:

* MS doc(x) on a network folder with an excel spreadsheet to keep track of docs (and a lot of ugly macros).

* MS doc(x) in a badly organized subversion repository (side note here, docs comments and revision mode are heavily used in those contexts, which is really annoying)

* rst + sphinx documentation in a repository to generate various outputs (html, odt, pdf...) depending on the client.

In some cases we also use Mako (a Python template engine) before Sphinx to instantiate the documentation for a specific platform (e.g. Windows, RedHat, Debian...), with just a few "if" conditions (Sphinx could do it in theory, but its support for this is quite buggy and limited).
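To illustrate that per-platform instantiation without pulling in Mako itself, here is a hypothetical stdlib-only sketch of the "% if" conditionals (real Mako syntax is much richer; this just shows the idea of one template source producing one doc per platform):

```python
def instantiate(template, platform):
    """Render a doc template for one platform.

    Lines between '% if platform == "x"' and '% endif' are kept only
    when they match the requested platform; all other lines pass
    through unchanged.
    """
    out, keep = [], True
    for line in template.splitlines():
        stripped = line.strip()
        if stripped.startswith("% if"):
            # Crude match: keep the block if the platform name
            # appears in the condition line.
            keep = platform in stripped
        elif stripped == "% endif":
            keep = True
        elif keep:
            out.append(line)
    return "\n".join(out)
```

Each platform build then runs this over the shared sources before handing the result to Sphinx.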

I've also put in place a continuous build system (just an ugly shell script) rebuilding the Sphinx HTML version on every commit (it's our "badly implemented readthedocs.org", but it's good enough for our needs).

In other cases we use specification tools like PowerAMC or Eclipse/EMF/CDO based solutions, the specification tool in that case works on a model, and can generate various outputs (docx, pdf, rtf, html...).

At home, for my personal projects, I use rst + sphinx + readthedocs, or if the documentation is simple, just a simple README.md at the root of my repository.

As a personal opinion, I like to keep the documentation close to the code, but not too close.

For example, I find it really annoying when the sole documentation is Doxygen (or equivalent). It's necessary to document each public method/attribute individually, but it's not sufficient: in most cases you also need "bigger picture" documentation on how the pieces work together (software and system architecture).

On the other hand, keeping the documentation away from the code (in a wiki, or worse) doesn't work that well either; it's nearly a guarantee that the documentation will soon be out of date, if it isn't already.

I found having a doc directory in the source code repository a nice middle ground.

I found wikis annoying in most cases: rarely up to date, badly organized, and difficult to version coherently and properly (e.g. keeping a version of the docs that matches the software version).

drygh 4 days ago 0 replies      
At Ionic, we use Dgeni (https://github.com/angular/dgeni) for API docs. We have a few custom build tasks that allow us to version the API docs.

We also have higher level documentation, which is meant to serve as a sort of conceptual overview of the framework, as well as to show what the framework comes with out of the box. This section is written mostly in kramdown, which gets parsed by jekyll before it's turned into HTML.

Tharkun 4 days ago 0 replies      
Most of our documentation attention goes towards the user manual and the system operator manual.

We generate the bulk of those manuals based on our object model, which is liberally sprinkled with (text only) descriptions. We've created a simple XML-based authoring framework which allows us to create pretty tidy documentation. Including images, tables, code examples etc.

We run that XML through Apache FOP. At the end of the process, we're left with a bunch of tidy PDF manuals in a variety of languages.

gravypod 4 days ago 0 replies      
The thing that has always guided me right is that you need to a) split up functions, b) document method headers in every case with a short description of what they do, and finally c) come back one month later and rewrite any documentation that does not make sense.

This is the most important step. If you cannot remember it from a blank slate, then no one can. Keep doing that until you understand the code at first glance. Then your code will be easy for anyone to maintain.
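As a hypothetical illustration of point (b), here is what a header that survives the one-month blank-slate test might look like (the function and its name are made up for the example):

```python
def normalize_email(raw):
    """Lowercase and strip an e-mail address so lookups are consistent.

    One short sentence saying *what* the function does is usually
    enough for a reader starting from a blank slate; the *how* is in
    the code, and anything surprising gets its own inline comment.
    """
    return raw.strip().lower()
```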

scottlocklin 4 days ago 2 replies      
LaTeX. We have academic roots, it works with source control, and the output looks fantastico.
mixedCase 4 days ago 1 reply      
A markdown-based wiki under version control, plus code comments. Everything else likely isn't worth documenting or just merits direct person-to-person communication.
tamersalama 4 days ago 1 reply      
This question is on my mind too. My clients' documentation is usually a mix of MS Word and Visio. Lots of repetition and gunk in between.

Ideally, I'd love to find a mechanism that:

- Provides the OO principles in documents: encapsulation, abstraction, polymorphism, inheritance.
- Accessible & maintainable by non-techies.
- Allows scripting (I toyed with PlantUML, but it was a bit rigid).

afarrell 4 days ago 1 reply      
Not on a team, but I used mkdocs for this tutorial I built, then added a comment system I built with React.js: https://amfarrell.com/saltstack-from-scratch/ The advantage of mkdocs is that it's Markdown-based, so it's super easy to get started.
hooliganpete 3 days ago 0 replies      
I work at a very large company so you won't be surprised to hear we use a variety of tools and there's often overlap. Almost everything goes to Confluence (our program wiki) including tech specs and marketing documentation. The product team often uses something simple, such as Quip to store and collaborate on their docs. Marketing tends to migrate toward Drive. I think the best advice I can offer is to keep one "source of truth". This isn't too difficult when your team is small but as you start to grow, having one place devs, marketing, sales can go really helps streamline things.
acesubido 4 days ago 1 reply      
Gitbook for Technical Documents, Google Drive for everything else.
ddasayon 4 days ago 1 reply      
We write the docs as Markdown files and then use Doxygen to compile them to HTML and LaTeX for the traditional folks who MUST have a printable document. The Markdown files are tracked in Git so we can collaborate and track changes easily.
darkFunction 4 days ago 1 reply      
Bitbucket's wiki on our project page (6-person startup). We document mostly application behaviour for technical users of the app (server team, content writers) and a little bit of architecture if the complexity warrants it.
girzel 4 days ago 0 replies      
The Texinfo format, using the in-Emacs Info browser. Yes, it means you only read your documentation inside Emacs, but it's hands-down the best doc-browsing experience I've ever had. Hell to write, butter to read.
tmaly 3 days ago 0 replies      
This is a problem I am struggling with right now.

I have a CVS repository of PDF and Word docs.

The business side uses the docx format, so using Markdown and generating docx is not really feasible. I have run into issues with people changing the filename, which creates a new entry in version control. I have an idea I plan to implement to fix this.

What I would really like is some Linux tool that makes it easy to pull the text out of docx files and make it searchable: something that runs on the command line and does not have a ton of dependencies.
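For what it's worth, a docx file is just a zip archive with the body text in word/document.xml, so a low-dependency extractor is possible with the Python stdlib alone. A hedged sketch (it ignores headers, footers, and table structure, which real converters handle):

```python
import re
import zipfile
from html import unescape

def docx_to_text(path_or_file):
    """Pull plain text out of a .docx with no third-party dependencies.

    A .docx is a zip archive; paragraphs live in word/document.xml as
    <w:p> elements whose text runs are <w:t> elements.
    """
    with zipfile.ZipFile(path_or_file) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    paragraphs = []
    for chunk in xml.split("</w:p>"):  # one chunk per paragraph
        runs = re.findall(r"<w:t[^>]*>([^<]*)</w:t>", chunk)
        if runs:
            paragraphs.append(unescape("".join(runs)))
    return "\n".join(paragraphs)
```

Pipe the output into grep (or a full-text indexer) and the "searchable" half follows for free.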

rusbus 4 days ago 1 reply      
Shameless plug: I'm working on a documentation solution for dev teams. You can sign up for the beta at http://docily.com/
davidjnelson 3 days ago 0 replies      
The most valuable docs for me are REST API contracts stored in Confluence. Easy to collaborate on. Also getting-started guides in Confluence for new hires, and architectural diagrams, again in Confluence, for cross-team collaboration/understanding/discussion.

As for code, auto-generated docs from JSDoc etc. headers are fine, but honestly I never use them. I find unit tests to be the ultimate code-level documentation.

DannoHung 4 days ago 2 replies      
Related: what's the right way to extract inline comments about function APIs from source code?

This seems like something that is a really good idea, but is hard to find any projects for it.

BooneJS 2 days ago 0 replies      
Adobe FrameMaker and Microsoft Visio, stored in Perforce.

Beautiful documents, but it takes a decent chunk of time to create. We do extract some docs via XML to generate code, somewhat backwards from how most engineers merge docs and code.

quasiben 4 days ago 0 replies      
All of the folks at Continuum Analytics use Sphinx and readthedocs.
mbrock 4 days ago 1 reply      
We barely have any documentation except some READMEs that are mostly terse and still poorly maintained... If you don't understand something, you ask someone.
gault8121 4 days ago 0 replies      
HN, we are writing our high-level overviews as README.md files. Any ideas on how we could condense this info for open source contributors?
adnanh 4 days ago 0 replies      
We wrote our own custom documentation generator for Grape (Ruby), something like Swagger, but less rigid.
irixusr 3 days ago 0 replies      
I work for the government right now.

I'm trying to gather a community of git supporters to push for git.

However, after three months I still haven't gotten a computer suitable for my job.

arisAlexis 4 days ago 3 replies      
My boss decided to use Framemaker with DITA in 2016..
zolokar 4 days ago 0 replies      
A combination of Github Wikis and a Dozuki site.
barile 2 days ago 0 replies      
swagger.io for the APIs, plus a README.md in each service's repo.
adityar 3 days ago 0 replies      
Ask HN: Monetizing Streaming Movie Search
2 points by willholloway  15 hours ago   2 comments top 2
kennyfrc 6 hours ago 0 replies      
It's a search service, so it's best to look at existing/similar services for inspiration, like Google and Yelp.

So that means allocating ad placements, either AdSense or auction-style.

_RPM 13 hours ago 0 replies      
AdSense? Knowing what movies the user likes, you could probably relate that to products that people are selling.
Tell HN: HN and Slack Office Hours with YC Partners this Friday
40 points by kevin  4 days ago   7 comments top 2
kevin 4 days ago 1 reply      
Jared will also be doing open office hours on Slack from 2-4pm PT on Friday (Feb 26). If you'd like help with your startup, but want your questions answered in a private setting, sign up here by end of day on Feb 23:


minimaxir 4 days ago 1 reply      
"Tell HNs" no longer appear on the HN front page, which seems unintentional given announcements such as this one.
Ask HN: How do you measure risk with an open source project?
8 points by bazMVP  1 day ago   10 comments top 5
kazinator 1 day ago 2 replies      
* I look at the code and determine, subjectively, whether it was written by first-rate developers or monkeys. I consider this from various angles, ranging from the overall program organization to the details of how the programming language is used. If I spot bugs in this inspection, I will skip the project and look for something else. In particular, anything that is a security flaw or could cause a crash is an instant deal-breaker. Not because everyone should be perfect and write error-free code the first time, but because I was able to find it just by casually looking, whereas the maintainers have been working with that code for months and are blind to it; that erodes my confidence in the developers.

* I will look for a regression test suite: how extensive is it? If you don't see any tests, that's a big warning sign.

* If you don't see any documentation, another warning sign: the behavior is not specified and could change.

* Who uses it? If the code is reasonably widely used, that de-risks it for you quite a bit, particularly if the existing uses resemble your intended use. They trod through the code paths before you and uncovered the bugs.

* Lastly: if need be, can I just maintain this myself? How easily forkable is the thing? This point can overcome some other issues. If some code is 95% of the way there, and is organized in a good way that I can take it the last 5%, I might just do it.

Gratsby 1 day ago 0 replies      
If there's an IRC channel, mailing list, issue tracker, etc., I have a look at it. Active communities are bonus points regardless of the bug list.

I also look at how friendly the project is to pull requests and outside development. If a bug is important to me, I will spend the time to code a fix, but if there's no hope for getting any changes made, that represents a large risk to me. It's not a bad thing if the team in charge pushes back for higher quality code, code style, or solutions practical for wider audiences.

If there's continuous integration in place with automated testing and static code analysis, that's fantastic. It's not a deal-breaker if there isn't, but having it in place is a good sign.

Depending on the project, I may have a look at the source itself. I certainly don't look at the source of every application I use.

I have found that online recommendations in developer communities are not always good. More than once I've tried out projects based on people evangelizing them only to find out that they are pretty far from acceptable.

There are differences in how I judge things that will be customer facing or that I'm going to have to support operationally. I'm a lot more critical at that point, but my top 3 points are:

1. Can I put it in a container?
2. Are the developers committed to long-term support?
3. Will they give me a t-shirt?

cweagans 1 day ago 1 reply      
I evaluate it the same way that I evaluate any other code that I'm bringing into our project: is it robust, extensible, free of obvious errors, covered by comprehensive unit/functional tests, etc.

I would say that "project is open 6 months" is a pretty poor metric, because 6 months is a lot longer than the handful of days that your custom code will have existed when you add it instead.

thealistra 1 day ago 0 replies      
0. Is the README in good shape, and is the API sane?
1. Last commit within the last month.
2. Number of stars, relative to the complexity of the lib.
twunde 1 day ago 0 replies      
When was the last time the code was updated? How many people have contributed to it? Really, I'm looking to see whether the project is being actively maintained.
Ask HN: Should each of your products register as their own business?
22 points by itsthisjustin  2 days ago   6 comments top 2
patio11 2 days ago 2 replies      
Define "normal." I have three LLCs (four if you count one in Japan for purposes of being able to pay myself on payroll now that Starfighter exists); that's probably on the high end among most of my peers. Most small software companies have a single entity and only choose to spin out when a new product becomes a truly independent operational unit, when it receives investment, or (for branding purposes) if it ends up eating the business that spawned it.

Reasons to segregate:

1) The single biggest one is that it firewalls the liability of the businesses from each other. Whether this is important or not for you depends on what the businesses are doing: if it's Regular Internet Stuff then your E&O policy is probably good enough in terms of risk mitigation, but if 1+ of your products are in highly regulated spaces (hello HIPAA, finance, etc) then putting them in their own LLC isn't a crazy solution.

2) If you're religious about doing not just the paper ownership but the business accounts separately for each business, that makes eventually selling or otherwise disposing of them much, much easier. Otherwise you're looking at weeks of work and/or very fun professional services bills when you decide to do the division later.

3) If you have co-founders or investors, or the prospect of getting co-founders or investors, separate legal entities are going to be pretty much required. You don't want them to accidentally get ownership of your side projects; they don't want to own your side projects (ownership is a risk; they know the risks they're signing up for and don't want additional sources of uncontrolled unknown risk).

4) A minor factor, but there is non-zero social friction involved in "We've been talking about my trading name of $FOO but remember that the invoice/contract/etc will be from $BAR, LLC."

Reasons to not segregate:

1) It's a lot of extra work.

2) There's a running cost to keeping an LLC open, both the yearly fees and the operational complexity of maintaining separate books, accounts at various providers, and (if you're doing things in a complicated fashion) keeping up appearances with regards to the LLCs being formally separate from each other.

As an ex-consultant with some accidental knowledge of the payments space: I would be doing double-plus firewalling between any payments startup and anything I'd lose sleep about losing, and I would be happily writing a sizable check right about now to a lawyer rather than taking HN's advice about my compliance obligations and potential sources of risk.

mesozoic 2 days ago 1 reply      
I wouldn't worry about it until you have assets in one entity to protect by having separate LLCs
Ask HN: Best Object-Oriented Programming Book
10 points by Kinnard  2 days ago   5 comments top 4
vram22 19 hours ago 0 replies      
The Object Primer by Scott Ambler (IIRC). I read it years ago but got some insights from it.



Note that that is the 3rd edition. I read an earlier edition. He lists the differences between the 2nd and 3rd.

ruraljuror 15 hours ago 0 replies      
I am relatively new to this myself, but at my work there is a lot of discussion of the SOLID principles: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
paulroest 1 day ago 0 replies      
I would highly recommend Eric Evans' Domain-Driven Design as the second object-oriented book you read. Most any primer on OO will give you the foundational ideas, but Eric's book makes the knowledge useful and has been repeatedly called OO done right.

ISBN-13: 978-0321125217
ISBN-10: 0321125215

stevenspasbo 2 days ago 1 reply      
Check out Head First Object-Oriented Analysis and Design. It's pretty basic, but it would be a good intro if you're brand new to OO.
Ask HN: How are you tracking your tasks?
6 points by Gratsby  1 day ago   11 comments top 8
dalerus 5 hours ago 0 replies      
I use Nozbe to dump all my todos into, via email or in the app, then organize them every morning. It's based on the GTD concept but can be customized to your workflow.

I needed an easy place to dump everything, as work uses Trello and my startup uses Basecamp.

OrionSeven 1 day ago 1 reply      
I use Trello.com (and have for about 4 years) for tracking all of my work tasks (from coding tasks to making sure I reply to someone, to misc things and more). If you're not familiar with Trello, think of an online kanban board. They have a great web app, plus iOS and Android apps.

I have Trello organized with the following boards (from left to right on the screen):

"Things to Do" - This is really my inbox. While I tend to put cards in the list they belong in, if I just don't know, I place them here. About once a month I go through it to make sure nothing has fallen off my radar that shouldn't have, or, more importantly, to see if I can just delete something because it doesn't matter.

"Priority Tasks" - These are bigger tasks that I know are things that need to get done as workflow permits.

"Doing" - What I'm currently working on. Usually 5-10 items depending on dependencies.

"Dated Boards" - At the start of each week I create a new board with just the date as its title. All tasks that I complete that week go onto that board. For really long tasks I may copy a card and keep it in "Doing" but put a copy in that week's board.

Trello also has card labels, think colored flags to identify things quickly. I always have the following labels:

Red - Critical, e.g. an emergency task that takes priority over all things.

Orange - Urgent; can wait, but not long.

Yellow - Time sensitive; I don't always use the date feature on Trello cards, so I use the yellow label for things that need to be done by a set date.

Blue - Big tasks or big wins; I want to be able to find the past weeks when I had a big win.

Green - Interdepartmental dependency; either someone else needs this from me or I need something from someone else.

At my current job I can look back nearly three years at what I did week to week. I can quickly search to see when something was done, or just browse. But more importantly, it's easy to make a Trello card, and once it's in Trello I can easily organize my time and tasks.

kennyfrc 1 day ago 0 replies      
Same as you, I tried various to-do list apps before, but none of them worked out well for me.

Then I realized that the process of breaking down workload and personal life into small tasks does not fit well in a to-do list app context.

What worked for me was using something like BusyCal, which is a souped-up calendar. It was more aligned with the 'breaking down' process because I used the 'banner' feature to help me define my weekly goals then I proceed with tasks assigned with specific chunks of time.

I then track the planned vs actual time spent using letsfreckle.com, which is a time tracking tool.

For the higher-level stuff (i.e. monthly goals, yearly goals), I just keep them in Trello.

As for habits, I have a weekly review to check where I am vs monthly goals and I also use that review to allocate tasks into my calendar.

nvbhargava 1 day ago 0 replies      
I use Todoist to keep track of my tasks. But I started using toggl (toggl.com) recently to keep track of how long I spend on a single task, and I started becoming more productive because of this.
skylark 1 day ago 0 replies      
Todoist is fantastic. Once you get the hang of their natural language date parser, entering both one-off and repeated tasks becomes extremely fast.
tmaly 1 day ago 1 reply      
Make a list on some app or on a paper pad.

I use Google Keep for simple todo lists like grocery or things I have to do for short duration.

For bigger long term projects I like Trello

kiloreux 1 day ago 0 replies      
Todoist. It has been awesome and very helpful so far; it exists on every platform and has a very simple, slick UI.
MalcolmDiggs 1 day ago 0 replies      
I put most things in Asana. Except groceries; for those I use Evernote (just a list with checkboxes).
Ask HN: What's your next career move?
23 points by mgadams3  1 day ago   15 comments top 9
quickpost 4 hours ago 0 replies      
Location: Denver

Title: Freelance Software Developer

Years Exp: ~12

Been doing freelance development for the last 5+ years and have enjoyed the flexibility and freedom of being an indie dev. Started off with aspirations of building my own SaaS on the side and bootstrapping with consulting, but got sucked in by nice hourly rates and a cushy lifestyle. Can't complain too much! Now I'm getting the itch to do something new. Trying to decide between:

1. Trying another startup as a founder or very near to it - before freelancing I worked at a few startups, one of which was successful (as an employee).

2. Re-joining corporate America to see if I can make a big impact with a relatively high and predictable income + benefits. Eyeing health care IT as one potential area of impact where there's seemingly lots of opportunity and change right now due to health care reform (and it's a distinct area of interest beyond coding).

justinlaing 1 day ago 1 reply      
Location: Santa Cruz, CA

Title: Sabbatical from startup life after selling my company and completing contract.

Years of SE Experience: 18

Next Move: Building another software startup. Go big. Apply the lessons learned from my previous company, while trying something in a new space. Get back to coding every day after being an owner/manager for years. Find other awesome people to work with, as in my experience this is the single biggest factor in happiness at work.

willholloway 16 hours ago 0 replies      
Location: A tiny seaside hamlet in the northeast megalopolis

Years SE Experience: 7

Next Move: Use my extensive experience in conversion optimization and web performance optimization to increase profits for clients. This cash will fund the next phase of my open source projects.

dvainsencher 8 hours ago 0 replies      
Location: Princeton, US

Title: ML postdoc, formerly software engineer

Years Exp: >5, depends how you count

Next move: Finding a job in industry (tech or finance in NYC or Bay area), solving hard problems using ML, so I can wrestle with the world instead of with reviewers, and be part of a team again.

japhyr 1 day ago 0 replies      
Location: Southeast Alaska

My father was a software engineer at DEC in the 70's, so I learned to program at 5 on a kit computer. I've been a hobbyist programmer all my life, but I got pulled into teaching. I've been teaching 6-12 grade math and science for 20 years.

Over the last four years I've started to build a second career in the programming world. I have a couple open source projects that have shown promise, and I recently published an introductory Python book that's doing pretty well.

Next Move: I'm excited about the possibilities, all of which are pretty appealing.

- Write more.

- Pick up development on two main open projects:

- introtopython.org | An open introduction to Python based on jupyter notebooks. Anyone familiar with notebooks can contribute a project.

- opencompetencies.org | An open platform for building education standards.

- Switch to teaching CS full time instead of math and science.

- Pick up more freelancing work.

eecks 22 hours ago 0 replies      
Location: Ireland

Title: Software Engineer

Years SE Experience: 2-3 years professionally

Next Move: I'm not too sure. I am keeping an eye on the job market to see if anything comes along that I feel would be a great move. I'd love to start a company but I don't think I'll do it yet.

gravypod 1 day ago 1 reply      
I'm currently at college to get proof that I know how to write software.

I've been programming since I was 12, so it's fun helping my friends through our CS classes.

My next move is to find a job... anywhere.

mindcrime 1 day ago 0 replies      
Location: Chapel Hill, North Carolina, USA

Title: Lead Consultant (Mammoth Data) / Founder/CEO (Fogbeam labs)

Years SE Experience: ~20

Next Move: (do X so that I can Y)

Continue building Fogbeam Labs so that I can eventually make that my full-time thing, hire employees, and build a real company. As part of that, we just started working on a new product that I'm really excited about. I don't want to say too much about it just yet, but it's going to be a gnarly Machine Learning / AI project.

MalcolmDiggs 1 day ago 0 replies      
Location: NYC

Title: Lead Engineer

Years SE Experience: ~12

Next move: Stop coding within a year. The gameplan is to lay solid technical foundations for our MVPs, then assemble a team of devs who are better than me, and get out of their way. I'm not going to retire by any means, I'd just like to move to a management/executive role.

Ask HN: How to handle 50GB of transaction data each day? (200GB during peak)
128 points by NietTim  3 days ago   77 comments top 34
ecaroth 3 days ago 1 reply      
Not an answer to your question, but just a quick note: this is the first post in a long while on HN where I appreciate both the problem you are looking to solve and the honesty/sincerity you have in saying that you are not perfectly qualified to solve it but know that those here can help. From all of us in the community watching and lurking, thanks for your candor so we can all learn from this thread!
haddr 3 days ago 4 replies      
First of all, 50GB per day is easy. Now, maybe contrary to what they say below, do the following:

* Don't use queues. Use logs, such as Apache Kafka for example. It is unlikely to lose any data, and in case of some failure, the log with transactions is still there for some time. Also Kafka guarantees order of messages, which might be important (or not).

* Understand what is the nature of data and what are the queries that are made later. This is crucial for properly modeling the storage system.

* Be careful with the NoSQL Kool-Aid. If mature databases such as PostgreSQL can't handle the load, choose a NoSQL store, but be careful. I would suggest HBase, but your mileage may vary.

* NoSQL DBs typically limit the queries you can issue, so the modelling part is very important.

* Don't index data that you don't need to query later.

* If your schema is relational, consider denormalization steps. Sometimes it is better to replicate some data than to keep a relational schema and make huge joins across tables.

* Don't use MongoDB

I hope it helps!
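The denormalization bullet above, in concrete terms. This is a toy illustration with made-up field names, not anything from the poster's system: the user's name is copied onto each event at write time so reads never need a join, at the cost of replicating data.

```python
# Normalized: every read must join events against users.
users = {1: {"name": "alice"}, 2: {"name": "bob"}}
events = [{"user_id": 1, "action": "click"},
          {"user_id": 2, "action": "view"}]

# Denormalized: copy the user's name onto each event at write time,
# so queries over events never touch the users table.
events_denorm = [dict(e, user_name=users[e["user_id"]]["name"])
                 for e in events]
```

The trade-off is exactly as the bullet says: storage and write-time work go up, but huge read-time joins disappear.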

mattbillenstein 3 days ago 5 replies      
First of all, ingest your data as .json.gz -- line delimited json that's gzipped -- chunk this by time range, perhaps hourly, on each box. Periodically upload these files to the cloud -- S3 or Google CloudStorage, or both for a backup. You can run this per-node, so it scales perfectly horizontally. And .json.gz is easy to work with -- looking for a particular event in the last hour? gunzip -c *.json.gz | grep '<id>' ...
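A minimal per-node writer for the hourly .json.gz chunking described above. The file-naming scheme and the `append_event` helper are my own assumptions for illustration, not the commenter's code; it relies on gzip members concatenating cleanly, so appending in binary mode still yields a valid .json.gz file.

```python
import gzip
import json
import time

def append_event(event, chunk_dir="."):
    """Append one event as a line of gzipped, line-delimited JSON.

    Files are chunked by hour (UTC). Appending in 'ab' mode creates
    a multi-member gzip stream, which gunzip and gzip.open both read
    back transparently as one stream.
    """
    hour = time.strftime("%Y-%m-%d-%H", time.gmtime())
    path = f"{chunk_dir}/events-{hour}.json.gz"
    line = json.dumps(event, separators=(",", ":")) + "\n"
    with gzip.open(path, "ab") as f:
        f.write(line.encode("utf-8"))
    return path
```

A periodic cron job can then ship the closed hourly files to S3/GCS, one uploader per node, which is what makes the scheme horizontally scalable.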

Most of the big data tools out there will work with data in this format -- BigQuery, Redshift, EMR. EMR can do batch processing against this data directly from S3 -- but may not be suitable for anything other than batch processing. BigQuery and/or Redshift are more targeted towards analytics workloads, but you could use them to feed the data into another system that you use for OLAP -- MySQL or Postgres probably.

BigQuery has a nice interface and it's a better hosted service than Redshift IMO. If you like that product, you can do streaming inserts in parallel to your gcs/s3 uploading process for more real-time access to the data. The web interface is not bad for casual exploration of terabytes of raw data. And the price isn't terrible either.

I've done some consulting in this space -- feel free to reach out if you'd like some free advice.

nunobrito 3 days ago 2 replies      
We need to handle data at a similar level to what you mention, and we also use plain text files as the only reliable medium to store data. A recent blog post: http://nunobrito1981.blogspot.de/2016/02/how-big-was-triplec...

My advice is to step away from AWS (because of price as you noted). Bare metal servers are the best startup friend for large data in regards to performance and storage. This way you avoid virtualized CPU or distributed file systems that are more of a bottleneck than advantage.

Look for GorillaServers at https://www.gorillaservers.com/

You get 40TB of storage with 8-16 cores per server, along with 30TB of bandwidth included, for roughly 200 USD/month.

This should remove the IOPS limitation and provide enough working space to transform the data. Hope this helps.

harel 3 days ago 3 replies      
Here are a few suggestions based on 6+ years in adtech (which have just come to a close; never again, thank you):

* Use a queue. RabbitMQ is quite good. Instead of writing to files, generate data/tasks on the queue and have them consumed by more than one client. The clients should handle inserting the data into the database. You can control the pipe by the number of clients you have consuming tasks, and/or by rate-limiting them. Break those queue-consuming clients into small pieces. It's OK to put item B on the queue while processing item A.

* If your data is more fluid and changes all the time, and/or if it comes in a JSON-serializable format, consider switching to PostgreSQL 9.4+ and use JSONB columns to store this data. You can index/query those columns, and performance-wise it's on par with (or surpasses) MongoDB.

* Avoid AWS at this stage. As someone commented here, bare metal is a better friend to you. You'll also know exactly how much you're paying each month; no surprises. I can't recommend SoftLayer enough.

* Don't overcomplicate things. If you can think of a simple solution to something, it's preferable to the complicated solution you might have had before.

* If you're going the queue route suggested above, you can pre-process the data as you receive it. If it's going to be placed into buckets, do it then; if it's normalised, do it then. The tasks on the queue should be atomic and idempotent. You can use something like memcached if you need your clients to communicate with each other (like checking whether a queue item is already being processed by another consumer and is thus locked).
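The idempotent-consumer pattern from the first bullet can be sketched as follows. This is a toy in-process stand-in, not the poster's setup: in production the queue would be RabbitMQ and the `processed` set a shared store such as memcached, and the function name is made up.

```python
import queue

def consume(tasks, handle, processed=None):
    """Drain a task queue, skipping already-seen task ids so duplicate
    deliveries don't cause duplicate work (idempotency)."""
    processed = set() if processed is None else processed
    results = []
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            break
        if task["id"] in processed:   # another consumer already took it
            continue
        processed.add(task["id"])     # "lock" the task before working on it
        results.append(handle(task))
    return results
```

Sharing `processed` between consumers is what the memcached suggestion above buys you; each individual `handle` call should still be atomic.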

TheIronYuppie 3 days ago 3 replies      
Disclaimer: I work at Google.

Have you looked at Google at all? Cloud Bigtable runs the whole of Google's Advertising Business and could scale per your requirements.


lazyjones 3 days ago 0 replies      
I'm not sure I understand precisely what kind of data you are processing and in what way, but it sounds like a PostgreSQL job on a beefy server (lots of RAM) with SSD storage. Postgres is very good at complex queries and concurrent write loads, and if you need to scale quickly beyond single-server setups, you can probably move your stuff to Amazon Redshift with little effort. Wouldn't recommend "big data", i.e. distributed setups, at that size yet unless your queries are extremely parallel workloads and you can pay the cost.

In my previous job we processed 100s of millions of row updates daily on a table with much contention and ~200G size and used a single PostgreSQL server with (now somewhat obsoleted by modern PCIe SSDs) TMS RamSAN storage, i.e. Fibre-Channel based Flash. We had some performance bottlenecks due to many indexes, triggers etc. but overall, live query performance was very good.

zengr 3 days ago 0 replies      
Doing real-time queries for report generation on data growing by 50GB per day is a hard problem.

Realistically, this is what I would do (I work on something very similar but not really in adtech space):

1. Load data in text form (assuming it sits in S3) inside hadoop (EMR/Spark)

2. Generate reports you need based on your data and cache them in mysql RDS.

3. Serve the pre-generated reports to your users. You can get creative here and generate bucketed reports so the user will feel it's more "interactive". This approach will take you a long way, and when you have time/money/people, maybe you can try getting fancier and better.

Getting fancy: If you truly want near-real-time querying capabilities, I would look at Apache Kylin or LinkedIn's Pinot. But I would stay away from those for now.

Bigtable: As someone pointed out, Bigtable is a good solution (although I haven't used it), but since you are in the AWS ecosystem, I would stick there.

wsh91 3 days ago 0 replies      
We're having a good time with Cassandra on AWS ingesting more than 200 GiB per day uncompressed. I don't know how you're running your IOPS numbers, but consider allocating large GP2 EBS volumes rather than PIOPS--you'll get a high baseline for not that much money. The provisos you'll see about knowing how you expect to read before you start writing are absolutely true. :)

(Hope that might be helpful! A bunch of us hang out on IRC at #cassandra if you're curious.)

yuanchuan 3 days ago 0 replies      
I once worked on a similar project. Each day, the amount of data coming in was about 5TB.

If your data are event data, e.g. user activity, clicks, etc., these are non-volatile data which you should preserve as-is; you'll want to enrich them later for analysis.

You can store these flat files in S3 and use EMR (Hive, Spark) to process them and store the results in Redshift. If your files are character-delimited, you can easily create a table definition with Hive/Spark and query them as if they were in an RDBMS. You can process your files in EMR using spot instances, and it can be as cheap as less than a dollar per hour.

mindcrash 2 days ago 0 replies      
You might want to read this (for free): http://book.mixu.net/distsys/single-page.html

And pay a little to read this book: http://www.amazon.com/Designing-Data-Intensive-Applications-...

And this one: http://www.amazon.com/Big-Data-Principles-practices-scalable...

Nathan Marz brought Apache Storm to the world, and Martin Kleppmann is pretty well known for his work on Kafka.

Both are very good books on building scalable data processing systems.

alexanderdaw 3 days ago 0 replies      
1. Stream your data into Kafka using flat JSON objects.

2. Consume your Kafka feeds using a Camus map-reduce job (a library from LinkedIn that will output HDFS directories with the data).

3. Transform the HDFS directories into usable folders for each vertical you're interested in; think of each output directory as an individual table or database.

4. Use Hive to create an "external table" that references the transformed directories. Ideally your transformation job will create merge-able hourly partition directories. Importantly, you will want to use the JSON SerDe for your Hive configuration.

5. Generate your reports using Hive queries.

This architecture will get you to massive, massive scale and is pretty resilient to spikes in traffic because of the Kafka buffer. I would avoid Mongo/MySQL like the plague in this case. A lot of designs focus on the real-time aspect for data like this, but if you take a hard look at what you really need, it's batch map-reduce on a massive scale and a dependable schedule with linear growth metrics. With an architecture like this deployed to AWS EMR (or even Kinesis/S3/EMR) you could grow for years. Forget about the trendy systems and go for the dependable tool chains for big data.

lafay 3 days ago 0 replies      
We faced a very similar problem when we started Kentik two years ago, except in our case the "transactions" are network traffic telemetry that we collect from our customers' physical network infrastructure, and providing super-fast ad hoc queries over that data is our core service offering.

We looked at just about every open source and commercial platform that we might use as a backend, and decided that none were appropriate, for scale, maturity, or fairness / scheduling. So we ended up building, from scratch, something that looks a bit like Google's Dremel / BigQuery, but runs on our own bare metal infrastructure. And then we put postgres on top of that using Foreign Data Wrappers so we could write standard SQL queries against it.

Some blog posts about the nuts and bolts you might find interesting:



If we were starting today, we might consider Apache Drill, although I haven't looked at the maturity and stability of that project recently.

asolove 3 days ago 1 reply      
Read "Designing data intensive applications" (http://dataintensive.net/), which is an excellent introduction to various techniques for solving data problems. It won't specifically tell you what to do, but will quickly acclimate you to available approaches and how to think about their trade offs.
jamiequint 3 days ago 0 replies      
Consider using CitusData to scale out Postgres horizontally. You can shard by time and get basically linear speedup based on the number of shards. It's extremely fast and will be open source in early Q2, I think. You can then put your Postgres instances on boxes with SSDs instead of paying for provisioned IOPS. Writes also scale mostly linearly.
pklausler 3 days ago 0 replies      
50GiB/day is less than a megabyte per second. Surely you wouldn't be bandwidth-limited on a real device, even consumer SSDs are in the 100-600 MiB/s range IIRC. Can you do anything to increase your bytes per IOP in your current environment if you're IOP-limited?
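The back-of-envelope rate above checks out: 50 GiB spread over 24 hours is a sustained write rate well under one MiB per second.

```python
# 50 GiB arriving over 24 hours, expressed as a sustained MiB/s rate
gib_per_day = 50
mib_per_s = gib_per_day * 1024 / (24 * 3600)  # ~0.59 MiB/s
```

So the parent's point stands: throughput is not the constraint; the cost is in small random writes (IOPS), which batching more bytes per operation addresses.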
exacube 3 days ago 0 replies      
If your data is growing at this rate (and you plan to keep it around), you'd want a distributed database that can scale to terabytes. But it might be overkill if you don't care about consistency (i.e., you don't need to read data "right away" after you write it):

If you just want reports (and are okay getting them in a matter of minutes), then you can continue storing the data in flat files and using Apache Hive/Pig-equivalent software (or whatever equivalent is hot right now; I'm out of date on this class of software).

If you want a really good out-of-box solution for storage + data processing, google cloud products might be a really good bet.

agnivade 3 days ago 0 replies      
Lots of good suggestions here. I won't say anything new, but I just wanted to stress the data ingestion part.

DO NOT write to text files and read them again. This is unnecessary disk IO and you will run into a lot of problems later on. Instead, have an agent which writes into Kafka (as everyone mentioned), preferably using protobuf.

Then have an aggregator which does the data extraction and analysis and puts them in some sort of storage. You can browse this thread to look for and decide what sort of storage is suitable for you.

mslot 3 days ago 0 replies      
disclaimer: I work for Citus Data

The bottleneck is usually not I/O, but computing aggregates over data that continuously gets updated. This is quite CPU intensive even for smaller data sizes.

You might want to consider PostgreSQL, with Citus to shard tables and parallelise queries across many PostgreSQL servers. There's another big advertising platform that I helped move from MySQL to PostgreSQL+Citus recently and they're pretty happy with it. They ingest several TB of data per day and a dashboard runs group-by queries, with 99.5% of queries taking under 1 second. The data are also rolled up into daily aggregates inside the database.

There are inherent limitations to any distributed database. That's why there are so many. In Citus, not every SQL query works on distributed tables, but since every server is PostgreSQL 9.5, you do have a lot of possibilities.

Looking at your username, are you based in the Netherlands by any chance? :)

Some pointers:

- How CloudFlare uses Citus: https://blog.cloudflare.com/scaling-out-postgresql-for-cloud...

- Overview of Citus: https://citus-conferences.s3.amazonaws.com/pgconf.ru-2016/Ci...

- Documentation: https://www.citusdata.com/documentation/citusdb-documentatio...

ljw1001 1 day ago 0 replies      
If the big issue is querying the data, consider Redshift (expensive) or a self-hosted column-store database. Data will need to be loaded in batches for this. Column stores will reduce IOPS through compression and selective data loading, and because they don't have persistent indexes.

To save IOPS on the early part of the process, consider using fast compression (LZ4 or Snappy) to compress the records before writing to the file system. This might cut your IOPS in half.

ermack 3 days ago 0 replies      
It's difficult to give an answer without understanding what data processing you want.

If you need to generate rich multi-dimensional reports, I recommend you create an ETL pipeline into a star-schema sharded database (a la OLAP).

Dimension normalization can sometimes dramatically reduce data volume; most dimensions can even fit into RAM.

Actually, 200GB per day is not that much in terms of throughput; you can manage it pretty well on a PostgreSQL cluster (with the help of pg_proxy). I think MySQL will also work OK.

Dedicated hardware will be cheaper than AWS RDS.

foxbarrington 3 days ago 0 replies      
Here's what I've done for ~200GB/day. Let's pretend you have server logs with URLs that tell you the referrer and whether the visit action was an impression or a conversion, and you want stats by "date", "referrer domain", "action":

* Logs are written to S3 (either ELB does this automatically, or you put them there)

* S3 can put a message into an SQS queue when a log file is added

* A "worker" (written in the language of your choice, running on EC2 or Lambda) pops the message off the queue, downloads the log, and "reduces" it into grouped counts. In this case a large log file would be "reduced" to lines where each line is [date, referrer domain, action, count] (e.g. [['2016-02-24', 'news.ycombinator.com', 'impression', 500], ['2016-02-24', 'news.ycombinator.com', 'conversion', 20], ...])

* The reduction can either be persisted in a DB that can handle further analysis, or you can reduce further first.
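The worker's "reduce" step above might look like the following sketch. The log format (one JSON object per line with 'date', 'referrer', and 'action' fields) and the function name are assumptions for illustration, not the commenter's actual code.

```python
import json
from collections import Counter

def reduce_log(lines):
    """Reduce raw log lines to grouped counts keyed by
    (date, referrer domain, action), one output row per group."""
    counts = Counter()
    for line in lines:
        rec = json.loads(line)
        counts[(rec["date"], rec["referrer"], rec["action"])] += 1
    # Emit rows in the [date, referrer, action, count] shape above.
    return [[d, ref, act, n] for (d, ref, act), n in sorted(counts.items())]
```

Because each worker only sees one log file and emits tiny grouped rows, the heavy lifting happens before anything touches the database, which is what keeps the downstream analysis cheap.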

stuartaxelowen 3 days ago 0 replies      
Check out LinkedIn's posts about log processing [0] and Apache Kafka. Handling data as streams of events lets you avoid spikey query-based processing, and helps you scale out horizontally. Partitioning lets you do joins, and you can still add databases as "materialized views" for query-ability. Add Secor to automatically write logs to S3 so you can feel secure in the face of data loss, and use replication of at least 3 in your Kafka topics. Also, start with instrumentation from DataDog or NewRelic from the start - it will show you the performance bottlenecks.

0: https://engineering.linkedin.com/distributed-systems/log-wha...

bio4m 3 days ago 0 replies      
If you're on a tight budget and IO is your main bottleneck, it may be easier to purchase a number of decent-spec desktop PCs with multiple SSDs in them. SSDs have really come down in price while performance and capacity have improved greatly; the same goes for RAM. (The assumption here is that time is less of a concern than cost at the moment, that you're not averse to doing some devops work, and that the processing you're talking about is some sort of batch processing and not realtime.) This way you can try a number of different strategies without blowing the bank on AWS instances (and worst case, you have a spare workstation).
libx 3 days ago 0 replies      
I would consider Unicage for your demands.

https://www.youtube.com/watch?v=h_C5GBblkH8

https://www.bsdcan.org/2013/schedule/attachments/244_Unicage...

It lets you get, insert, and update data in a simple way from a shell (modified for speed and ease of use), without all the fat of other mainstream (Java) solutions.

batmansmk 3 days ago 0 replies      
We love these projects at my company (Inovia Team). Your load is not that big. You won't make any big mistake stack-wise; you just have to pick something you have operated before in production at a smaller scale. MySQL, Postgres, MongoDB, or Redis will be totally fine. We have a training on how to insert 1M lines a second with off-the-shelf free open source tools (SQL and NoSQL). Ping us if you are interested in getting the slide deck.

Tip: focus on how to backup and restore first, the rest will be easy!

pentium10 3 days ago 0 replies      
Use BigQuery. Here is a nice presentation on how to get going, with some use cases that get you very familiar with the territory. I also offer consultation, so you can reach out. http://www.slideshare.net/martonkodok/complex-realtime-event...
nickpeterson 3 days ago 1 reply      
Does the database grow 50GB or is that the size of the text files?
i_don_t_know 3 days ago 0 replies      
I don't know what I'm talking about or what you need, but I hear kdb is popular in the financial industry because supposedly it can handle large amounts of real-time financial information. http://kx.com
jacques_chester 3 days ago 2 replies      
Compare pricing on RDS, if doing it yourself is hurting.

AWS also has Kinesis, which is deliberately intended to be a sort of event drain. Under the hood it uses S3 and they publish an API and an agent that handles all the retry / checkpoint logic for you.

hoodoof 3 days ago 0 replies      
I'd start by asking if you are solving the right problem.

Does the business really need exactly this? What is their actual goal? Are they aware of the effort and resources required to get this report?

coryrobinson42 3 days ago 0 replies      
I would highly recommend looking into Elasticsearch. Clustering and scalability are its strong points and can help you with your quest.
ninjakeyboard 3 days ago 0 replies      
Look at your current solution and check the run plan of your SQL. If your data is indexed correctly, it shouldn't be too bad to execute queries: 1M records is about 20 ops to search for a record by key.

If it's modelled in SQL, it's probably relational and normalized, so you'll be joining tables together. This balloons the complexity of querying the data pretty fast. Denormalizing data simplifies the problem, so see if you can get it into a K/V store instead of a relational database. Not saying relational isn't a fine solution; even if you keep it in MySQL, denormalizing will reduce the complexity of querying it.

Once you determine if you can denormalize, you can look at sharding the data so instead of having the data in one place, you have it in many places and the key of the record determines where to store and retrieve the data. Now you have the ability to scale your data horizontally across instances to divide the problem's complexity by n where n is the number of nodes.
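The "key determines where to store and retrieve" idea above can be sketched in a few lines. This is a toy hash-mod scheme with a made-up function name; real systems often use consistent hashing so that adding a node doesn't remap nearly every key.

```python
import hashlib

def shard_for(key, n_shards):
    """Map a record key to one of n_shards nodes. The key alone
    determines where the record lives, so both writes and reads
    can be routed without any central lookup."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards
```

Every client computes the same shard for the same key, which is what lets the data layer scale horizontally across n nodes.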

Unfortunately the network is not reliable so you suddenly have to worry about CAP theorem and what happens when nodes become unavailable so you'll start looking at replication and consistency across nodes and need to figure out with your problem domain what you can tolerate. Eg bank accounts have different consistency requirements than social media data where a stale read isn't a big deal.

Aphyr's Call Me Maybe series has reviews of many datastores in pathological conditions so read about your choice there before you go all in (assuming you do want to look at different stores.) Dynamo style DB's like riak are what I think of immediately but read around - this guy is a wizard. https://aphyr.com/tags/Jepsen

AWS has a notorious network so really think about those failure scenarios. Yes it's hard and the network isn't reliable. Dynamo DBs are cool though and fit the big problems you're looking at if you want to load and query it.

If you want to work with the data, then Apache Spark is worth looking at. You mention map-reduce, for instance. Spark is quick.

It's sort of hard because there isn't a lot of information about the problem domain, so I can only shoot in the dark. If you have strong consistency needs, or need to worry about concurrent state across data, that's a different problem than processing one record at a time without needing a consistent view of the data as a whole. For the latter you can just process the data via workers.

But think sharding to divide the problem across nodes, and denormalization, e.g. via key/value lookup, for simple runtime complexity. But start where you are: look at your database and make sure it's very well tuned for the queries you're making.

Do you even need to load it into a DB? You could distribute the load across clusters of workers if you have some source you can stream the data from; then you don't have to load and then query the data. It depends heavily on your domain problem. Good luck. I can email you to discuss if you want; I just don't want to post my email here. Data isn't so much where I hang out as much as processing lots of things concurrently in distributed systems is, so others who have gone through similar projects may have better ideas.

There are some cool papers like the Amazon Dynamo paper, and I read the Google Spanner paper the other day (more globally oriented, around locking and consistency). You can see how some of the big companies are formalizing their thinking by reading the papers in that space. Then there are implementations you can actually use, but you need to understand them a bit first, I think.


faizshah 3 days ago 0 replies      
Note: This is based on solutions I have been researching for a current project and I haven't used these in production.

Short answer: I think you're looking in the wrong direction. This problem isn't solved by a database but by a full data processing system like Hadoop, Spark, Flink (my pick), or Google Cloud Dataflow. I don't know what kind of stack you guys are using (imo the solution to this problem is best built leveraging Java), but I would say that you could benefit a lot from either using the Hadoop ecosystem or using Google Cloud's ecosystem. Since you say that you are not experienced with that volume of data, I recommend you go with Google Cloud's ecosystem; specifically, look at Google Dataflow, which supports autoscaling.

Long answer: To answer your question more directly, you have a bunch of data arriving that needs to be processed and stored every X minutes and needs to be available to be interactively analyzed or processed later in a report. This is a common task and is exactly why the hadoop ecosystem is so big right now.

The 'easy' way to solve this problem is by using Google Dataflow, a stream processing abstraction over Google Cloud that will let you set your X-minute window (or more complex windowing) and automatically scale your compute servers (and pay only for what you use, not what you reserve). For interactive queries they offer Google BigQuery, a robust SQL-based column database that lets you query your data in seconds and only charges you based on the columns you queried (if your data set is 1TB but the columns used in your query are only some short strings, they might only charge you for querying 5GB). As a replacement for your MySQL problems they also offer managed MySQL instances and their own Google Bigtable, which has many other useful features. Did I mention these services are integrated into an interactive IPython-notebook-style interface called Datalab and fully integrated with your Dataflow code?

This might all get a little expensive though (in terms of your cloud bill); the other solution is to do some harder work involving the Hadoop ecosystem. The problem of processing data every X minutes is called windowing in stream processing. Your problems are solved by using Apache Flink, a relatively easy and fast stream processing system that makes it simple to set up clusters as you scale your data processing. Flink will help you with your report generation and make it easy to handle processing this streaming data in a fast, robust, and fault-tolerant (that's a lot of buzzwords) fashion.
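Framework aside, the windowing idea is simple to state. Here is a toy pure-Python sketch of tumbling-window counting (hypothetical event shape; Flink's real API additionally handles event time, late data, and distribution across a cluster):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    of window_secs seconds and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Every timestamp in [0, window_secs) maps to window 0, and so on.
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "a"), (5, "b"), (61, "a"), (65, "a"), (130, "b")]
print(tumbling_window_counts(events, 60))
# → {0: {'a': 1, 'b': 1}, 60: {'a': 2}, 120: {'b': 1}}
```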

Please take a look at the Flink programming guide or the data-artisans training sessions on this topic. Note that the problem of doing SQL queries using Flink is not solved (yet); this feature is planned for release this year. However, Flink will solve all your data processing problems in terms of the cross-table reports and preprocessing for storage in a relational database or distributed filesystem.

For storing this data and making it available, you need to use something fast but just as robust as MySQL. The 'correct' solution at this time, if you are not using all the columns of your table, is a columnar solution. From Google's cloud you have BigQuery; from the open source ecosystem you have Drill, Kudu, Parquet, Impala, and many many more. You can also try using Postgres or RethinkDB for a full relational solution, or HDFS/QFS + Ignite + Flink from the Hadoop ecosystem.

For the problem of interactively working with your data, try using Apache Zeppelin (free, Scala required I think) or Databricks (paid but with lots of features, Spark only I think). Or take the results of your query from Flink or similar and interactively analyze those using Jupyter/IPython (the solution I use).

The short answer is: dust off your old Java textbooks. If you don't have a Java dev on your team and aren't planning on hiring one, the Google Dataflow solution is way easier and cheaper in terms of engineering. If you need help, I do need an internship ;)

If you want to look at all the possible solutions from the Hadoop ecosystem, look at: https://hadoopecosystemtable.github.io/

For google cloud ecosystem it's all there on their website.

Happy coding!

Oops, it seems I left out ingestion; I would use Kafka or Spring Reactor.

P.S The flink mailing list is very friendly, try asking this question there.

Chrome says login.live.com is a Deceptive site
9 points by whizzkid  2 days ago   2 comments top
sbose78 20 hours ago 1 reply      
Checked, seems fine to me.
Ask HN: Does HN move too fast for 'Ask HN'?
50 points by J-dawg  4 days ago   11 comments top 7
brudgers 4 days ago 0 replies      
My understanding is that "Ask HN" questions have a different "gravity" and sink more slowly. That said, I suspect that the average quality of an "Ask HN" question is not much better than the average non-spam submission...maybe worse since meta-discussions are fairly common and lead to dull comments like mine here.

Even non-meta questions can be rather lazy...I mean a couple of throw away sentences that don't provide much context suggest that it's probably not that important.

For example: https://news.ycombinator.com/item?id=11160872

Versus: https://news.ycombinator.com/item?id=11149361

While I don't think of "Ask HN" as StackOverflow, there's something to the response "What <code> have you tried?" and the idea that a two sentence question doesn't necessarily deserve a long detailed comprehensive answer.

brndn 22 hours ago 0 replies      
Ask HN is probably my favorite part of HN and I check it every time I visit. I wish it was more active. I love hearing the opinions of all of you smart people.
monroepe 4 days ago 1 reply      
While I agree they do get lost quickly, there is an "ask" link in the header. I check there every so often, but maybe I am in the minority.
27182818284 4 days ago 0 replies      
The overall quality of Ask HN questions is pretty hit or miss compared with other submissions in the News section. Oftentimes there are Ask HN questions with no context that border on spam, really aren't asking a question, or would be better served on Stack Overflow.

So I guess what I'm saying is that I'm not particularly surprised by its speed, because a lot of the stories submitted there deserve to decline quickly.

throwaway21816 4 days ago 1 reply      
Will this Ask HN become its own self fulfilling prophecy?
cremno 4 days ago 1 reply      
>One solution would be to give them their own separate 'new' page.

It's not exactly that but https://news.ycombinator.com/ask exists.

beamatronic 4 days ago 0 replies      
Yes, absolutely 100% yes.
Ask HN: What companies have/had good engineering blogs?
24 points by ambertch  3 days ago   19 comments top 19
abuchanan 15 hours ago 0 replies      
ThoughtWorks of course...https://www.thoughtworks.com/blogs
sumodirjo 3 days ago 0 replies      
Collection of engineering blogs : https://github.com/sumodirjo/engineering-blogs/
CiPHPerCoder 2 days ago 0 replies      
If you're into PHP programming, application security, and/or cryptography:


DustinLessard 2 days ago 0 replies      
Workiva Techblog https://techblog.workiva.com/ has become a favourite of mine recently.
kaizensoze 3 days ago 0 replies      

Edit: Too bad you can't use asterisks in HN comments...

whatismybrowser 3 days ago 0 replies      
Etsy's tech blog: https://codeascraft.com/ is excellent.

They got me on to monitoring EVERYTHING with statsd. Great stuff.

147 3 days ago 0 replies      
One of my favorite ones is Instagram's: http://instagram-engineering.tumblr.com/
kachhalimbu 3 days ago 0 replies      
Auth0 blog is pretty nice if you are into JavaScript and Security https://auth0.com/blog
perseusprime11 3 days ago 0 replies      
Netflix is a good one. But remember most of them won't get you anywhere if you want to learn about their architectures. They are mostly used as a recruiting tool.
Ask HN: Which tool do you use for API test automation?
6 points by Oras  2 days ago   3 comments top 3
MalcolmDiggs 1 day ago 0 replies      
For end-to-end tests I use SuperAgent (http://visionmedia.github.io/superagent/) or, more specifically, SuperTest (https://github.com/visionmedia/supertest), which has a suite of assertions built in.

It actually tests your API over HTTP, so you can tell it the url (localhost or production) of any api you wanna test. Use it in conjunction with any modern test framework and you'll likely get the request-times/performance metrics echo'd as part of the runner's output.

It's typically used more for javascript development, but there's no reason that it couldn't test an API written in PHP (or anything else), as long as you're comfortable writing your tests in javascript.
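The test-it-over-real-HTTP idea carries to any stack. Here's a minimal sketch in Python using only the standard library, with a throwaway local server standing in for the API under test (purely illustrative; SuperTest itself is a JavaScript tool):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    """A stand-in endpoint; in a real test this would be your actual API."""
    def do_GET(self):
        body = json.dumps({"ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port; the server runs in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The test speaks real HTTP, so the same code works against localhost or prod.
url = f"http://127.0.0.1:{server.server_port}/ping"
with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
    payload = json.loads(resp.read())

server.shutdown()
print(payload)  # → {'ok': True}
```

The point is the same one MalcolmDiggs makes: because the assertion happens over the wire, the language the API is written in doesn't matter.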

chrstphrhrt 18 hours ago 0 replies      

It lets you generate end-to-end tests from a spec file for APIs that are themselves generated from the same file. Same for client libraries and API docs, generate all the things :)

brudgers 1 day ago 0 replies      
What tools have already been considered?

Why don't these meet expectations?

Open Sourcing Mobile Libraries at 500px
8 points by JVillella  1 day ago   discuss
Ask HN: How do you track FOSS releases, changelogs?
3 points by andrewstuart2  1 day ago   3 comments top 2
mtmail 1 day ago 0 replies      
For nodejs you can start with https://www.npmjs.com/package/retire "is a tool for detecting use of vulnerable libraries"

For Ruby/Python/PHP/node you can use https://gemnasium.com/

I thought about creating something similar for Perl once. I'm sure there's still space in the market.

m_ke 1 day ago 1 reply      
Ask HN: Could browsers do the VirtualDOM React do?
4 points by lucio  2 days ago   discuss
Ask HN: Who will GitHub acquire?
15 points by curiousisgeorge  3 days ago   5 comments top 3
saysomethingnow 3 days ago 0 replies      
Probably a good place to start:


It's an incredibly sparse list though, given GitHub's popularity. Compare this to Atlassian's marketplace


which seems to include everybody, both big and small.

eugenekolo2 3 days ago 0 replies      
Bitbucket (all of Atlassian) UI designers are still living their IBM days. GitLab is the major competitor I see.
csense 3 days ago 1 reply      
Maybe what's going on is they're focusing their resources toward paid offerings for large businesses and away from small and FOSS projects.

The most valuable thing Github has is the enormous portfolio of FOSS projects and all the free eyeballs that come with it. Yes, it's not paid, but because its public offerings are so popular, most developers are familiar with it and that gives it a trusted inlet to basically every software organization in existence. After all, in a non-dysfunctional organization, the guy who decides what products to buy to support their developers should give far more weight to what those developers want, than what is said by a vendor salesman whose job it is to convince them to buy a particular product.

Which means it's a mistake to neglect their FOSS users -- it seems self-evident that the biggest, best, most cost-effective way for Github to sell its paid features is word-of-mouth from its free users. It's very hard for a competitor to replicate their enormous network effect, which is also what makes it so effective. Their underwhelming response to the "dear github" letter suggests that their upper management is blind to this. I think there's a serious possibility that, in the next five to ten years, their position will be as marginalized as, say, Sourceforge is today -- long ago it was the "gold standard" in hosting services, but today it's only used by barely-maintained projects who can't even scrape together the resources needed to change their code hosting provider.

Business co-founder making me uncomfortable
6 points by eruditely  2 days ago   30 comments top 10
jtfairbank 1 day ago 0 replies      
This is a whole ball of worms:

- Cofounders should have a roughly even equity split, with the min being no more than 10% less than the max (unless there is a significant factor like one founder investing 100k+ into the business at conception). This prevents resentment and keeps the conversations equal.

- You will have to get used to being in the press, it is necessary to be successful. If this makes you uncomfortable, then have the non-tech founder be the face of the company. But your name and company will be in the press.

- The non-tech stuff is his side of the business. 99% of the time it is best to let him handle this. You need to divide responsibility and then let go.

- If your company relates to the plight of the homeless, this seems fine even if it is a bit early. Otherwise this seems like a distraction / gimmick to gain publicity in a "look at what crazy startup founders have to do" kinda way.

- Stop caring what other people think. If / when you're successful you'll get even more people who think you suck and talk shit about you, your product, etc.

argonaut 1 day ago 1 reply      
I will be very harsh, but only because I think you need to take a step back and take some time to figure things out. The reason you need to go to school, and not do a startup, has nothing to do with your equity/pay situation.

You are simply not emotionally mature enough to be working at a startup.

The first thing that stood out to me is the judgmentality over such trivial matters.

Saying things like "i associate homelessness with lack of skills and i want to keep my identity small. i'm known for being well read" really reminds me of when I was really arrogant in high school and felt the need to maintain an internal (and wholly undeserved) sense of moral/intellectual superiority over my more popular (not surprisingly) classmates.

And then worrying about how your friends might perceive you because you're associated with him. These are not the signs of legitimate concern. These are the signs of insecurity. Similar to your fear that your parents would kick you out of the house if they read this post. This is not the sign of someone emotionally mature enough to be working full or part time at a startup.

Now, you could be totally right about there being too much focus on media attention. But a socially mature adult (and business partner) would be comfortable bringing those concerns directly to their cofounder, not to an internet forum where they can let their ruminations run wild.

The fact that you've also accepted an arrangement that is an absolutely terrible deal financially indicates a certain sense of naivete about employment, how startups work, and finances. This is not how unpaid internships work. The only reason to ever take an unpaid internship is if 1) it's at a highly prestigious company, or 2) it's in an industry where there are hundreds of overqualified applicants for every position.

Either get a proper paid tech internship so you can actually learn what it feels like to have a real job at a real company with real mentorship, surrounded by people smarter than you. Or take classes and study hard so you can get into a good school.

Spoom 1 day ago 2 replies      
You are getting taken advantage of. 10% equity while not taking any salary isn't fair in any circumstances I can fathom unless the company is already profitable.
JSeymourATL 1 day ago 1 reply      
> i have grounds to not want to be a part of this right? I want to keep our identity small.

As an equity partner in this start-up you now have a fiduciary interest in the Branding/Marketing/Public Image of the company.

You didn't mention your market/client space. Does the press attention only serve your older partner's ego or does it help convey your company's story to your market?

Before confronting your cofounder-- first seek to understand. Get the specific details of his media plans. Then consult with 1-2 branding experts, find one on Hourly Nerd > https://hourlynerd.com/your-matches/computer-software/grow-b...

A good chat with these guys should be the perfect sounding board whether or not the 'rags to riches' stunt is a sound idea.

auganov 1 day ago 0 replies      
Sounds like it's much more than the image issue. Just run, lol. I wish I had when I was in an analogous situation.

Laughable equity, co-founder you can't respect, already built-up resentment - can only get worse.

ddorian43 2 days ago 1 reply      
No salary and 10% ? Doesn't make sense to me.
dsr_ 2 days ago 1 reply      
If you're "an employee that was promoted and got 10%" but are not being paid a salary, it is entirely possible that you are not in a fair situation. My recommendation is that you concentrate on that: is this a partnership you want to be in?
ljw1001 1 day ago 1 reply      
If your business partner is homeless and 23 years older than you, is there any reason to believe he knows much about business?
sharemywin 1 day ago 1 reply      
You're doing 9k per month and you can't pay anyone? Where is the money going? And when do you expect to get paid a living wage? When would you be making a market wage?
eruditely 1 day ago 1 reply      
I should add we have like 96k a year in revenue but we're still not profitable. Does this change anything? 30-40% is set aside for investors.
Ask HN: Alpine Linux as a Desktop?
6 points by smoyer  2 days ago   2 comments top 2
jfkw 2 days ago 0 replies      
Somewhat off-topic:

Fellow longtime Gentoo and recent Alpine user here. I haven't encountered undue conflict burden from configuration file updates. Some projects do churn whitespace, etc., in configuration defaults files, which is unfortunate but not specific to any distro.

If an application supports a conf.d style override, I use that, containing only settings which differ from default.

Is there something inherent about Alpine packaging that handles local config differently?

emilburzo 1 day ago 0 replies      
As a former Gentoo-er: why not Archlinux?
Ask HN: Alpine Linux as a Desktop
6 points by smoyer  3 days ago   discuss
Ask HN: Am I getting hosed by a CEO?
10 points by JohnnyD10  2 days ago   18 comments top 16
nostrademons 2 days ago 0 replies      
Yes, and your CEO is clueless. That's not how equity or investment work (you don't "reserve shares" for an investor, new preferred shares are issued and everyone holding common gets diluted). Good investors want to see founders with roughly equal shares; a large discrepancy indicates that some founders are insufficiently incentivized and probably won't be around long. The best way to maintain majority control of the company is to be profitable - percentage ownership means nothing if you're about to go bankrupt and the only way to save the company is to negotiate a deal where the investors get control.

Stop hanging out with this loser, build up your technical skills so you have a track record of projects built from scratch that you can show people, and then partner with someone who'll actually value you like a partner.

patio11 2 days ago 0 replies      
Four year vesting and one year cliff is a market term for founders and employees in funded and pre-funding startups. Vesting over 3 years is totally reasonable in your situation.

3% though? Not in bounds. If you're the technical guy there on day one and you don't receive a market salary or reasonable facsimile thereof every two weeks, you're a technical co-founder whether you want to be or not. Your deal is exploitative. You will likely not successfully negotiate a non-exploitative deal. (Presumption should be 33%, not 30%, and very definitely not 10%. No difference between the cofounders is meaningful as T approaches 5 years from now; if there is a meaningful difference, that person probably shouldn't be a co-founder.)

Everyone in this conversation is a businessman. They've underbid for your services. I strongly advise turning in your two week's notice and washing your hands of this. Your equity, whether or not you've actually been issued it, is equity in a company run by operators who are not dealing fairly with their technical co-founder and who are either comfortable with lying or clueless. No fact which I've just recited would cause me to think "This equity is worth more than typical startup equity", which is worth nothing.

>> He tells me I'm too "9 to 5" security focused

This is a common line. It is just that: a line. It gets cynically deployed against engineers on a regular basis by people who are not willing or able to pay market wages. You should interpret this line as nothing other than "I am unwilling or unable to pay market wages" and act accordingly.

>> He keeps telling me that I need to trust him and that he'll reward people down the road based on their performance.

99% of the people told variants of this line will be screwed by the people offering this term. As a direct consequence, entrepreneurs who are presently cash-poor but who want to incentivize employees/partners/etc do not say this; they say variants of "Here's a third of the company" or "I will commit to consequential cash payments in writing contingent on us achieving milestones together."

"I will pay you a number amenable to me, at a time of my choosing, if and only if I feel like paying you" is not an offer.

I will close with the observation that now is among the best times in the history of ever to be an engineer capable of shipping projects. You have better options. Exploring them for two weeks ROFLstomps the value of continuing to work for this company.

JohnnyD10 1 day ago 1 reply      
Thank you everyone for your views. They confirm what I think I've known for a while, I just needed some experienced perspective to validate these thoughts. I think what's made me re-think many times is that I'm surrounded by 5 other people in this company, all of whom are about 10 years older than me except one, and they seem more than eager to step up and offer this guy their near full time allegiance unquestioningly. It makes me pause and say "am I not seeing something here? Am I just naturally more distrusting than these other guys I started with?" So I've slowed down and quadruple re-thought every time I was ready to walk, trying to see it from a different angle. Still, the nagging discomfort remains. Glad to know I'm not too off base.

Update: yesterday, CEO tells me he will offer the guy who started 2 weeks ago the same equity I have, because he's done "such a good job". He has been kicking it hard, it's true, and he does really well, but he's fresh, and the CEO's perspective is so skewed based on what's bright and shiny in front of him at the moment that he totally disregards the contribution of people working for him 7 months for free who got him to where he's at now.

The sad part is I think he knows this damn well, and his strategy well may be to operate a company on a string of trusting people who he will continue to cycle in as the disenchanted ones before them leave.

rmc 1 day ago 0 replies      
> He keeps telling me that I need to trust him and that he'll reward people down the road based on their performance.

I heard, and believed, that for years. I got nothing.

brianwawok 1 day ago 0 replies      
Cut your losses, leave, and make sure the deal makes sense next time BEFORE you do any work. I get offers every day from people that want me to do 90% of the work of their startup for no pay and 3%. Passssssssss.
JSeymourATL 1 day ago 0 replies      
> am I completely a chump?

It's human nature to trust people, until proven otherwise.

Your CEO might even have magnetic charm on par with Richard Branson or Steve Jobs. Their gift for words hit the right buttons. Who wouldn't want to go into partnership with those guys? Time reveals if they actually make good and deliver.

We should all think in terms of highest & best use of our time. Relative to proving your worth-- imagine what this position might have yielded had you been paid at contractor rates. You've invested your time in the CEO. Has he proven his worth back to you?

krmcclelland 1 day ago 0 replies      
Wow, Johnny! You are being taken advantage of. First, how would the company fare if you weren't there? Do you have any documentation (e.g. emails, texts, etc.) that alludes to what they promised you? If so, you may have an opportunity to visit with your attorney to see if there is a way to get back compensation.

Good luck, I hope everything works out.

petervandijck 1 day ago 0 replies      
"He'll reward people down the road based on their performance" -> you're being taken advantage of.
jacalata 2 days ago 0 replies      
You're not being treated like an employee, employees get paid. You're being treated like a useful idiot.
brudgers 1 day ago 0 replies      
Trusting other people is not a flaw in your character. The character flaw is in people who take advantage of that trust to your detriment.

Good luck.

ank_the_elder 2 days ago 0 replies      
Cut your losses, get a paid gig right away. Sounds very dodgy to me.
cjcenizal 2 days ago 0 replies      
I hope these comments give you the reassurance you need to assert yourself. Your partner isn't respecting you or your contribution. Sometimes you just have to walk.
chrstphrhrt 2 days ago 0 replies      
I've been chumpier than that. Get out while you can.
rmc 1 day ago 0 replies      
You're being screwed.
harryh 1 day ago 0 replies      
Just gonna say +1 to what everyone else has said here (particularly nostrademons & patio11). You are not getting a fair deal. You should leave.
eruditely 11 hours ago 0 replies      
Hey man I'm in roughly a similar situation, I didn't really know either and it just sort of happened. I was just excited to be doing real work, you know?

Here's to hoping what ever decision you make works out.

What rack-mountable multiple-ARM servers are there out there?
4 points by mikaelm  2 days ago   2 comments top 2
CyberFonic 2 days ago 0 replies      
Well you'd think that it wouldn't be that hard to create a blade server style system using Raspberry Pi Compute Modules. A 256 node, 1U server might just be possible. Of course, the power supply, cooling and LAN fabric would be an interesting challenge.

Rather than 64 bit and ECC RAM, you could have high redundancy on the module level. AFAIK Google do not use server grade systems, just lots of them in a failure tolerant configuration.

mikaelm 2 days ago 0 replies      
Ah just to be clear, I would like the individual SoC units to have as little firmware as possible, for the security of the computation.

An important part of the goal is to get an as "closed computational environment" as possible, where risk for BIOS/firmware infection by hacker is minimal.

So just CPU, ECC RAM, ethernet, and microSD (or USB) to boot off.

Ask HN: What is the best way to learn JavaScript for a beginner?
17 points by joshcox  2 days ago   14 comments top 12
santiagobasulto 2 days ago 0 replies      
Don't go after "learn javascript". Try to "learn to program" first. The only way to learn anything is BY DOING. You have to sit and code. Are you going to play great basketball just watching the NBA? NO! You have to go outside and play. In the world of coding that translates to: sit and code.

Online resources, there are many. Too many sometimes. Grab anything from codecadeamy, codeschool, or Tree House. But remember that's not the only thing you need to know.

If you check my profile, we do remote programming courses where people work together with a real teacher for 6 weeks. We offer scholarships (100% free).

Remember:
* Focus on learning programming. What's the scope of a variable? What's immutability? Etc.
* Practice a lot. Code as much as you can.
* Look for a group to work with.

maxblackwood 2 days ago 2 replies      
Eloquent Javascript. It's one of the best introductions to programming I've ever read. http://eloquentjavascript.net/
sebastianxx 2 days ago 0 replies      
Practice as much as you can. Write lots of code. Try to solve problems. Don't be afraid to reach out to people and ask questions. The best way to learn JavaScript and programming is by doing it. JUST DO IT! As for books, I'd recommend "Head First JavaScript"; it's great for beginners. If you're comfortable with screencasts, try the Learn JavaScript in 14 Days course on iLoveCoding and then move on to the lessons: https://ilovecoding.org/lessons.

If you get stuck, ask questions on Stack Overflow or Reddit. Good luck

random_coder 1 day ago 0 replies      
I find Mozilla's javascript guide[1] to be quite good for learning JS, if you have some programming experience.

1. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guid...

chatwinra 1 day ago 0 replies      
I'd recommend a 3 pronged approach:

1. Read specific Javascript stuff (I'd also recommend Eloquent Javascript as others have). Code school free courses can help with the basics too.

2. Try and find someone who you can speak to from time to time about Javascript stuff. They can introduce you more advanced stuff (like Grunt/Gulp), which even if you can't understand it all, helps expand your horizons and show you what to look at when you're comfortable with the simple stuff.

3. Read about programming in general, because the principles apply to any programming language. I'd recommend Code Complete by Steve McConnell. It's a bit daunting but full of great stuff to make you think about your approach to programming.

good luck!

brudgers 1 day ago 0 replies      
There is a lot of good advice elsewhere. I'd recommend Norvig's essay:


The complexity of programming ramps up really fast from "Hello World" toward professional tools and techniques due to a massive permutation space and in the case of JavaScript comprehensive instability in best practice.

Good luck

lollipop25 1 day ago 0 replies      
"just do it"

JavaScript (or any programming language, for that matter) cannot be learned by just reading. One must actually write in the language to get used to it, as well as gain muscle memory for the syntax. Do something, even something very simple, like a console-based game (which is essentially just a state machine), or create your own pub-sub library (which deals a lot with object and array manipulation), or recreate some data structure (binary tree, linked lists, rings, etc.). These simple exercises can go a long way.
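The pub-sub exercise is language-agnostic, and its core is tiny; here's one shape it can take, sketched in Python for brevity (the same structure translates directly to JavaScript with an object of arrays):

```python
class PubSub:
    """Tiny publish/subscribe hub: the beginner exercise mentioned
    above, which is mostly dictionary and list manipulation."""
    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered for this topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = PubSub()
received = []
bus.subscribe("news", received.append)
bus.publish("news", "hello")
bus.publish("sports", "ignored")  # no subscribers, silently dropped
print(received)  # → ['hello']
```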

These books help as well.

- JavaScript: The Definitive Guide

- High Performance JavaScript

davismwfl 1 day ago 0 replies      
In the past, I bought some team members access to codeschool.com. The price is really fair and they have screencasts, course syllabus and tasks that help you learn whatever you are taking from them. I was pretty impressed, it worked well for Javascript and HTML/css and I'd have no problem subscribing again if there was a need. IIRC, it was a monthly price but was fairly inexpensive and we only had it like 6 months and then cancelled because they had gone through the courses they wanted.
novicei 1 day ago 0 replies      
I recommend https://ilovecoding.org/ There are lots of awesome answers there, do read books and watch Javascript tutorials but don't just read or watch them. GO AND WRITE CODE. Don't procrastinate, just do it. You learn by doing it.
k__ 1 day ago 0 replies      
If you programmed in other languages before, I would recommend "JavaScript the Good Parts" and "Pro JavaScript Techniques". Both were about the language itself and how to avoid a few ugly edge cases.

Disclaimer: I already knew C, Java, PHP and VisualBasic before I started with JavaScript .

Ask HN: What is your experience with running Hacklang on production?
4 points by andreygrehov  2 days ago   discuss
Ask HN: Is there a free/fremium hash table in the cloud with simple HTTP access?
8 points by THRWAWA20160222  4 days ago   9 comments top 4
namtao 3 days ago 0 replies      
It does! I wanted exactly the same and couldn't find something simple enough, so I made it (last month):

Stord.io is a key/value store. This is often modelled as a hashmap or a dictionary in programming languages.

Under the hood, stord.io is powered by Redis, with a thin Python application wrapper based on Flask. Stord.io doesn't assume anything about your data, so use whatever nested schema you want!
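The core of a service like this is genuinely tiny. As a rough sketch (in JavaScript with an in-memory Map standing in for Redis; the `handle` interface and status codes are hypothetical, not stord.io's actual API):

```javascript
// In-memory key/value store with an HTTP-style dispatch function.
// A real service would back this with Redis and mount it on a web server.
const store = new Map();

// handle(method, key, body) -> { status, body }
function handle(method, key, body) {
  switch (method) {
    case 'GET':
      return store.has(key)
        ? { status: 200, body: store.get(key) }
        : { status: 404, body: null };
    case 'PUT':
      store.set(key, body);
      return { status: 200, body: body };
    case 'DELETE':
      store.delete(key);
      return { status: 204, body: null };
    default:
      return { status: 405, body: null };
  }
}

// Example: PUT then GET a value, as a client would over HTTP.
handle('PUT', 'greeting', 'hello');
console.log(handle('GET', 'greeting').body); // "hello"
```

The hard parts are everything around this core: auth, namespacing, durability, and rate limiting.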


Full disclosure: if this wasn't already clear, it's my project. I would LOVE feedback/feature suggestions.

bifrost 4 days ago 0 replies      
I have seen a couple of variants, but none of them have stuck around for long, since they ended up being used as command-and-control (C&C) infrastructure for botnets/malware/etc.

I think it would also be safe to assume there are collision problems in the unauthenticated ones...

xyzzy123 4 days ago 2 replies      
I'm not aware of any services with the simple API you're looking for (neat idea), but there are a lot of more complicated solutions.

What are the key/value durability requirements? (OK to drop values now and then, or does it need to keep them until the end of time?). Need backups? Do values expire, or do you have to expire them manually? Since you can't enumerate or search, how do you delete things? Allowed sizes of keys and values, between bytes and terabytes? How far should it scale? Shared namespace, or namespace per user? Do you need a latency guarantee? How low? Are you gonna use it for something important and need an SLA on the availability of the service as a whole?

A couple of "nearby" points in the solution space:

Amazon S3 is a KV store where the keys look like filenames and the values look like files. High durability, good scaling, pretty high latency. You could also obviously layer a KV store on top of ElastiCache or DynamoDB, which are going to have different properties.

Going low-level and implementing your own in, say, Go would probably be the most fun though :p

Hard to say if we could use a SaaS KV store at work without a lot more technical detail on the solution. I'm having a hard time thinking of an app where you'd want a KV store but not need a database or NoSQL store, which you could use instead.

mike255x 4 days ago 1 reply      
You can use any Redis service in the cloud. An example: https://redislabs.com/. If you are on Heroku, there are multiple Redis add-ons.
Ask HN: Why are there no glucose measurement sensors?
8 points by danielschonfeld  3 days ago   8 comments top 3
1123581321 3 days ago 1 reply      
There's a lot more activity around intercepting output from sensor-transmitter combos like Dexcom and building a better receiver, or looping the data into a pump. Take a look at this, for example: http://www.nightscout.info/

A DIY sensor needs to either be some kind of test strip or a needle. Both are a lot easier to just get through insurance than to mimic.

I'm sure you could perform the chemical reactions yourself, but you'll probably find that whatever you make will need to be replaced frequently. Meanwhile, the software, modified Android phones, etc. last a lot longer, so that's where the action is.

HarryHirsch 3 days ago 1 reply      
What? Glucose test strips have been around since the 1960s, and they all use the same principle: coulometry using the glucose oxidase/horseradish peroxidase system.

The challenge for the homebrewer is to build something traceable. When you measure blood sugar, the same sample should yield the same number - not only today but also ten years from now. Yes, Theranos is struggling with traceability, too.

Raed667 3 days ago 0 replies      
There are plenty of "connected" GLUCO-MONITORING systems. A simple Google search with those 3 keywords shows plenty of results.