As an H1B holder, while there is an employer tie-in at some level, I am free to change jobs. What I am not free to do is stop working. That fear certainly does indirectly result in less job mobility.
For example, opportunities like going to the Recurse Center, or taking a decently long sabbatical, are not available even with a 6-month emergency fund. The other restriction is the lack of freedom to pursue any other paid work: working on a startup idea during evenings or weekends, or writing a technical book for money, is not allowed.
There are certainly issues with the H1B system, e.g. the lottery, the low salary cap, multiple applications per person to game the system, etc., but employer stickiness is not one of them, at least for people who are qualified.
The major problem is the long green card wait for people born in China, and the almost impossibly long wait for people born in India. My friends who were not born in these two countries have been able to get a green card within about a year of getting an H1B, while I am looking at a 20-30 year wait.
AFAIK, the number considered is salary, so the bar is decently high, even for the Bay Area. There are some exceptions like Netflix, but a lot of companies pay the majority in equity, especially at higher compensation levels.
I don't know how easy or difficult it would be for companies to move from equity-based compensation to plain salary, though, if something like that were implemented.
It depends on the passion as well. Not all passions are equal, and not all of them translate easily into a successful career. Assuming you have multiple passions, my personal advice would be to pick the career that overlaps with one of your passions and maximizes the money/time ratio, so you get some job satisfaction while still having the resources to pursue your other passions.
For example, if you have a passion for, say, both writing and computer science, I would recommend getting a job in CS, since the money/time ratio is usually higher there.
I can see it being 'fun' in something like Scratch. I expected OP to reply with either some edutech product or some domain I hadn't heard of. It does, however, seem that OP is trolling, and I missed it.
I am quite curious about the domain where you use emojis in function names. That sounds fun, and also, from what I have seen of Unicode support, somewhat terrifying.
I work mostly on enterprise backends, so variable names are typically boring, and I have never heard of emojis being used in variable names.
Aren't a lot of so-called 'algorithmic problems' similar to brain teasers? If you have seen one before, you'll know it. Otherwise, there is not much chance of cracking it without an 'aha' moment, which may or may not come during the one hour allocated for an interview.
Weird tricks with bit manipulation, linked lists, arrays, hash problems, etc. are all standard in interviewing and were still in use, even at Google, as of 2 years ago, when I last interviewed there.
This works, though, because Google, and the industry in general, has a policy of leaning toward rejecting candidates rather than accepting them; hiring is very risk-averse. Candidates switch jobs frequently at the beginning of their careers, so there is a rotating pool of good candidates that companies can pick from.
Yes they are, and like you, I consider them exactly the same. Examples I used to ask during interviews were "traverse a matrix in inverse diagonals", "traverse a matrix in a circular way", or the typical "first duplicated number" (that one is on CodeFights and CodeWars).
Those are problems where you either know the "trick" or you don't.
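To make that concrete, here is a minimal sketch (in Python) of the kind of "trick" the "first duplicated number" problem hinges on, assuming, as the usual statement of the problem does, that every value lies in 1..n for an array of length n; the function name is mine:

    def first_duplicate(a):
        # Reuse the sign of a[value - 1] as a "seen" marker: O(n) time, O(1)
        # extra space, but it only works because values are guaranteed to be 1..n.
        for value in a:
            index = abs(value) - 1
            if a[index] < 0:
                return abs(value)   # second time we see this value
            a[index] = -a[index]    # mark this value as seen
        return -1                   # no duplicates

If you have seen the sign-marking idea before, it is a two-minute exercise; if you haven't, it is very hard to derive under interview pressure.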
The main problem with the majority of them is not the problem itself. Originally they are meant to reveal a candidate's problem-solving process. However, as they are passed among interviewers, they degrade into a binary "solved or not solved" check, because that is the path of least effort.
I once created a problem that I love to ask in my interviews: it uses binary search to get the total number of elements in a list that is only available through a "broken" API. A lot of people I interviewed told me that they loved the question. However, when some colleagues started adopting it, I realized that they were basically expecting the specific answer they knew... when the real value of the exercise is to work with interviewees to "solve a problem together".
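Not their exact question, but a minimal sketch of the shape of the exercise as I read it: a hypothetical fetch(index) stands in for the "broken" API, returning an element or raising IndexError past the end (the names and the error behavior are my assumptions):

    def exists(fetch, index):
        try:
            fetch(index)
            return True
        except IndexError:
            return False

    def hidden_length(fetch):
        # Find the number of elements using only out-of-range failures:
        # grow an upper bound exponentially, then binary search the boundary.
        if not exists(fetch, 0):
            return 0
        hi = 1
        while exists(fetch, hi):
            hi *= 2
        lo = hi // 2                 # lo is a valid index, hi is not
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if exists(fetch, mid):
                lo = mid
            else:
                hi = mid
        return lo + 1                # length = last valid index + 1

    # e.g. hidden_length([10, 20, 30].__getitem__) == 3

The interesting part is not the final code but watching how a candidate narrows down the boundary, which is exactly what gets lost once interviewers only check for the expected answer.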
I think this is also because no one really teaches you how to interview. Having gone through many interviews as a candidate doesn't, at least IMHO, qualify you to be on the other side of the table.
It is kind of assumed that at a certain level in your career you just know how to do it. For other skills like management, you at least have peers doing the work day in, day out, whom you can learn from. Interviews usually happen behind closed doors, though I have had some where one person was shadowing the interviewer.
Also, if you conduct a terrible interview, it's not really going to impact you. There is no performance report that will reflect it. There will be another candidate you do have a rapport with, or someone else will interview someone else.
As your anecdote about the binary search problem shows, not everyone is naturally good at being an interviewer.
But aside from having dedicated interviewers at big companies, or outsourcing interviews at startups, I don't really see a solution. And outsourcing, whether internal or external, comes with its own set of problems.
> I think this is also because no one really teaches you how to interview. Having gone through many interviews as a candidate doesn't, at least IMHO, qualify you to be on the other side of the table.
This is something that I have tried to improve in my teams in the following way. When an engineer is going to start interviewing people, he goes through the following process:
0. Before starting, we provide some guidance on what we look for in interviews and what we ask (a 1-hour meeting between the new wannabe interviewer and the manager).
1. First, he shadows an interview done by a Tech Lead or Sr. Engineer (those are the ones that generally do the interviews).
2. Then he does two interviews, where he asks the questions but is shadowed by a Tech Lead or Sr. Engineer. He gets feedback about his interviews, is asked to explain his own feedback, and we tune it to the team's expectations.
3. Finally he starts interviewing people on his own.
This has worked well enough for me, especially once we had a shared GDoc with a) the list of questions to ask (along with notes on what to look for and how to guide candidates) and b) a "competency matrix" (like http://sijinjoseph.com/programmer-competency-matrix/ ) tailored to what we were looking for.
Finally, one of the things I always emphasize to the people interviewing for my team is to remember how they felt during their own interviews, and to be aware that as the interviewer you ALWAYS have the upper hand. I hate being interviewed; I hate the feeling of it, and the fact that you have 60-90 minutes to demonstrate that you know whatever the company has chosen to ask you, regardless of all the other stuff you know that would be useful for the position but that they don't ask about. And the nerves.
Interesting approach to optimizing an algorithm that I haven't seen in any books (as the author also mentions).
Is there any comprehensive resource for these, or does everyone just fiddle around and stumble on them independently? I have always been curious about 'hacks' such as the fast inverse square root method. How does anyone figure these out in the first place?
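For reference, the fast inverse square root hack looks roughly like this when ported from the original C to Python; the struct round-trips stand in for the pointer casts, and this is only an illustration of the bit-level trick, not something you would use in practice:

    import struct

    def fast_inverse_sqrt(x):
        # Assumes x > 0. Reinterpret the float's bits as a 32-bit unsigned int.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        # The "magic" constant plus a shift gives a surprisingly good first
        # guess for 1/sqrt(x), exploiting the logarithm encoded in the
        # float's exponent bits.
        i = 0x5F3759DF - (i >> 1)
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        # One Newton-Raphson step refines the estimate.
        return y * (1.5 - 0.5 * x * y * y)

The answer to "how does anyone figure these out" seems to be a mix of deep familiarity with the float bit layout and a lot of empirical tuning of that constant.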
This is a similar problem to the "Build Something" advice given to new programmers. For a long time, I thought that meant "build something completely new and fascinating", and I would spend days trying to find something to build, see that examples of it already existed, and give up. Recently I have realized it doesn't really matter. Sublime Text had its first release in 2008, a time when I would have said the text editor market was saturated. Visual Studio Code was released even more recently and still gained significant market share. Now, will you write the next Sublime Text? Probably not, but that doesn't mean writing a new text editor, music player, heck, even a todo app, is a bad way to learn. And it'll be really hard to write Sublime without first writing a Notepad clone.
My main point here is that even if you are writing about the most mundane topic in CS, there is bound to be something new (a different command line flag, a different point of view, something old rediscovered) just by virtue of writing about it. Will it happen on your first attempt? Probably not.
You can start by writing about the most mundane or the most esoteric thing. Only with practice will you find a niche, and eventually there will be some gems.
Any examples of code for this approach? My guess is that you are implementing some kind of assembly-like approach with explicit saving of stack frames, but I am having a hard time imagining it being easier.
Not OP, but they probably mean something like the following toy problem (in C++):
    #include <cstddef>
    #include <vector>

    struct BinTree {
      long value;
      BinTree *left;
      BinTree *right;
    };

    long long RecursiveDFSSum(BinTree *node) {
      if (NULL == node) {
        return 0;
      }
      return (long long)node->value + RecursiveDFSSum(node->left) +
             RecursiveDFSSum(node->right);
    }

    long long IterativeDFSSum(BinTree *tree) {
      std::vector<BinTree *> custom_stack;  // explicit stack, lives on the heap
      custom_stack.push_back(tree);
      long long value = 0;
      while (!custom_stack.empty()) {
        BinTree *node = custom_stack.back();
        custom_stack.pop_back();
        if (NULL != node) {
          value += node->value;
          custom_stack.push_back(node->left);
          custom_stack.push_back(node->right);
        }
      }
      return value;
    }
I did not check that this actually compiles, but you get the point. Both ways give you the same solution, with the same time and space complexity, but the second one is not bound by the max call stack size (only by max heap size). Additionally, the second one is slightly more space efficient, since the recursive solution has to save an entire call frame onto the stack (e.g. stack pointer, return address) whereas the iterative solution stores just one pointer per stack item.
This is exactly right; thank you for providing the example. This is manual recursion in a heap allocated space.
If you were coding a DP problem, then custom_stack might have a fixed size you can pre-allocate, and it might also be 2 or 3-dimensional.
For some image-based recursion, your backtracking doesn't even need to store real pointers in the stack frame. For example, when I've written a flood fill, I can store the return pointer as a one-pixel offset in as little as 2 or 3 bits, depending on whether I include diagonal pixels (4-surround vs 8-surround).
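Not the parent's actual code, but a sketch of how I read that idea: a 4-connected flood fill where each filled pixel records only the direction it was entered from (2 bits of information, stored here in a full array for clarity), so backtracking needs no stack of coordinates at all. The grid layout and names are my assumptions.

    # dx, dy for a 4-connected fill, and the index of the opposite direction.
    DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    OPPOSITE = [1, 0, 3, 2]

    def flood_fill(grid, x, y, new_color):
        old_color = grid[y][x]
        if old_color == new_color:
            return
        h, w = len(grid), len(grid[0])
        came_from = [[None] * w for _ in range(h)]  # stand-in for a 2-bit field
        grid[y][x] = new_color
        came_from[y][x] = -1                        # sentinel for the start pixel
        while True:
            # Advance to any unfilled 4-neighbour of the current pixel.
            for d, (dx, dy) in enumerate(DIRS):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == old_color:
                    grid[ny][nx] = new_color
                    came_from[ny][nx] = OPPOSITE[d]  # direction pointing back
                    x, y = nx, ny
                    break
            else:
                # Nothing left to fill from here: walk back, or stop at the start.
                back = came_from[y][x]
                if back == -1:
                    return
                dx, dy = DIRS[back]
                x, y = x + dx, y + dy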
Thanks! I was getting confused between the stack data structure, the function call stack, and stack space, and had ended up with a weird mix of them in my head. The example cleared it up.
I think the OP might be referring to allocating and managing your own stack. You could use a list in Python for this.
Python memory management is automatic, though, so discussing stack vs heap doesn't make sense in the context of Python, at least for CPython. I'm not sure about other Python implementations.
> Python memory management is automatic, though, so discussing stack vs heap doesn't make sense in the context of Python, at least for CPython. I'm not sure about other Python implementations.
I'm not sure I know what you mean about stack vs heap not making sense because of Python's memory manager. Will you elaborate?
The primary issue is that the default stack limit in Python is too small for some applications of recursion, which the comment before mine illustrates. This is true not just in Python but in any language, since the stack size is generally a function of the process or OS, not a limit of the language. Heap-allocated stacks are a reasonable thing to do in any language.
You're right that you can solve that in Python by using a list. Python's memory management doesn't really affect one's ability to do so, right?
A secondary issue is that native recursion sometimes uses more memory than a manually heap-allocated "stack". If I make my own stack, I have complete control over and complete understanding of what's in memory and how much I use. With the native stack, it can be very opaque, and it's easy to chew up the already-too-small stack very quickly by accidentally having a large stack frame.
>"I'm not sure I know what you mean about stack vs heap not making sense because of Python's memory manager. Will you elaborate?"
In Python everything is an object. Python gives you a reference to that object when you create it. There is no way to tell Python (CPython, anyway) in which memory space you would like it to create that object.
> There is no way to tell Python (CPython, anyway) in which memory space you would like it to create that object.
Ah right, that's because all objects are heap-allocated.
You choose heap by using an object for the stack, and rewriting your recursion to use (superficially) iterative code.
You can choose stack allocation instead by using regular recursion: native function calls with local variables.
What you bring up is an interesting issue that can make recursion harder to understand in Python. Having local objects in the stack frame can cause both stack and heap allocation: pointers to the objects on the stack, and the object contents on the heap. Or, you might have global objects that aren't local to the recursive function call or the stack, in which case it's important to understand that you're sharing data across function calls.
Generally speaking, you probably don't want individual heap allocations in a recursive function, so it's best not to have local objects. At least performance-wise.
>"You choose heap by using an object for the stack, and rewriting recursion using (superficially) iterative code.
You can choose stack allocation instead by using regular recursion: native function calls with local variables."
I'm not following you.
I believe these would both result in the same thing in Python. Python gives you a reference to an object. That object is stored in a private heap "somewhere." That reference to the object that Python gave you is stored on the stack. The object that it points to lives in the heap. This should be the same for both of your examples. I'm not sure what you mean by "native function calls." I am not familiar with this term.
Sorry, maybe I'm making it more confusing than it needs to be; I think you do understand the terms.
What we're talking about is the difference between calling a function recursively (a function that calls itself) and instead simulating recursion using a data structure posing as a stack and an iterative function that doesn't call itself but instead pushes to and pops from your fake stack data structure.
You can either use the system’s built-in stack (by calling functions), or create your own fake stack (by pushing/popping, writing/reading an array, etc.).
Using sys.setrecursionlimit() as mentioned at the very top of this thread only affects the system stack. By “native function calls”, I just mean regular function calls. These are subject to the system’s stack limit.
When you allocate an object and use it as a fake stack to replace the system stack, the size of the object is not subject to the system’s stack limit (which is small — on PCs often a megabyte or two), it’s only limited by the available size of the heap (which is normally large relative to the system stack limit, often gigabytes).
When you make your own fake stack and use an iterative function, you can achieve a much greater recursion depth because your fake stack size is on the heap and not actually in the system stack.
Does that make more sense? The two cases are very different in Python, and using a fake heap-allocated stack, i.e. a Python object, is super useful.
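A minimal sketch of the difference, using a deep chain of nodes (plain dicts here, purely for illustration):

    def make_chain(depth):
        node = None
        for i in range(depth):
            node = {'value': i, 'next': node}
        return node

    def recursive_sum(node):
        # Uses native function calls, so depth is capped by the recursion
        # limit (1000 by default in CPython).
        if node is None:
            return 0
        return node['value'] + recursive_sum(node['next'])

    def iterative_sum(node):
        # Uses a plain Python list as the "fake" stack; it lives on the heap
        # and can grow as far as available memory allows.
        stack = [node]
        total = 0
        while stack:
            current = stack.pop()
            if current is not None:
                total += current['value']
                stack.append(current['next'])
        return total

    chain = make_chain(100_000)
    print(iterative_sum(chain))      # fine: the list-stack grows on the heap
    # recursive_sum(chain) raises RecursionError at the default limit;
    # sys.setrecursionlimit(200_000) would let it run, but then you are
    # gambling on the real (native) stack being big enough.

Same traversal and the same complexity; the only difference is where the bookkeeping lives.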
It does seem similar to my experience when changing jobs or changing stacks, but I don't see this happening after the settling-in period. I am not working in web dev though, so maybe the experience is different.
Another thing that struck me was the point about needing exactly the right semicolon, comma, etc. to get things working. It is a common pet peeve of mine when searching documentation.
For example, for a command like

    $ program <file_name>

is the file name given in quotes, in <>, or something else? What do you do if there are quotes in the filename? Single or double quotes? Two examples in the documentation, one simple and one as complex as possible, would be great. This is one thing I end up going to Stack Overflow for again and again.