
There is, it's called static. Just hide it in a .cpp file and expose the real function in a header file.


I don't understand your example, why would

(1 + 2.060) - 2.060 == 0

while

1 + (2.060 - 2.060) == 1?

Am I just misunderstanding what you wrote?


No, the site messed up my comment. b is supposed to be 2 raised to the 60th power. The two asterisks were removed. Let's try ^, b is 2^60, c is minus b. I edited my original comment.


shouldn't it be e^tau*i = 1 if tau is 2pi?


Oops, typo. You're right. You could also write "e^tau*i = 1 + 0" to relate the "5 most important numbers in math" but that form always seemed a bit forced to me.


If you write "-1 * e^(tau * i) + 1 = 0" you can reasonably claim to relate six important numbers: -1, e, tau, i, 1, and 0. IMHO that looks a bit less forced than the version with "1 + 0", though of course it's not the simplest form. (I mean, that "+ 0" could have been inserted almost anywhere...)


Did you consider emulating mmap yourselves?

  "Memory mapped files work by mapping the full file into a virtual address space and then using page faults to determine which chunks to load into physical memory. In essence it allows you to access the file as if you had read the whole thing into memory, without actually doing so."
I feel like this could be done in c++ directly, by maintaining an internal cache for each file that keeps track of which parts of the file are loaded and uses read() to load chunks on demand. Error handling would be a lot simpler (no signals, just a failed read()) and there would be less OS-specific code.


This is essentially how databases like PostgreSQL work, but it only avoids the syscall overhead. The OS is already caching the file regardless of mmap, so using pread would likely have been enough for us.

It totally would have been simpler overall, but each incremental step we made was significantly less work than the refactoring required for pread.


> The OS is already caching the file

Not necessarily. With O_DIRECT, pread() doesn't put pages into page cache: it just DMAs them directly into your process. Using O_DIRECT and the process-private caching we've been discussing, sophisticated programs (like databases) can (and do!) implement their own "page cache" systems. And because databases have access pattern information that the generic kernel VM subsystem doesn't, such a database can frequently do a better job doing this caching on its own.


I might have undersold the performance advantage of writing your own cache, but let me reiterate the point I was trying to make: the reason we didn't consider doing so was that we weren't having a performance issue. Writing our own cache would be strictly more work than just using pread and would accomplish the same thing.


Yeah. For your application, you did the right thing. I was speaking more abstractly.


> It totally would have been simpler overall, but each incremental step we made was significantly less work than the refactoring required for pread.

Question.

In 10 years will you be saying this about the next incremental problem that you run into? If you think this likely, then the next incremental problem is an excuse to do it right.


If it's less work to solve that problem than to refactor all the related code, and the impact on maintainability is minimal, likely yes. But considering the number of users we have and the current lack of any crashes related to mmap, there are unlikely to be any unforeseen future issues.


Mmap is right, though. Pread would also be right. There's a tradeoff and the complexity argument would only win if they knew all this when they started.


Well, then you have to implement some kind of plan for efficient caching - some kind of LRU scheme, for example, to prevent the cache from ballooning to unusable sizes - at which point you're reinventing the kernel page cache (poorly). mmap does have a big advantage here if you really need a lot of random accesses.


It’s easy enough to read a file in chunks, parsing out the information as you go. This limits memory use as long as you release the chunks when you no longer need them. The operating system can swap out memory as-needed, even if you didn’t get the memory from mmap, so it’s irrelevant where you store the parsed data.

Unless you actually need to read the file multiple times (compared to looking at the parsed in-memory data multiple times), this should be fast enough.


I had to go to the last page and click "repeat the search with the omitted results included" to make it show up at all. After doing that it was the third result.


I looked at the implementation of the openStream() function:

  AudioStream *AudioStreamBuilder::build() {
      AudioStream *stream = nullptr;
      if (mAudioApi == AudioApi::AAudio && isAAudioSupported()) {
          stream = new AudioStreamAAudio(*this);  

      // If unspecified, only use AAudio if recommended.
      } else if (mAudioApi == AudioApi::Unspecified && isAAudioRecommended()) {
          stream = new AudioStreamAAudio(*this);
      } else {
          if (getDirection() == oboe::Direction::Output) {
              stream = new AudioOutputStreamOpenSLES(*this);
          } else if (getDirection() == oboe::Direction::Input) {
              stream = new AudioInputStreamOpenSLES(*this);
          }
      }
      return stream;
  }  

  Result AudioStreamBuilder::openStream(AudioStream **streamPP) {
      if (streamPP == nullptr) {
          return Result::ErrorNull;
      }
      *streamPP = nullptr;
      AudioStream *streamP = build();
      if (streamP == nullptr) {
          return Result::ErrorNull;
      }
      Result result = streamP->open(); // TODO review API
      if (result == Result::OK) {
          *streamPP = streamP;
      }
      return result;
  }
Doesn't this leak memory if streamP->open() fails? I'm also surprised they are using new instead of unique_ptr, especially since one of their listed benefits is "Convenient C++ API (uses the C++11 standard) ".


Yeah, seems like a memory leak to me too. streamP is leaked if open fails.

And this in modern C++, in the year 2018:

  if (result != Result::OK) { goto error2; }


On the face of it, it does look like the memory pointed to by streamP leaks, unless build() kept another pointer to be used in the destructor.


OP said $5 for 84 days though, not a month.


I missed that, then... it's true.

I'm shedding bitter tears.



But it would be fun to see NP bodyblock the entire enemy team with one set of treants.


If you want to be more efficient, you can also use a union for this.

  typedef union { int i; float f; double d; long l; char c; void *p; ... } var;

Now you might also get some nice float-interpreted-as-int bugs if you forget the type!

You can also kind of pretend that you have C#-style type inference:

  var x = { .i = 10 };
  var y = { .f = 2.0f };


But you don't want to be efficient lmfao

