A stack trace (or even better, a minidump with the call stack!) is one of the most useful debugging tools for me. Hell, the call stack in general is super useful!
I can look at a stack trace, go "oh, function X is misbehaving after being called by function Y, from function Z", and work out what's gone wrong from the context clues and other debugger info. As a game developer working in big, semi-monolithic codebases, it's essential, especially when code crosses the gameplay/engine and engine/kernel barriers.
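For anyone who hasn't wired that up before, here's a minimal sketch (my own, not something from this thread) of how a Windows game might write a minidump containing the call stack when it crashes, via dbghelp; the file name and dump type are placeholder choices:

    // Sketch only: install an unhandled-exception filter that writes a minidump
    // with all threads' call stacks. Link against dbghelp.lib. The "crash.dmp"
    // path and the MiniDumpNormal flag are illustrative, not anyone's real setup.
    #include <windows.h>
    #include <dbghelp.h>

    static LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* info) {
        HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, nullptr,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file != INVALID_HANDLE_VALUE) {
            MINIDUMP_EXCEPTION_INFORMATION mei{};
            mei.ThreadId = GetCurrentThreadId();
            mei.ExceptionPointers = info;
            mei.ClientPointers = FALSE;
            // MiniDumpNormal captures thread stacks, which is enough to see
            // "X, called by Y, called by Z" in a debugger later.
            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                              MiniDumpNormal, &mei, nullptr, nullptr);
            CloseHandle(file);
        }
        return EXCEPTION_EXECUTE_HANDLER;
    }

    int main() {
        SetUnhandledExceptionFilter(WriteCrashDump);
        // ... game / engine code that might crash ...
    }

Open the resulting .dmp in Visual Studio or WinDbg and the call stack is right there.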
It's funny, as my phone (a 2022 Moto Razr) can work as a PC if I plug it into a monitor via its USB-C port. I can plug it into a monitor, plug a mouse and keyboard into the monitor's inbuilt USB-C hub, and it works just fine. Has a desktop mode and everything! If the monitor doesn't have a hub in it, I can use the phone itself as a mouse/touchpad! Plus, if the monitor supports it, it'll even keep the phone charged rather than running down the battery!
And I don't just use it as a gimmick: I use an HDMI/USB-C cable to turn it into a streaming/light gaming setup on my TV. Nice to be able to plug it in, kick off a streaming app or YouTube, or play some Minecraft or something on my TV in bed, all comfy.
Can confirm: an S22 Ultra plugged into a Dell docking box (or whatever it's called; not a typical docking station, it just connects to the laptop via a thick USB-C cable) works out of the box, with mouse and keyboard.
Firefox with uBlock Origin works very well, for example. The only thing is that it doesn't adjust automatically to the native screen resolution (1600p in my case). But it's still just Android; even with full filesystem access it feels vastly subpar to a normal desktop PC if I need more than browsing or other Android apps.
My dream was to be able to use VR glasses and something like Samsung DeX for an ultra-portable coding workstation.
I bought a pair of Viture Pro glasses, but they were pretty unusable for coding for me. Maybe they would have been OK for watching videos, but not for typing / needing to read all areas of the screen.
Boy do I have some bad news for you: Automated Content Recognition [0, 1, 2]. If your Smart TV is connected to the Internet, it can also track what you're watching or doing, even if you're using it as an external monitor [3] (in Dutch).
TL;DR: I'm of the opinion that the answer is probably "not yet", "it's in the works", or "it's already here, but not yet widely known".
In short, I couldn't find strong conclusive evidence for "yes" or "no".
The Wikipedia article on ACR [0] seems to be quoting CIO-Wiki [1] --- or vice-versa. The statement would imply "yes":
> Real-time audience measurement metrics are now achievable by applying ACR technology into smart TVs, set top boxes and mobile devices such as smart phones and tablets. This measurement data is essential to quantify audience consumption to set advertising pricing policies.
On the other hand, a paper on ACR [2] implies it only occurs on TVs (so this points us towards "no"):
> [...] Unlike traditional online tracking in the web and mobile ecosystems that is typically implemented by third-party libraries/SDKs included in websites/apps, ACR is typically directly integrated in the smart TV’s operating system. [...]
... but then, in its conclusion, one could make the case for "not yet", as they reference Microsoft's Recall (this makes me lean towards "not yet"):
> [...] Finally, although different than ACR, our auditing approach can be adopted to assess privacy risks of Recall (Microsoft, 2024) – which analyzes snapshots of the screen using generative AI (Warren, 2024). [...]
Collecting my thoughts on this paper, I'm a bit disappointed that we seem to have a double standard in the nomenclature: if the content recognition happens on a PC, it's labeled "generative AI" (the authors should probably have called it an LLM), and if it takes place on a TV-shaped computer (they're mostly Android TVs, after all, right?) then it's called ACR. I think it has not been properly articulated that what people are worried about [3] is that Microsoft's Windows Recall is (or will become) "ACR with extra steps".
To conclude (and extend this to the mobile phone domain), I'll leave a "thought experiment": is all the AI processing power on new mobile phones going to be used exclusively by the users, and for the users?
-----
Some nuanced notes...
I'm conflicted about whether to demonize ACR entirely or not. To me, "ACR" means something that is running all the time, listening to a user's surroundings or screenshotting a user's displayed information for the purposes of improving targeting or tracking their behavior (this seems to match Wikipedia's definition at first glance). I am in part validated by [2] as well:
> [...] At a high level, ACR works by periodically capturing the content displayed on a TV’s screen and matching it against a content library to detect the content being viewed on the TV. It is essentially a Shazam-like technology for audio/video content on the smart TV (Mohamed Al Elew, 2023).
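To make that "Shazam-like" description concrete, here's a purely illustrative sketch (mine, not the paper's, and certainly not any vendor's actual pipeline): fingerprint a downscaled frame with an average hash and match it against a tiny in-memory "content library" by Hamming distance. Real ACR is far more robust, but the shape of the flow is the same:

    // Illustrative only. A real pipeline would capture real frames, use robust
    // audio/video fingerprints, and query a huge server-side content library.
    #include <array>
    #include <bitset>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    using Frame = std::array<uint8_t, 64>;  // 8x8 downscaled grayscale "screenshot"

    // Average hash: one bit per pixel, set if the pixel is brighter than the mean.
    uint64_t Fingerprint(const Frame& f) {
        unsigned sum = 0;
        for (uint8_t p : f) sum += p;
        const unsigned mean = sum / static_cast<unsigned>(f.size());
        uint64_t hash = 0;
        for (std::size_t i = 0; i < f.size(); ++i)
            if (f[i] > mean) hash |= (1ULL << i);
        return hash;
    }

    struct LibraryEntry { std::string title; uint64_t fingerprint; };

    // Closest library title by Hamming distance, or "unknown" above the threshold.
    std::string Match(uint64_t probe, const std::vector<LibraryEntry>& library,
                      int threshold = 10) {
        std::string best = "unknown";
        int bestDist = threshold + 1;
        for (const auto& e : library) {
            int dist = static_cast<int>(std::bitset<64>(probe ^ e.fingerprint).count());
            if (dist < bestDist) { bestDist = dist; best = e.title; }
        }
        return best;
    }

    int main() {
        Frame frame{};  // pretend this was periodically captured from the screen
        for (std::size_t i = 0; i < frame.size(); ++i) frame[i] = (i % 8 < 4) ? 200 : 30;

        std::vector<LibraryEntry> library = { {"some-ad-spot", Fingerprint(frame)} };
        std::cout << "matched: " << Match(Fingerprint(frame), library) << "\n";
    }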
However, after doing some research, I discovered that a particular knowledge field may be misusing the term (or using the ACR term for lack of a better term like "reverse image search" or "content-based image retrieval" --- CBIR, CBVIR, QBIC --- in their vocabulary), and perhaps in the process inadvertently "whitewashing" the term.
Take, for example, the European Union Intellectual Property Office's (EUIPO's) discussion paper titled "Automated Content Recognition: Discussion Paper – Phase 2 ‘IP enforcement and management use cases’" [4] (PDF). I think they are conflating terms like hashing, fingerprinting, and watermarking, labeling them all as ACR, and then presenting valid-sounding use cases like "smartphone solutions to detect genuine or counterfeit products" (products, by definition, are not content, so I fail to see how ACR ties in). Perhaps someone more knowledgeable can correct me if I'm misreading the paper (I am no IP lawyer, but I have worked as an Information Security Officer).
I think the EUIPO paper also glosses over some possible privacy implications: e.g., they link to an article called "Are 3D printed watermarks a “grave and growing” threat to people’s privacy?" [5], but only in the context of using "RFID tags or serial numbers" to protect IP on 3D printed objects ... they do not discuss the privacy implications of, for example, being tracked by an "RFID-tag cloud" of such objects. I know this is beyond the scope of "is there ACR running on mobile phones", but I wanted to showcase what I think is a misuse of the ACR term to expand into the physical --- "offline" --- world, in the process losing its more "academic" meaning.
To answer your question directly: I'm pointing out unexpected privacy pitfalls of using a smart TV's full set of features (i.e. running apps and using it ... as a monitor).
Although I agree with the point of your solution... I disagree with minimizing the danger of such anti-features.
To elaborate: think of an average, reasonable person and their journey into learning how to preserve their privacy without losing access to the features of the services and products they have paid for. Without a massive effort, that is ultimately a contradiction in terms.
A reasonable person would expect their (internet-connected) smart TV to collect info that helps the manufacturer tailor future products based on customers' usage (app usage frequency, standard or cable usage frequency, frequency of use as an external monitor). They would not expect to have to watch what they say in front of such a device because it is literally listening to them [0] (in 2015, you needed to use the remote to use the voice detection service).
Additionally, reasonable users of smart TVs (and other IoT devices) might feel like they are no longer tracked with their uniquely identifiable information because they turned off "targeted advertising" (if the service allows for setting that option), but that only prevents their advertising ID from being tracked [1].
Moreover, a reasonable person might expect that using a DNS-based blocklist would be a sort of "revocation of consent" to being tracked, but tracking services are savvy when it comes to PII exfiltration [2]:
> [...] We find that personally identifiable information (PII) is exfiltrated to platform-related Internet endpoints and third parties, and that blocklists are generally better at preventing exposure of PII to third parties than to platform-related endpoints. [...]
Finally, there have also been studies that show a lack of transparency when it comes to GDPR requests about the data collected through Automatic Content Recognition (ACR) [3].
So, my point is that "just don't use your product for most of its intended use" might be a thought-terminating cliche that prevents us from taking a step forward in stopping the normalization of unreasonable privacy transgressions (PII exfiltration, audio spying by third-party service providers, monitoring of external devices' screens).
I meant the Z Flip 6. Not the fold. The folds have always had fully supported DeX.
The Flips did not have a DP-capable USB-C port until the Flip 5, and even that still did not support DeX due to thermals. The Flip 6 has it behind a developer option, but only the "new" DeX.
Sorry for the confusion on my side. I thought of the Flip as the OP mentioned the Motorola Razr which is positioned against that, not the Fold.
Not great. I tried replacing my laptop with a Samsung phone + monitor combination on a trip; it didn't really work out. Phones are not built for continuous load.
The official DeX docks have a fan built in. This helps a lot, especially if you take the phone out of its case. I need to do that anyway, because the USB connector doesn't go in deep enough with the case on.
I had a Huawei P20 Pro that did much the same back in 2018.
I never really used it for much, a bit of light browsing and really just as a gimmick, but yeah, there was a desktop of sorts, you could use all the apps, and the touchpad/mouse thing worked. You could attach a Bluetooth keyboard too, IIRC.
Kind of a shame my iPhone doesn't do this (I assume; I haven't tried), but I'm not sure if I'd use it.
Actually, I use my iPhone with a USB-C/HDMI cable, the Remote Desktop client and a Bluetooth keyboard when traveling. Some apps will let you use an additional display just fine.
OK, so I've now tried this with a new USB-C iPhone.
Yeah, it's painful to use! You can set up a mouse and use a physical keyboard for input, but it doesn't attempt to do any more than mirror the screen onto the external display by default.
Huawei's desktop mode was limited, but I think you're right: you can say the iPhone has good device compatibility, but there's not a good way to use it docked. Not that the Android ones were 'good', but they made an attempt!
Which is, quite frankly, weird, given that the iPad has fairly robust mouse/keyboard support at this point, and at least some nods towards window management.
Interesting, I wouldn't mind an Android phone that can do similar, but I'm not looking for a clamshell. For anyone else who, like me, is naive about such things, the key search terms seem to be: DisplayPort alternate mode over USB-C. Support seems patchy.
I hate USB-C for laptop charging ports; too fragile for regular use. However, I built a few things recently and I love the simplicity.
- External touch screen: only needs one cable, USB-C, for picture, sound, touch, and power! ... (the DP mode you mentioned is required)
- As a power source. My caravan computer (Dell Wyse 5070) uses USB-C as its power source with a cheap DC-plug adapter. My laptop charges over USB-C at 60 W or more.
- We have two Rolands (P-1, S-1); both can use their USB-C cable for direct audio in AND out, which just works on Linux.
- For the Rolands I can use my phone as a sound DAW or source, or both. I can also attach the touch screen, ...
All using the same (cheap and available) cable. Which is amazing and took my whole life to get to.
Knowing the UK, BT will ring up their Tory mates in power and ask them to remove the workers rights, so that they can get away with it. Hell, workers rights are already up on the chopping block, so they might not even have to bother ringing up.
I read through, and I (someone with no military experience) came to a pretty much identical conclusion. No point fighting if the objective can be secured with diplomacy. Granted, you always have to be ready for a fight, so having overwatch is paramount. But yeah, going for a hearts and minds approach seems the most sensible way of completing the objective with a minimum of fuss, given the situation presented.
Same here. The tactical options feel like a red herring, because the big detail up front is that the town, and the militia, are not necessarily hostile - though individual members might be. So the top priority really is avoiding a fight at all, particularly because your only long-range options (the mortars) are going to wipe the town out entirely, and likely force you to fight door to door through the two objective structures anyway.
It's funny, in that my (current) PhD hasn't felt anything like that. I dunno if it's my source of funding or whatever, but I've not had to do much more than the research I said I was going to do when I got the funding. I've had a module here and there, but otherwise, I've been able to 100% focus upon the research that I've been doing, which I've totally self directed. My supervisors have been great, supportive, etc, and I've benefited from links to industry as well. It's been a great 2.5 years, and I'll be sad once it's over, really.
They really aren't. The TB2 can loiter at 18,000 ft, or about 5.5 km. A 20 mm gun's max effective range is about 1,500 m; 40 mm is better, but still maxes out at about 7,000 m. You're not going to be able to hit it. That assumes you can even see it, which you probably can't.
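For reference, the rough unit conversion behind that altitude figure (my arithmetic, not the original commenter's):

    18{,}000\ \mathrm{ft} \times 0.3048\ \mathrm{m/ft} \approx 5{,}486\ \mathrm{m} \approx 5.5\ \mathrm{km}

so the ~1,500 m effective range of a 20 mm gun falls well short, and the ~7,000 m quoted for 40 mm leaves little margin once slant range is accounted for (the drone is rarely directly overhead).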
Not really. The low health tracks aren't just sped up, they're compositions in their own right. Because of that, a separate track ID makes total sense as they are different music tracks.