

  • Yeah, we used to joke that if you wanted to sell a car with high-resolution LiDAR, the LiDAR sensor would cost as much as the car. I think others in this thread are conflating the price of other forms of LiDAR (usually sparse and low-resolution, like the units on some 3D printers) with that of dense, high-resolution LiDAR. That said, the cost has definitely come down.

    I agree that perception models aren’t great at this task yet. IMO, monodepth never produces reliable 3D point clouds, even when the depth maps and metrics look reasonable. MVS does better but is still prone to errors. I do wonder whether any companies are considering depth completion with sparse LiDAR instead; the papers I’ve seen on that topic produce much more convincing point clouds. A rough sketch of the simplest version of the idea is below.
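    To illustrate (this is my own toy sketch, not any specific paper’s method - the actual depth completion papers use learned networks): the simplest classical baseline fits a scale and shift that aligns the relative monodepth prediction to the sparse LiDAR returns, then keeps the true returns where they exist. The shapes and the NaN-for-no-return convention are just assumptions for the example:

    ```python
    import numpy as np

    def align_monodepth(pred: np.ndarray, lidar: np.ndarray) -> np.ndarray:
        """Fit s, t minimizing ||s * pred + t - lidar||^2 over pixels that
        have a LiDAR return (lidar is NaN elsewhere), then densify."""
        mask = ~np.isnan(lidar)
        A = np.stack([pred[mask], np.ones(mask.sum())], axis=1)
        (s, t), *_ = np.linalg.lstsq(A, lidar[mask], rcond=None)
        dense = s * pred + t
        dense[mask] = lidar[mask]  # trust the real returns where they exist
        return dense

    # Toy usage: a 4x4 "image" with two LiDAR returns.
    pred = np.random.rand(4, 4)
    lidar = np.full((4, 4), np.nan)
    lidar[0, 0], lidar[2, 3] = 5.0, 9.0
    print(align_monodepth(pred, lidar))
    ```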



  • I use a lot of AI/DL-based tools in my personal life and hobbies. As a photographer, DL-based denoising means I can get better photos, especially in low light, and DL-based deconvolution tools help sharpen my astrophotos as well. The deep-learning-based subject tracking on my camera also helps me get more in-focus shots of wildlife. As a birder, tools like Merlin Bird ID’s audio recognition and image classification are helpful when I encounter a bird I don’t yet know how to identify.

    I don’t typically use GenAI (LLMs, diffusion models) in my personal life, but Microsoft Copilot does help me write visualization scripts for my research. I can never remember the right methods for Python visualization libraries, and Copilot/ChatGPT do a pretty good job of filling those in.
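    For a concrete (hypothetical) example, this is exactly the kind of matplotlib boilerplate I can never remember but these tools reliably produce:

    ```python
    import matplotlib.pyplot as plt
    import numpy as np

    # Two labeled curves with a grid, saved to disk - standard
    # visualization boilerplate that's easy to forget.
    x = np.linspace(0, 10, 200)
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(x, np.sin(x), label="sin(x)")
    ax.plot(x, np.cos(x), label="cos(x)", linestyle="--")
    ax.set_xlabel("x")
    ax.set_ylabel("amplitude")
    ax.grid(True, alpha=0.3)
    ax.legend()
    fig.tight_layout()
    fig.savefig("curves.png", dpi=150)
    ```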


  • > There is no “artificial intelligence” so there are no use cases. None of the examples in this thread show any actual intelligence.

    There certainly is (narrow) artificial intelligence. The examples in this thread are almost all deep learning models, which fall under ML, which in turn falls under the field of AI. They’re all artificial intelligence approaches, even if they aren’t artificial general intelligence, which is closer to what a layperson pictures when they hear “AI.”

    The problem with your characterization (showing “actual intelligence”) is that it’s entirely subjective. Historically, being able to play Go, and to a lesser extent Chess, at a professional level was considered to require intelligence. Now that algorithms can play these games, folks (even those in the field) no longer think they require intelligence, and the goalposts move. The same was said about many CV tasks, like classification and segmentation, until modern methods became very accurate.




  • Fair enough! I think it’s more common for games to do that, but I’ve sometimes had trouble with Windows software that itself used virtualization features. I probably just didn’t configure the Hyper-V settings properly, but I know nested virtualization can be tricky.

    For me it’s also because I’m on a laptop: my Windows VM relies on passing through an external GPU over TB3, and since my laptop’s dedicated GPU has no connection to a display, GPU passthrough would be tricky on the go. I like being able to boot Windows on the go to edit photos in Lightroom, for example, but otherwise I’d rather run the Linux host and use the Windows VM only as needed.


  • I’m a fan of dual booting AND using a passthrough VM. It’s easiest to set up if your machine has two NVMe slots and you put each OS on its own drive. This way you can pass the Windows NVMe through to the VM directly.

    The advantage of this configuration is that you get the convenience of not needing to reboot to run some Windows-specific software, but if you need software that doesn’t play nice with virtualization (maybe a program takes too large a performance hit, or refuses to run on virtualized systems, like some anticheat-enabled games), you can always reboot into that same Windows installation directly. A rough sketch of what the VM launch can look like is below.
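    As a sketch only (the device path and VM sizing are placeholder assumptions, and most people would set this up through libvirt rather than raw QEMU):

    ```python
    #!/usr/bin/env python3
    """Toy launcher: boot the bare-metal Windows install inside QEMU/KVM
    by handing it the whole Windows NVMe namespace as a raw disk. The
    same drive stays bootable natively, which is what makes the
    dual-boot-plus-VM setup work."""
    import subprocess

    WINDOWS_NVME = "/dev/nvme1n1"  # assumption: Windows is on the second drive

    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",
        "-machine", "q35",
        "-cpu", "host",
        "-smp", "8",
        "-m", "16G",
        # Whole-drive passthrough: the guest sees its usual NVMe disk.
        "-drive", f"file={WINDOWS_NVME},format=raw,if=none,id=windisk",
        "-device", "nvme,drive=windisk,serial=winpass01",
    ], check=True)
    ```

    (In practice you’d also point QEMU at OVMF firmware and sort out graphics/input; this only shows the disk part.)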


  • GPU and overall firmware support are always better on x86 systems, so it makes sense that you switched to that for your application. Performance is also usually better if you don’t explicitly need low power. In my use case, the Orange Pi 5 Plus runs an astrophotography rig, so I needed something low power that could run Linux easily, had USB 3, had reasonable single-core performance, and preferably offered an upgradable M.2 E-key WiFi card and a full-speed M-key NVMe slot for storage (preferably PCIe 3.0 x4 or better). Having hardware serial ports was a plus too. An x86 box would’ve been preferable, but a lot of the cheaper options are older Intel mini PCs with pretty high power draw, and the newer power-efficient (N100-based) machines are more expensive, with the cheaper ones I found tending to have soldered-on WiFi, unfortunately. So the Orange Pi 5 Plus ended up being my cheapest option that ticked all the boxes. If only the software support were as good as x86!

    Interesting to hear about the NPU. I work in CV and I’ve wondered how usable the NPU is. How did you integrate deep learning models with it? I presume there’s some conversion step from runtime frameworks like ONNX into the NPU’s own toolkit, but I’d love to learn more (I’ve sketched my guess below).

    I’m also aware that Collabora has gotten the NPU drivers upstreamed, but I don’t know how NPUs are traditionally interfaced with on Linux.
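    For reference, my presumption is based on Rockchip’s rknn-toolkit2; as I understand it (treat this as an assumption rather than firsthand experience), you convert the ONNX model offline into an .rknn blob for the RK3588’s NPU, roughly like:

    ```python
    from rknn.api import RKNN  # Rockchip's rknn-toolkit2, run on an x86 host

    rknn = RKNN()
    rknn.config(target_platform="rk3588")  # the Orange Pi 5 Plus SoC
    rknn.load_onnx(model="model.onnx")     # hypothetical exported model
    rknn.build(do_quantization=False)      # or quantize against a dataset
    rknn.export_rknn("model.rknn")         # blob for the on-device runtime
    rknn.release()
    ```

    The blob would then be loaded on the board with the rknn-toolkit-lite2 runtime, but I’d still love to hear how it worked out in practice.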



  • I work in an ML-adjacent field (CV) and I thought I’d add that AI and ML aren’t quite the same thing. You can have non-learning-based methods that fall under the field of AI - for instance, tree search methods can be pretty effective at defining an agent for relatively simple games like checkers, and they don’t require any learning whatsoever.
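    To make that concrete, here’s a toy negamax (minimax) search for Nim - take 1-3 stones, whoever takes the last stone wins - which plays perfectly with zero learned parameters:

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_move(stones: int) -> tuple[int, int]:
        """Return (score, move) for the player to act: +1 if they can
        force a win, -1 otherwise. Pure tree search, no training."""
        if stones == 0:
            return (-1, 0)  # opponent just took the last stone; we lost
        best = (-1, 1)
        for take in (1, 2, 3):
            if take <= stones:
                opp_score, _ = best_move(stones - take)
                if -opp_score > best[0]:
                    best = (-opp_score, take)
        return best

    print(best_move(10))  # (1, 2): taking 2 leaves a losing multiple of 4
    ```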

    Normally, we say Deep Learning (the subfield of ML that relates to deep neural networks, including LLMs) is a subset of Machine Learning, which in turn is a subset of AI.

    Like others have mentioned, AI is just a poorly defined term, unfortunately, largely because intelligence isn’t a well-defined term either. In my undergrad we defined an AI system as a programmed system that has the capacity to do tasks considered to require intelligence. Obviously, this definition gets flaky, since not everyone agrees on which tasks require intelligence. It also has the problem that when the field solves a problem, people (including those in the field) tend to think “well, if we could solve it, surely it couldn’t have really required intelligence” and then move the goalposts. We’ve seen that already with games like Chess and Go, as well as with CV tasks like image recognition and object detection at superhuman accuracy.


  • One of the big changes, in my opinion, is the addition of a “Smart Dimension” tool, where the system interprets and previews the constraint you want to apply instead of requiring you to pick the specific constraint ahead of time (almost identical to SOLIDWORKS). Another is the ability to add constraints such as length while drawing out shapes (like Autodesk Inventor, and probably Fusion too, though I haven’t used that). Together these make the sketcher workflow more like other CAD programs and require a little less manual work with constraints.



  • Last I tried it, there was no fix. Their latest update on the website says:

    > The work on the toponaming problem is an ongoing project, and we are very grateful to the FreeCAD community for contributing a lot to that effort. But it’s not complete yet, there will be much more to say when it’s largely done. So let’s focus on the other three.

    So I take it they haven’t implemented a fix. They previously said they were going to work with the FreeCAD team on mainlining a toponaming fix, using realthunder’s work as a proof of concept, but said fix has not landed in mainline FreeCAD yet. I believe that’s the major feature they’re looking to implement for FreeCAD 1.0.

    Definitely excited for Ondsel though! Hopefully that fix can be integrated quickly.




  • Yep, and for good reason, honestly. I work in CV, and while I don’t work on autonomous vehicles, many of the folks I know have previously worked on these kinds of problems at companies or research institutes, and all of them agree that in a scenario like this you should treat the state of the vehicle as compromised and go into an error/shutdown mode.

    Nobody wants to give their vehicle an override that could endanger those inside or around it, and practically speaking there aren’t many options other than this that guarantee safety.


  • Yeah, I think Lemmy would actually work pretty reasonably. It reminds me of how lots of software projects have Reddit communities. I agree that being able to share one account across many services, and especially not having to pay for infrastructure, is what drives Discord use over forum-based platforms.


  • Personally, I’d prefer that projects use forums for community discussion rather than realtime chat platforms like Discord or Matrix. The bigger problem with projects using Discord is not that it’s closed source, but that it’s hard to search (search engines don’t index it) and its format deprioritizes long-running discussion of a topic. Since Matrix is also intended for chat, it shares these issues (though at least you can preview a room without making an account).