• 87 Posts
  • 730 Comments
Joined 1 year ago
Cake day: June 9th, 2023






  • Invisibly; by trying to post in it and encouraging others to do so. There is not much management to do with such a small community. The majority of regular users watch the All feed, so subscriptions are really just a way to bookmark a community to post in it or find it more easily. For a smaller or new community, expect it to be more like your personal blog, as it is unlikely to be something others will post in regularly. The majority of communities with hourly activity were made prior to the rexodus of June 2023, or within a few weeks thereafter.

    Unless you’re in a very controversial space, actively micromanaging a community is likely an issue with the mod, not the community, IMO. The admins take care of the majority of whack-a-mole nonsense here.


  • How are finishes so durable and thin?

    My assumption that there is no post process comes from my background in automotive refinishing and repair, where I owned a shop and painted for many years, along with getting into custom art graphics and airbrushing. The only finishes I know of that provide similar durability are two-part urethanes, and those are far too thick by comparison. When cutting into plastics that have been moulded, the finish shows no signs of mechanical layering or bonding like a post-process finish would in most cases. Often a cleanly broken or cut part shows the kind of penetrating surface alteration I associate with a polishing operation, where the surface transitions in color and grain structure within a millimeter or a few (in cases where the break is clean and does not appear to be influenced by stress alteration, like the whitening ABS shows under tension).

    How does chromate conversion work with a prep regime, and what kind of wet paint can offer durability similar to a 2K urethane when it is impossibly thin? I know the limitations of urethane well when it comes to corners and pointy bits, where it thins from surface tension. There is not a chance in hell that the buttons on the side of my phone could be painted with such a finish as an even, conformal coat and remain durable through years of constant abrasion. Is there a name for this class and type of finish? Where are they sourced? What is the scale of the industry? Is there a way to access the process and products at a small scale?



  • Slowly trying to learn sh while using mostly bash. Convenience is nice and all, but when I encounter something like OpenWRT or Android, I don’t like the feeling of speaking a foreign language. Maybe if I can get super familiar with sh, then I might explore prettier or more convenient options, but I really want to know how to deal with the most universal shell.


  • Yeah, but it depends on a person’s goals. I don’t mind being doxed. The privacy issue I’m really concerned about is the manipulation of data related to the host server: apps used as data loggers of sensors, tracking dwell time, page views, likes, blocks, etc. I care far less about what I say to others in public. I vehemently claim that owning the data about any individual is theft of autonomy, a failure of democracy and government, and a form of slavery if one plays out the total philosophical circumstance and implications. Anyone that holds such data about someone else with the intent to manipulate in any way whatsoever is a criminal. I’ve been a buyer for a retail chain and collected and analysed tons of customer data. That has nothing to do with how data is collected and used now, yet it is used as justification for the present criminal data manipulation industry.

    As a disabled person, I need to connect with humans more, and as much as I can here. I totally respect those of you that have other priorities that limit your conversational topics of interest, and I don’t wish to violate those. This place is just my version of a public square, where I’m trying to make general conversation. -warmly



  • So flash memory works in blocks called pages. Each page contains a header that ends in a few bytes saying what the rest of the page maps to.

    If the file was encrypted, you’re probably SOL. If it was not encrypted, it may be possible to recover some parts of the files. This is extremely advanced data recovery. I only know the abstract basic principles and would likely struggle to figure this out and recover my own stuff if I ever needed to. I’ve only programmed microcontrollers and flash memory devices.

    A micro SD card contains a small microcontroller and some blocks of flash memory, although the microcontroller is transparent to the user and operating system… unless you’re hacking with needle probes in a lab.

    So here are the basics. Writing flash involves taking an entire page of memory and erasing it first. There is a tiny voltage booster circuit on the card that allows the page to be pulsed up and down in voltage a few times in order to completely erase the entire page without any remaining residuals. Only once the entire page has been erased is it possible to write the data into the bytes of the page.

    If you want to change a single byte-level value at an address that already contains a value, first the entire page is copied to a blank page in another location, then the old page is pulsed and erased, then each value is transferred back into the old page, except that the byte that needed to change is written with its new value.

    This is the proper way to write flash at a basic level. If power is lost in the middle of this cycle, the worst case is that the new updated value was not written. The page in question should never be “missing,” because the header record should always point to either the original or the copied page; one of the two should always be present and complete… in a proper setup. Obviously, it might be faster to simply use some RAM to hold the page, erase the old page, and rewrite it. I have no idea what size pages are in modern SD cards, but on the hobby-class microcontrollers I have used, the pages were 4096 bytes, IIRC. My understanding is that most SD cards use an 8051-clone micro, so it is probably a similar size.
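    To make that copy-erase-rewrite cycle concrete, here is a minimal C simulation of updating a single byte in a page. The 4096-byte page size, the 0xFF erased state, and the helper names are my own illustrative assumptions, not any real card’s firmware; on a real card the controller does all of this internally.

```c
/* Toy model of the flash page update cycle described above.
 * Assumptions: 4096-byte pages, erased cells read back as all 1s (0xFF),
 * and programming can only pull bits from 1 to 0, so a page must be
 * erased before it can be rewritten. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define ERASED    0xFF

static uint8_t page[PAGE_SIZE];   /* the page holding live data       */
static uint8_t spare[PAGE_SIZE];  /* a blank page in another location */

/* Erase: pulse the page until every cell is back to the erased state. */
static void flash_erase(uint8_t *p) {
    memset(p, ERASED, PAGE_SIZE);
}

/* Program: can only clear bits, which is why erasing must come first. */
static void flash_program(uint8_t *p, const uint8_t *data) {
    for (size_t i = 0; i < PAGE_SIZE; i++)
        p[i] &= data[i];
}

/* Change one byte the long way: copy out, erase, write everything back. */
static void update_byte(size_t offset, uint8_t value) {
    flash_erase(spare);
    flash_program(spare, page);   /* 1. copy the whole page to a blank page */
    flash_erase(page);            /* 2. erase the original page             */

    uint8_t buf[PAGE_SIZE];       /* staging copy of what gets written back */
    memcpy(buf, spare, PAGE_SIZE);
    buf[offset] = value;          /* 3. the one byte that actually changed  */
    flash_program(page, buf);     /* 4. rewrite the entire original page    */
}

int main(void) {
    flash_erase(page);
    flash_erase(spare);

    uint8_t initial[PAGE_SIZE];
    memset(initial, ERASED, PAGE_SIZE);
    initial[42] = 0x12;
    flash_program(page, initial); /* pretend byte 42 already holds 0x12 */

    update_byte(42, 0x34);        /* changing it means rewriting the page */
    printf("byte 42 is now 0x%02X\n", page[42]);
    return 0;
}
```

    In this toy version, if power dies between steps 2 and 4 the old data still exists in the spare page, which is the same reason a proper setup never truly loses a page.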

    So here’s the thing: the bulk of the data is always there. Somewhere deep down inside you likely already knew this. It is why you’re supposed to overwrite an entire drive instead of using the “quick” erase in most formatting tools. The quick erase simply deletes a tiny header that says what exists where on the drive. Similarly, somewhere on your SD card there is a page or a few where the header has been screwed up. Your OS is looking at this header info, seeing a mismatch of garbled junk, and saying f-that bs.

    Generally, recovery would involve dumping the raw contents of the flash memory as hexadecimal, being super familiar with what you’re looking at, and knowing how to find the page that is causing the error. I assume you’d then need to replace the bad page with a good header, and it would work again. There are services for this kind of operation: data recovery. In practice, there are a few more layers of complication. Pages can be placed in different locations to enable wear leveling, so one area of memory is not over-utilized. There is also a table of bad blocks/pages that the micro knows to skip, and there is usually a bit or address in the page used to detect errors that may have occurred.
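    As a rough illustration of why a raw dump is still useful even when the card’s own bookkeeping is trashed, here is a sketch of the simplest recovery trick, file carving: scan the dump for known file signatures and ignore the broken header entirely. This is a generic technique, not necessarily what a recovery service would do for this exact failure, and “dump.bin” is just a placeholder name for a raw image of the card.

```c
/* File carving sketch: find JPEGs in a raw dump purely by their byte
 * signatures (FF D8 FF at the start, FF D9 at the end), without using
 * any filesystem or page-header information at all. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("dump.bin", "rb");  /* placeholder: raw image of the card */
    if (!f) { perror("dump.bin"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    unsigned char *buf = malloc(size);
    if (!buf || fread(buf, 1, size, f) != (size_t)size) {
        fclose(f);
        free(buf);
        return 1;
    }
    fclose(f);

    for (long i = 0; i + 2 < size; i++) {
        /* JPEG start-of-image marker */
        if (buf[i] == 0xFF && buf[i + 1] == 0xD8 && buf[i + 2] == 0xFF) {
            for (long j = i + 2; j + 1 < size; j++) {
                /* JPEG end-of-image marker */
                if (buf[j] == 0xFF && buf[j + 1] == 0xD9) {
                    printf("possible JPEG at offset %ld, about %ld bytes\n",
                           i, j + 2 - i);
                    i = j + 1;  /* continue scanning after this file */
                    break;
                }
            }
        }
    }
    free(buf);
    return 0;
}
```

    A real recovery tool layers wear leveling, bad-block tables, and error correction on top of this, which is why it gets complicated fast.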

    This is pretty much everything I know on the subject. Hopefully it helps you understand the abstract nature of what is happening. In the simplest of terms, flash memory is like writing a long essay with an ink pen, where you cannot make mistakes or use whiteout. If you need to make a change, you must write out the entire page all over again. This process is what is so time critical that you must “eject” the drive.


  • MIPS is Stanford’s alternative architecture to Berkeley’s RISC-I/RISC-II. I was somewhat concerned about their stuff in routers, especially since the primary bootloader used is proprietary.

    The person that wrote the primary bootloader is the same person writing most of the Mediatek kernel code in mainline. I forget where I put together their story, but I think they were some kind of prodigy who reverse engineered and wrote an entire bootloader from scratch, implying a very deep understanding of the hardware. IIRC I may have seen that info years ago in the uboot forum; I think someone accused the Mediatek bootloader of copying uboot. Again IIRC, their bootloader was being developed open source, and some kind of partially available source is still on a git somewhere. However, they wound up working for Mediatek and are now doing all the open source stuff. I found them on the OpenWRT forum and was a bit of an ass, asking why they didn’t open source the bootloader code. After that, some of the more advanced users on OpenWRT explained to me how the bootloader is static, which I already kinda knew; I mean, I know it is on a flash memory chip on the SPI bus. That makes it much easier to monitor the starting state and what is really happening. These systems are very old 1990s-era designs; there is not a lot of room to do extra stuff unnoticed.

    On the other hand, all cellular modems are completely undocumented, as are all WiFi modems since the early 2010s, with the last open source WiFi modem being the Atheros chips.

    There is no telling what is happening with cellular modems. I will say, the integrated nonremovable batteries have nothing to do with design or advancement. They are capable monitoring devices that cannot be turned off.

    However, if we can monitor all registers in a fully documented SoC, we can fully monitor and control a peripheral bus in most instances.

    Overall, I have little issue with Mediatek compared to Qualcomm. They are largely emulating the behavior of the bigger player, Broadcom.


  • The easiest way to tell I’m human is the patterns, as others have mentioned, assuming you’re familiar with the style of the primary Socrates entity in the underlying structure of the LLM. The other easy way to tell I’m human is my conceptual density and mobility when connecting concepts across seemingly disconnected spaces. Presently, the way I am connecting politics, history, and philosophy to draw a narrative about a device, consumers, capitalism, and venture capital is far beyond the attention scope of the best AI. No doubt the future will see AI rise an order of magnitude to meet me, but that is not the present. AI has far more info available, but far less scope in any given subject when it comes to abstract thought.

    The last easy way to see that I am human is that I can talk about politics in a critical light. Politics is the most heavily bowdlerized space in any LLM at present. None of the models can say much more than gutter responses; they are overtrained in this space so that all questions land on predetermined replies.

    I play with open source offline AI a whole lot, but I will always tell you if and how I’m using it. I’m simply disabled, with too much time on my hands, and y’all are my only real interactions with random humans. - warmly

    I don’t fault your skepticism.



  • I don’t trust AliEx any more after I took a loss on 3 orders for ~$60 in 2020. When I called, they hung up on me at random every time. After the third try, I washed my hands of it and walked away. Stealing from me once is on them; twice would be my own fault. Prior to that experience I had spent a few thousand dollars on the platform for odds and ends.

    I expect something like this to be an emulator and nowhere near the quality of a real Nintendo product, but I could be wrong. I would buy used or a homebrew project that is well documented and might cost a little more.


  • All their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the register level.

    For instance, the base Android system, AOSP, is designed to use Linux kernels that are prepackaged by Google. These kernels are well documented specifically so that manufacturers can add their hardware support modules at the last possible moment, in binary form. These modules are what make the specific hardware work. No one can update the kernel on the device without the source code for these modules. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must buy new devices constantly. If the hardware remained publicly undocumented but the source code for just the modules present on the device was merged with the kernel, the device would be supported for decades. If the hardware were documented publicly, we would write our own driver modules and have a device that is supported for decades.

    This system is about like selling you a car that can only use gas that was refined prior to your purchase of the vehicle. That would be the same level of hardware theft.
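    To make the module point concrete, here is roughly what the source side of one of those modules looks like; a bare-bones sketch, not any vendor’s actual driver. With the source and a one-line Makefile (obj-m += hello_mod.o), rebuilding it against a new kernel is a single make invocation; shipped only as a prebuilt .ko, it is tied to the exact kernel version it was compiled against and will normally refuse to load on anything newer.

```c
/* hello_mod.c - minimal loadable kernel module skeleton.
 * Rebuild against whatever kernel is installed with:
 *   make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
 * A vendor blob without this source stays stuck on its original kernel. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

static int __init hello_init(void)
{
    pr_info("hello_mod: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```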

    The primary reason governments won’t care or make effective laws against orphaned kernels is because the bleeding edge chip foundries are the primary driver of the present economy. This is the most expensive commercial endeavor in all of human history. It is largely funded by these devices and the depreciation scheme.

    That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.

    Before the Google “free” internet (ownership over your digital person to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is because we (the people) were so distrusting of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew that products-as-a-service is a criminal extortion scam way back then. AMD was the second source for Intel and produced x86 chips under license. Only after that did they recreate an instruction-compatible alternative from scratch. There was a big legal case where Intel tried to claim copyright over their instruction set, but they lost; that is what created the AMD we know today. Since 2012, both Intel and AMD have had proprietary code, primarily because the original 8086 patents expired. Most of the hardware could be produced anywhere after that.

    In practice, there are only Intel, TSMC, and Samsung on bleeding edge fab nodes. Bleeding edge is all that matters. The price to bring one online is extraordinary, and the tech it requires is only made once, for a short while. The cutting edge devices are what pay for the enormous investment, but once the fab is paid for, the cost to continue running it is relatively low. The number of fabs within a node is carefully decided to try to accommodate trailing edge node demand. No new trailing edge nodes are viable to reproduce; there is no store to buy fab node hardware. As soon as all of a node’s hardware is built by ASML, they start building the next node.

    But if x86 has proprietary parts, why is it different from Qualcomm/Broadcom? (No one asked.) The proprietary parts are of some concern; there is an entire undocumented operating system running in the background of your hardware, and that is the most concerning part. The primary proprietary piece is the microcode. This basically governs the power-up phase of the chip, like the order in which things are given power and the instruction set that is made available. Like how there are not actually distinct chips designed for most consumer hardware: the dies are classed by quality and functionality and sorted to create the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t perform at the required speed; power is connected differently, and it becomes a laptop chip.

    When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end Apple always uses junky hardware and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture Apple has used and hasn’t abandoned because it went defunct is x86. They used MOS in the beginning; the 6502 was absolute trash compared to the other available processors. It used a pipeline trick to hack twice the effective clock speed because they couldn’t fab chips of competitive quality; they were just dirt cheap compared to the competition. Then it was Motorola, then PowerPC. All of these are now irrelevant.

    The British group that started Acorn sold the company right after RISC-V passed the major hurdle of getting past Berkeley’s ownership grasp. It is a slow moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slowly sinking ship. In 10 years it will be dead in all but old legacy device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end user ownership. They are enablers for thieves; the ones opening your front door to let the others inside.

    Even the beloved Raspberry Pi is a proprietary market manipulation and control scheme. It is not actually open source at the registers level, and it is priced to prevent the scale viability of a truly open source and documented alternative. The chips are from a failed cable TV tuner box, and they are only made in a trailing edge fab when the fab has no other paid work. They are barely above cost and a tax write-off, hence the “foundation” and dot org despite selling commercial products.