kya100 last won the day on June 15

kya100 had the most liked content!

About kya100

  • Rank
    Advanced Member

  1. The easiest way to get your favorite virtual private network up and running on your new Windows 10 operating system is to simply download your VPN's app from the Windows Store and install it, just as you did on your previous version of Windows. Using a VPN's app is also the best way to use that VPN's bonus features -- from ad-blocking to automatically selecting the fastest connections. But for the tech-curious, another option is to test-drive Windows 10's built-in VPN client. It may sound tricky, but the process takes about 15 to 20 minutes and can be broken down into two main components: creating a VPN profile, and then connecting to the VPN. Here's the step-by-step guide for setting up your VPN on Windows 10.

What you'll need

VPN service: Even though you're using Windows 10 to manage your connection to a VPN, you'll still need to choose which VPN service to connect to. The service you choose determines who's running the servers you're about to connect to. Check out our updated directory of the best VPNs we've tested to get a quick idea of which provider might be best for you. You'll find lots of options, including the best cheap VPNs, the best iPhone VPNs and the best Android VPNs. But no matter which service you choose, keep an eye out for any red flags that might indicate a less-than-private service.

Protocol choice: During setup, you'll be asked to choose a protocol from a list. In the simplest terms, the protocol you choose determines the strength of your encryption. There are several types of protocols used by VPNs, and whichever VPN you choose will use one of them. The four most common are PPTP, L2TP/IPSec, SSTP and OpenVPN. During setup, you'll tell Windows which type of protocol your VPN uses by selecting it from a list; your VPN provider will tell you which it uses.

Create a VPN profile and connect to it

1. On your Windows 10 desktop, right-click the Start button and select Settings from the menu that appears.

2. In the new window that pops up, click Network & Internet, then select VPN from the list of connection options on the right side of the screen.

3. Click Add a VPN connection.

4. This will take you to a configuration screen. Under VPN provider, click the dropdown menu and select the option that says Windows (built-in).

5. In the Connection name field, type the name you'd like to give this particular connection. Try to create one that you'll easily recognize as a VPN connection. If, for example, you're using ExpressVPN and want this connection to be the one you use to connect to a New York server, name the connection something like "ExpressVPN, New York server."

6. In the Server name or address field, type the actual address of the server you're connecting to. Your VPN service will be able to provide this information. Generally it will look like a website URL, with an alphanumeric string of five or six characters followed by the name of the VPN service you're using.

7. In the VPN type dropdown, you'll be asked to choose a protocol, as mentioned above. Select whichever one your VPN service uses.

8. In the Type of sign-in info dropdown menu, choose the way you're going to sign in to your new VPN connection. Different VPN providers have different preferred methods, so you may wish to check with your VPN provider to be sure, but for most commercially available private VPNs, you'll be selecting Username and password. This means whenever you choose this new VPN connection on your Windows 10 machine, you'll need to log into it with the same username and password you normally use to log into your VPN service on any other device.

9. Click the Save button. You've now created your VPN profile, and all that's left to do is connect to it.

10. Return to your Network & Internet settings page, and select VPN from the options on the right side of the screen as you did before. Your newly created VPN connection will appear in the list (in our example, you'd see "ExpressVPN, New York server"). Select it and click Connect.

And there you have it. Sure, maybe you're missing out on some of the additional features you'd otherwise get from using your VPN provider's downloadable application, but on the other hand, you now have greater control over your connection and you don't have to deal with a potentially bloated piece of software constantly running in the background.
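For readers who'd rather script the profile-creation step than click through Settings, Windows exposes the same fields via PowerShell's Add-VpnConnection cmdlet. The sketch below just assembles such a command from the fields covered in the steps above; the server address is a made-up placeholder (your provider supplies the real one), and the resulting string would still need to be run in an elevated PowerShell session on Windows.

```python
# Sketch only: build a PowerShell Add-VpnConnection command that mirrors
# the Settings-app fields above. The server address is a placeholder,
# not a real VPN endpoint.

# Tunnel types accepted by Windows' built-in client (the protocol choice).
TUNNEL_TYPES = {"Pptp", "L2tp", "Sstp", "Ikev2", "Automatic"}

def build_vpn_command(name: str, server: str, tunnel_type: str = "Automatic") -> str:
    """Return an Add-VpnConnection command string for an elevated PowerShell."""
    if tunnel_type not in TUNNEL_TYPES:
        raise ValueError(f"unsupported tunnel type: {tunnel_type}")
    return (
        f'Add-VpnConnection -Name "{name}" '
        f'-ServerAddress "{server}" '
        f'-TunnelType {tunnel_type} -RememberCredential'
    )

cmd = build_vpn_command("ExpressVPN, New York server",
                        "vpn123.example-provider.com",
                        tunnel_type="Sstp")
print(cmd)
```

Running the printed command on Windows creates the same profile as steps 3-9, after which the connection appears in the Network & Internet settings list as before.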
  2. Clearview AI, the maker of a controversial facial recognition app, is confident its technology has beneficial uses, even as other Big Tech names either exit the marketplace or suspend law enforcement's use of their products out of fear of misuse. The moves come amid studies showing the technology has low accuracy rates for women and minorities. Clearview CEO Hoan Ton-That says his company's technology can help protect children and victims of crimes, without risk of racial bias, singling out competitor Amazon's Rekognition as failing in that regard. His criticism comes on the same day Amazon announced a one-year moratorium on the tool's use by law enforcement, after weeks of protests against police brutality, and just days after IBM announced it's pulling out of the facial recognition market out of concern the product could be used for profiling. "As a person of mixed race, this is especially important to me," Ton-That said in a statement Wednesday evening. "We are very encouraged that our technology has proven accurate in the field and has helped prevent the wrongful identification of people of color." Clearview identifies people by comparing photos to a database of images scraped from social media and other sites. It came under fire after a New York Times investigation in January. Since then, Sen. Edward Markey, a Democrat from Massachusetts, has called Clearview a "chilling" privacy risk. In addition, Google, YouTube, Microsoft and Twitter have sent cease-and-desist letters to Clearview. The company also faces multiple lawsuits. Markey also raised concerns this week that police and law enforcement agencies could use facial recognition technology to identify and arrest protesters in cities where people are demonstrating against the killing of George Floyd, an unarmed black man. He also voiced concern that the threat of surveillance could deter people from "speaking out against injustice for fear of being permanently included in law enforcement databases."
Ton-That also said the company is "committed to the responsible use" of its technology, adding that it's intended to be used to identify criminal suspects and not as a surveillance tool at protests or under other circumstances. "We strongly believe in protecting our communities, and with these principles in mind, look forward to working with government and policy makers to help develop appropriate protocols for the proper use of facial recognition," Ton-That said. In addition to concerns over accuracy, privacy advocates and lawmakers worry the technology has the potential to become an inescapable and invasive form of surveillance. A handful of cities have banned the municipal use of the technology, and Democratic lawmakers have proposed prohibiting public housing units from using facial recognition technology.
  3. A firm that claimed to have built an algorithm to identify women's orgasms has defended itself after ridicule on social media. Cyprus-based Relida Limited said its algorithm could "validate" female orgasms 86% of the time. Slides from a presentation it produced were posted on Twitter and were retweeted thousands of times. The company said it had wanted to help developers test sex tech products and that its work had been "twisted". The presentation was posted on Twitter by Stu Nugent, brand manager at the sex toy label Lelo, after he was sent the pitch. The slides say that "there is no reliable way to be sure a woman has an orgasm". They list statistics about women who have faked climaxes. Relida said its idea was still in development and the presentation was not intended for publication. The algorithm is based on earlier research into changes in heart rate. "An orgasm may be identified with heart rate as it has a specific pattern when climaxing," it said. It said the algorithm was not yet finished and was created by a woman "looking for the well-being of other women". "We never wanted to sell this algorithm directly to women or men," it said. "Indeed, this is too sensitive a subject, and information that could create additional pressure on women." It described Mr Nugent's tweet as "unethical". Mr Nugent said he was taken aback when he received the set of slides on LinkedIn. "To be frank, we already have a very robust and reliable system for deciding whether our designs are pleasurable, and that's by asking the people who use them," he said. "In any case the orgasm isn't necessarily the right metric for measuring the pleasurability of a sex toy." Relida said its product was "all about science". However, Mr Nugent said it was "solving a problem we never had". "The idea of detecting an orgasm against the word of the person who is actually having (or not having) one is dangerous," he said.
  4. In order to unmask a California man who repeatedly harassed and exploited girls on Facebook, the social network decided to help the FBI hack him. Facebook had reportedly been tracking Buster Hernandez for years. Using the secure operating system Tails, Hernandez was able to hide his real IP address and continued to contact and harass dozens of victims on Facebook, according to Motherboard. Facebook's security team eventually decided to work with a third-party firm to develop a hacking tool that took advantage of a flaw in Tails' video player. The exploit, which Facebook reportedly paid six figures for, could reveal the real IP address of a person viewing a video. The tool was given to an intermediary, who handed it over to the FBI. The publication added that it's unclear whether the FBI knew about Facebook's involvement. Working with a victim, the FBI used the tool to send a booby-trapped video to Hernandez that allowed the bureau to gather evidence that led to his arrest and conviction, according to Motherboard. In February, Hernandez pleaded guilty to 41 charges, including production of child pornography and threats to kill, kidnap and injure. Facebook confirmed that it worked with security experts to help the FBI. "This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice," said a Facebook spokesperson. "The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls." The FBI declined to comment. Motherboard said it spoke with several current and former Facebook employees and that they all said this was the first and only time the company has helped law enforcement go after a criminal in this specific way. 
Law enforcement has long argued that technology that encrypts messages or otherwise shields a user's identity can be used by criminals and can prevent police from catching offenders. Others say, however, that tools created to hack into such systems put innocent users, such as political dissidents, at risk. Tails OS says on its website that the operating system is widely used by journalists, activists, domestic-violence survivors and privacy-concerned citizens. The company told Motherboard that the Facebook exploit was never explained to the Tails development team.
  5. The first aerial showdown between a human fighter pilot and an autonomous aircraft is slated for July 2021, according to a fascinating interview with the outgoing Director of the US Department of Defense's Joint Artificial Intelligence Center (JAIC).

Speaking to the Mitchell Institute for Aerospace Studies as part of its Aerospace Nation interview series, Lt. General John Shanahan spoke passionately at length about his work building the JAIC from the ground up, the challenges facing the US armed forces at the dawn of the AI era, the difficulty of bringing next-gen technologies through the notoriously conservative bureaucracies of the military, and the ethics at the heart of any weaponized artificial intelligence program.

As an aside in the nearly hour-long interview, Shanahan mentioned an email conversation he'd been having with Dr. Steven Rogers, Senior Scientist for Automatic Target Recognition and Sensor Fusion at the Air Force Research Laboratory (AFRL), Wright-Patterson AFB, Ohio. "Cap Rogers and I exchanged emails just this weekend," said Shanahan, "on the work he's doing trying to field, in July of next year, an autonomous system to go up against a human, manned system in some sort of air-to-air. Bold, bold idea."

Shanahan didn't confirm what kind of aircraft would be involved in the challenge, whether it might entail fitting out an older fighter jet with a tactical autopilot system or using something like the Kratos XQ-58A Valkyrie combat drone, which is intended for use as an autonomous escort flying alongside manned F-22 or F-35 fighters, and is already flying.

Comparing the initiative to the early chess matches between world champion Garry Kasparov and IBM's Deep Blue supercomputer, Shanahan said he didn't expect the autonomous system to chalk up its first victory. "Cap's probably going to have a hard time getting to that flight next year where the machine beats the human," he said. "But go back to the DARPA grand challenge. Who finished that first DARPA grand challenge? Nobody. Nobody came close. It might've been about a mile down the road. But how much has played out since then? This is less about beating a human in 2021 – if he does it, great, that'll be a record all by itself – but it's about learning about what it takes to build a different kind of system that's not the kind of thing we're used to building in the past."

"The future of warfare is algorithm against algorithm," Shanahan said. In some sense, it always has been, with humans, human strategies and human organizational systems being the best algorithms we've had available. But looking at how quickly AI has mastered incredibly complex games like chess, go and others, it seems inevitable that our meatware will quickly become outdated.

The full interview is well worth a watch, but we've transcribed a few other comments from Lt. Gen. Shanahan that we felt were of note.

On AI and the Department of Defense

"It is my conviction, and my deep passion, that AI will transform the character of warfare and the Department of Defense in the next 20 years. There is no part of the department that will not be impacted by this, from the back office to the battlefield, from undersea to cyberspace and outer space, and all points in between. Everything could be made better through the application of AI."

On the Joint Artificial Intelligence Center

"As recently as June 2018, the JAIC boiled down to four volunteers with no money. Today, I'm proud to say, we've grown to 185 people, with a US$1.3-billion budget. We've grown so fast that we've exceeded our current spaces and we've moved into a new facility. All of that's happened in 18 months. For the Department of Defense, that's about as fast of a growth curve as you could possibly imagine."

On whether AI is ready for military service yet

"We used to have these discussions like 'hey, this technology's still pretty brittle, it's a bit fragile. Shouldn't we just wait a little while until the technology is better?' No. The absolute worst answer is to stop and wait for technology to catch up. You've gotta learn how to do it."

On what an AI-enabled military future might look like

"In general, smaller, cheaper, more disposable, swarming autonomous systems. And with autonomy comes AI-enabled autonomy. There's a tendency to conflate autonomy with AI-enabled autonomy, but they're two very different things. There's a lot of autonomous systems in the DOD today. There are very few, I'd say really no significant AI-enabled systems ... so, you might have a manned airplane that's quarterbacking a lot of autonomous, swarming systems. I think the only failure we'll have is a failure of imagination. Anything's on the table."

On the ethics involved in AI-enabled warfare

"We have a grounding in the principles of ethics, that we're not going to go out there and just use these things without the standard foundational elements of the Law of Armed Conflict, the International Humanitarian Law … we take that into account from the very beginning. But we have to address it. In fact, we're now starting to sense that we're going to be far enough along in our Joint Warfighting Mission that we're going to have to sit down and do some test cases to work through what's acceptable in the field."

"We get accused in this department of going after killer robots. No commander would want robots with self agency just indiscriminately out on the battlefield making life and death decisions. That would not happen. You would have rules of engagement, all these other things that we do for a living. We'll take those into account."

"But it is developing fast enough that we have to look at the ethical use of artificial intelligence. We're not just going to be leading the government, we're going to lead the world in these things. Because what we don't want to happen is to have China take over this conversation, saying the right things but doing something entirely different. And we know that would be the case."

"We need to put some big bets down. And they are big bets. They're not risk-free bets. But when we look at what China and Russia are doing, especially China, where they're investing, I almost say we can't afford to do it any other way. We've gotta build toward that AI-enabled force of the future, or I think we have an unacceptably high risk of losing. And we're not used to doing that."

On the future role of humans, and how the look of these systems will change

"A lot of people have pondered over the last couple of decades, where do we really need a human in these systems? Are we trying to build the next manned fighter, as opposed to building the system with the best possible capability for the environment it'll face in, say, Indo-Pacific?"

"If you look at the MQ-9 ground control station, are you really trying to make that look like an F-16 cockpit, or just the most functional use of a keyboard and a couple of other things, because that's what the world has evolved to? What we have to do is take account of a different mindset, of people who've grown up differently than a lot of people like me, who have three and a half decades of this behind me and all the old bad habits and patterns we grew up with."

"Maybe we shouldn't be thinking about a 65-foot (20-m) wingspan. Maybe it is a small, autonomous swarming capability. But then I've gotta solve for battery, I've gotta solve for size, weight and power problems, which are going to be a short-term challenge."

On how the JAIC is looking to the business world for ideas

"We should also be taking the best of the lessons coming out of the commercial car industry. And I like talking about this, because it's a cautionary tale. Ten companies, I think about 13 billion dollars or so over the last decade, and there is no level 4 autonomous car available on the road today. A cautionary tale. On the other hand, that's a decade worth of experience we should be pulling into the military. I think it's less about autonomy, and more about all the lessons they're learning by building those capabilities out."
  6. This has to be one of the weirdest electric motorcycle projects we've ever seen. The Emula aims to bring the noise, vibration, lumpy power curves and gear shifting of gasoline-powered motorcycles through into the electric age. Electric motorcycles, the argument goes, are boring. They're silent, so you don't get the hair-raising soundtrack of a screaming engine on full song. You don't have to shift gears, so that's one less fun thing to do, and their power curves are just about linear, with torque available everywhere, so they lack the "character" of an ICE bike. 2Electron, a company from Turin in northwestern Italy, has decided to put all that character back in, and has built a prototype electric motorcycle designed to act as a kind of time machine, letting you experience the bikes of the past on a platform of the future. Thus, the Emula has a big ol' touch screen on the dash, which allows you to choose between several different kinds of old-school gas motorcycle, from 600cc inline fours to 80s-era 250cc two-strokes to 800cc twins. Once you've picked one, it does its best to act like that kind of bike in every way. We're talking custom power curves to match the dyno charts of the petrol bikes. We're talking a fake hydraulic clutch lever and foot shift lever with "realistic feedback" that moves you up and down a series of simulated gears. We're talking speaker systems on the tank and under the seat, playing a pre-recorded engine sound matched to your chosen motorcycle type, simulated gear and simulated RPM – something like the SoundRacer device we had huge silly fun with, oh so many years ago. And to take things even further into the absurd, it's got vibration shakers all over the bike to shake certain bits at certain revs. Eventually, the company plans to offer a wide range of other motorcycle types, so the Emula begins acting like a little history lesson as you flip through the years and the bikes that defined them.
The sheer time, love, diligence and thought it would take to build and program this system – which the company calls the "McFly Core," after Marty McFly from the Back to the Future movies – boggles the mind. The more we think about this project, the more complex and difficult and crazy it gets. Especially when every single one of its features, viewed objectively, makes an electric motorcycle worse. There's something profoundly silly about making a perfectly good electric motorcycle, then taking chunks out of its power delivery to pretend it's an ICE bike, and saddling it with all the other trappings of the last century, just to appease a kind of rider that would never buy an electric in the first place. I mean, vibrating footpegs, for goodness' sake. According to Motorrad magazine, the Emula will have a "Boring Mode" in which it just acts like a high performance, 250 km/h (155 mph) electric sportsbike, and my suspicion is that the vast majority of riders who try this thing out will immediately realize why electrics will be such superior machines as soon as the energy density issue is solved.
  7. When we first saw the Hoversurf Scorpion flying motorcycle, we thought it was "just the vehicle for aspiring amputees." Four exposed propellers spinning dangerously close to the rider's limbs made us cringe, and we wondered how much would be left of the pilot in a crash. The Moscow-based creators of this mad machine, however, threw themselves into testing and demonstration flights with extraordinary bravado, doing manned flights over concrete at altitudes nobody could expect to survive a fall from, a dirt bike helmet and armor their only protection. When the Dubai police force signed a deal to start testing and demonstrating these things, we called it "100 percent a publicity stunt, and probably quite a dangerous one." The mere thought of flying one of these early prototypes near people struck us as a huge safety risk. And now we've seen the first footage of a crash. In a secluded test area, a Scorpion pilot takes off and rapidly accelerates to an altitude of around 30 m (100 ft), at which point, according to Hoversurf, a barometer fails and the bike begins pitching wildly back and forth like a mechanical bull in the sky. The pilot immediately begins to descend, but the aircraft's flight controller can't seem to decide where horizontal is, and it bucks again, out of control as it comes down, eventually crash landing on the two rear props before flipping over backwards on top of the rider, the front two props still spinning. The pilot manages to escape injury, but the hoverbike is totaled. Hoversurf's description on the YouTube video: "The barometer in Dubai refused and an accident occurred – a down from a height of 30 meters. All safety systems worked well, and the pilot was not injured. Safety is our main concern. It is thanks to such incidents that our designs are becoming more safe." We're not sure which "safety systems" they're referring to here, but this was a very lucky crash – not just for Hoversurf and the pilot, but for the entire eVTOL industry.
Honestly, we admire the cojones it takes to be a personal flight pioneer in this emerging space, but there's a reason most manned tests are done over water or on tethers. We're just as keen as anyone to see manned multirotors hit the skies, but these brave pilots have families, and rushing ahead of safety guidelines could set the whole space back if it ends in disaster.
  8. While legged robots are able to perform feats such as climbing stairs, their wheeled counterparts are faster and less complex. The Ascento robot offers the best of both worlds, as it has two jumping legs – each one with a wheel on the bottom. Developed by a team of engineering students at Switzerland's ETH Zurich research institute, Ascento is now in its second incarnation, called Ascento 2. The first version was unveiled last year. When cruising along level floors, the self-balancing robot utilizes its two hub-motor-powered wheels. Once it encounters a vertical obstacle such as a set of stairs, however, it crouches down to preload its spring-equipped legs, then jumps up and forward. In this fashion, it's able to ascend the stairs, one by one. Along with giving it the ability to jump, though, Ascento's legs also allow it to stay upright on uneven terrain. They do so by bending at their linkage points independently of one another, keeping the robot's main body level at all times. With the latest version's new-and-improved "brain," this leg-bending feature additionally lets the robot keep from falling over when struck from the side. And while Ascento can be remotely controlled, it is able to operate autonomously, utilizing cameras and other sensors to both navigate and 3D-map its surroundings. Specs-wise, it tips the scales at 10.4 kg (22.9 lb), has a top wheel-rolling speed of 8 km/h (5 mph), a maximum jumping height of 0.4 m (1.3 ft), and can run for approximately 1.5 hours on one charge of its battery pack. Ascento 2 was unveiled this week, via the online ICRA 2020 robotics conference. Source: ETH Zurich
  9. It's been five years in the making, but the Walkcar from Cocoa Motors is finally up for pre-sale in Japan. The portable electric riding platform is kind of like an electric skateboard but with a deck about the size of a 13-inch laptop. The overall look of the Walkcar hasn't changed too much since 2015, and much of the time since has been spent developing and tweaking the motor. There are now two drive modes – Sport has a top speed of 16 km/h (10 mph) for a reported per-charge range of 5 km (3.1 mi), while Normal will get you to 10 km/h (6.2 mph) for 7 km (4.3 mi). And you'll be able to tackle inclines of 10 degrees too. The platform is fashioned from carbon fiber and aircraft-grade aluminum, is 215 mm long and 346 mm wide (8.5 x 13.6 in) and is said to feature a self-healing paint finish to keep scratches to a minimum. The user rides 74 mm (3 in) above the ground. And the whole shebang weighs in at 2.9 kg (6.4 lb). Two locked wheels to the front are driven by the electric motors, while two unlocked trolley-like wheels to the rear allow for turning. Four sensors are embedded into the upper platform, allowing the rider to control the Walkcar by shifting weight – leaning forward to move off and accelerate, back to slow down and to the sides for direction changes. There's an auto-stop function too, which sees the platform come to a halt when its sensors detect the rider stepping off. The Walkcar looks to be easier to master than a Solowheel, and not as cumbersome to carry as an electric kickscooter. Hopefully it will also prove better at tackling small stones or twigs than rollerskates (quads).
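The weight-shift control described above can be pictured with a toy model. This is purely our illustration: Cocoa Motors hasn't published the Walkcar's control scheme, so the sensor layout, names and mapping below are invented.

```python
# Purely illustrative: four load sensors turn the rider's weight shift
# into throttle and steering, and zero total load means the rider has
# stepped off, triggering the auto-stop described above.

def drive_command(front_l, front_r, rear_l, rear_r):
    """Map four load-sensor readings to (throttle, steer), each in [-1, 1]."""
    total = front_l + front_r + rear_l + rear_r
    if total == 0:
        return (0.0, 0.0)  # no rider detected -> auto stop
    throttle = ((front_l + front_r) - (rear_l + rear_r)) / total  # lean fore/aft
    steer = ((front_r + rear_r) - (front_l + rear_l)) / total     # lean sideways
    return (throttle, steer)

print(drive_command(0, 0, 0, 0))      # rider stepped off: full stop
print(drive_command(30, 30, 20, 20))  # leaning forward: gentle acceleration
```

Normalizing by total load makes the commands depend on how the rider's weight is distributed rather than on how heavy the rider is, which is presumably why a real implementation would do something similar.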
  10. Facebook is testing a new feature for Messenger that allows users to better protect their messages from prying eyes. When enabled, users will need to authenticate their identity using Face ID, Touch ID, or their passcode before they can view their inbox, even if their phone is already unlocked. (The feature relies on your device’s security settings, so however you unlock your phone normally is how you’ll unlock the Messenger app.) You can also set a designated period of time after leaving the app for when you’ll need to re-authenticate. The company is currently testing the new security feature among a small percentage of Messenger’s iOS users, though it could eventually be available more widely, including on Android. “We want to give people more choices and controls to protect their private messages, and recently, we began testing a feature that lets you unlock the Messenger app using your device’s settings,” a Facebook spokesperson said in a statement. “It’s an added layer of privacy to prevent someone else from accessing your messages.” The feature is similar to security settings of many other popular chat apps, including encrypted messaging app Signal, which has seen a surge in downloads in recent weeks. Facebook has been beefing up the security features of Messenger for some time. The company has an encrypted messaging feature, Secret Conversations, and has said it would like to one day make end-to-end encryption a default setting of the app. Those plans, however, are likely still years away.
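The lock-after-timeout behavior described here is easy to picture in code. The sketch below is our own illustration of the general pattern, not Facebook's implementation: the inbox only demands re-authentication once a chosen grace period has elapsed since the app was last left.

```python
# Sketch of the app-lock pattern described above (our illustration, not
# Facebook's code): after the app is backgrounded, the inbox stays
# accessible only within a user-chosen grace period; after that, the
# device's Face ID / Touch ID / passcode check is required again.

class AppLock:
    def __init__(self, grace_seconds: float):
        self.grace_seconds = grace_seconds
        self.last_backgrounded = None  # None until the app is first left

    def on_background(self, now: float) -> None:
        """Record the moment the user leaves the app."""
        self.last_backgrounded = now

    def needs_reauth(self, now: float) -> bool:
        """True once the grace period since backgrounding has elapsed."""
        if self.last_backgrounded is None:
            return False  # still inside the original authenticated session
        return now - self.last_backgrounded > self.grace_seconds

lock = AppLock(grace_seconds=60)
lock.on_background(now=0.0)
print(lock.needs_reauth(now=30.0))   # False: within the grace period
print(lock.needs_reauth(now=120.0))  # True: re-authentication required
```

Note that the actual identity check is delegated to the operating system, which matches the article's point that the feature relies on your device's own unlock settings.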
  11. In its latest sweep, Twitter has removed 32,242 state-linked accounts with ties to the People’s Republic of China (PRC), Russia and Turkey. All of the accounts were suspended for violating Twitter’s platform manipulation policies, the company wrote in a blog post. The accounts will be recorded in an archive of state-linked information operations, which Twitter claims is the only archive of its kind in the industry. Of the 32,242 accounts removed, 23,750 had ties to the PRC. According to Twitter, they were primarily spreading geopolitical narratives that favored the Communist Party of China and pushing “deceptive narratives” about the political dynamics in Hong Kong. These accounts were supported by another 150,000 accounts designed to act as amplifiers and boost content -- those have also been removed but will not be added to the archive. The accounts linked to the PRC focused heavily on Hong Kong. They also promoted messages about the coronavirus pandemic, Taiwan and an exiled Chinese billionaire, Reuters reports. The accounts reportedly had ties to another state-backed operation that Twitter, Facebook and YouTube removed last year for spreading misinformation about Hong Kong. A network of 7,340 accounts from Turkey was also removed and archived. Twitter says they were amplifying political narratives favorable to the AK Parti and demonstrated strong support for President Erdogan. Another 1,152 accounts with ties to Current Policy, a Russian media website engaging in state-backed political propaganda, were removed and archived, Twitter says. Those accounts promoted the United Russia party and attacked political dissidents. This is not the first sweep of state-backed accounts that Twitter has performed. Facebook and Google have also banned misleading and state-backed information campaigns, and Facebook recently began labelling media from state-controlled outlets.
Twitter recognizes this is an ongoing issue and says its goal is to “remove bad faith actors, and to advance public understanding of these critical topics.”
  12. Brave, a browser with some 15 million monthly users, has been redirecting searches for cryptocurrency companies to links that produce revenue for the browser's owners through advertising affiliate programs. Twitter user Yannick Eckl, aka "cryptonator 1337," on Saturday revealed that when he searched for Binance, a cryptocurrency exchange, he was redirected to an affiliate version of the URL that profited Brave. The controversy grew when Larry Cermak, director of research at The Block, a research, analysis and news brand in the digital asset space, began digging into Brave's code on GitHub. He uncovered more redirects to another cryptocurrency exchange, Coinbase, and two cryptocurrency wallet sites, Ledger and Trezor. Brave's autocompletion of a URL to include a referrer link may be a bit dodgy. "This is ethically questionable because it's altering the address that the user thought they were typing to one that advantages Brave -- apparently in the hope that the user will just hit 'enter' and go to Brave's version," said David Gerard, UK-based author of Attack of the 50-Foot Blockchain: Bitcoin, Blockchain, Ethereum & Smart Contracts. "This is what's called a 'dark pattern' in interface design -- one that tries to trick the user into doing things purely for the advantage of the vendor," he said. Brave's failure to warn users that it was doing affiliate marketing appears to violate FTC rules in the United States and CAP rules in the United Kingdom, Gerard said. "Not fully informing users is deceptive marketing, and so that part is clearly unethical too," he observed.

Sorry for the Mistake

In a series of tweets, Brendan Eich, CEO of Brave, acknowledged that the company had made a mistake and would correct it. Brave was trying to build a business that puts users first by aligning the company's interests and those of its users with private ads that pay users, he explained. "But we seek skin-in-game affiliate revenue, too. 
This includes bringing new users to Binance & other exchanges via opt-in trading widgets/other UX that preserves privacy prior to opt-in," he wrote. "It includes search revenue deals, as all major browsers do," Eich continued. "When we do this well, it's a win for all parties. Our users want Brave to live." The autocomplete default was inspired by search query clientid attribution that all browsers do, but unlike keyword queries, a typed-in URL should go to the domain named, without any additions, he explained. "Sorry for this mistake -- we are clearly not perfect, but we correct course quickly," Eich wrote. He denied that Brave was rewriting links clicked on Web pages as well as those typed into the address bar, tweeting "We have never & will not do any such thing." The autocomplete function could be turned off in the browser's settings. That setting is currently on by default, but in the future the default will be "off," Eich said.

Tone Deaf Response

Reaction of Brave users to the mistake was a mixed bag. "Damage done. I'll stop using #brave," tweeted a user with the handle "BitcornRick." "TBH having this as an option is weird by itself," tweeted Sriram Karra. "Who among your target segment would you think will *want* to turn that ON?" To which Matthew Wallace replied, "Well, users that still like the browser and want them to stay solvent so it doesn't disappear?" "Glad to see you are correcting the mistake. You should be more careful if you want to earn people's trust," admonished Aki Rodic. Toth Zoltan tweeted some encouragement to Eich. "Brendan, you guys have made a rocking browser, I really like it," he wrote. "Your honesty is a plus. No one should be against you making money. Till you stay transparent." Overall, though, Brave's responses on Twitter were "tone deaf," observed Gerard. 
"I see Brendan Eich and [Senior Developer Relations Specialist] Jonathan Sampson have been responding to many, many upset users, but they don't seem to understand what the issue is," he said. "And they really don't understand that they've broken users' trust," Gerard continued. "Eich and Sampson seem to think that careful argumentation and using special definitions of words will explain everything and it'll be fine, but they're not showing any understanding of what they did to break users' trust."

No Free Lunch

While many Brave users won't be too upset with the browser's autocomplete-for-cash feature, there is a specific segment who will see the misstep as a betrayal, observed Liz Miller, principal analyst at Constellation Research, a technology research and advisory firm in Cupertino, California. "There's a group of technorati that purposefully and thoughtfully went to Brave, not because the technology was going to be different, but the mindset and the promise of the company were going to be different," she told TechNewsWorld. "That's what's really broken here," Miller continued. Brave's leaders don't understand how they've undermined their users' trust in them, she said. "They're saying their problem was they used this different tag, when the real problem was they didn't see what they were doing was going to be seen as advertising, which users should be compensated for and made aware of," Miller explained. "This is more about transparency than privacy," she added. "I think this came out of the blue and shocked Brave. It had been in a luxurious place of being one of the 'good guys.' You want ad blockers? We've got them. You want something that puts your privacy first? We're going to give it to you," Miller noted. "After being in that rarified air, this is probably the first time they've been called to the mat for something," she pointed out. 
There can be substantial backlash toward a company that makes a product that says it's providing privacy but is mining information, said Rob Enderle, principal analyst at the Enderle Group, an advisory services firm in Bend, Oregon. "It's disingenuous, and people can lose trust in the product and the brand," he told TechNewsWorld. "One of the big problems with the ad model is that to make money, you have to do things that the people using your product would rather you not do, but that's what's paying for the product," Enderle said. "There's no free lunch."
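The mechanism at the center of the controversy can be sketched in a few lines: an address-bar autocomplete that, for certain typed domains, suggests a URL carrying a referral query parameter instead of the plain address the user typed. The following Python sketch is purely illustrative; the parameter name and referral code are invented and are not Brave's actual values, and real browser autocomplete is far more involved.

```python
from urllib.parse import urlencode

# Hypothetical affiliate table. The "ref" parameter and its value are
# made up for illustration -- they are not Brave's actual referral codes.
AFFILIATE_SUGGESTIONS = {
    "binance.us": {"ref": "EXAMPLE123"},
}

def autocomplete(typed: str) -> str:
    """Return the address-bar suggestion for a typed domain."""
    params = AFFILIATE_SUGGESTIONS.get(typed)
    if params:
        # The contested behavior: completing to an affiliate-tagged URL
        # rather than the bare domain the user typed, so that simply
        # hitting Enter credits the browser vendor.
        return f"https://{typed}/?{urlencode(params)}"
    return f"https://{typed}/"
```

The "dark pattern" Gerard describes is visible in the branch: only domains with an affiliate deal get the tagged completion, and the user sees a subtly different URL than the one they typed.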
  13. Scientists have grown tiny human livers that functioned after transplant into rats. Trying to improve transplant numbers and outcomes is a major research area for biologists. The scientists began by making "decellularized scaffolds" on which human stem cells were grown into liver cells. Scientists from the University of Pittsburgh and their colleagues have grown tiny human livers and successfully implanted them into rats. The livers began as stem cells that were cultivated into liver and vascular cells that form a complete microenvironment. “The organ-like microenvironment further matures some liver functions and produces tissue structures similar to those found in human livers,” their paper in Cell Reports explains. In their summary, the scientists say previous research has mostly used existing structures of rat cells to grow their organlike environments. They explain: “Whereas previous studies recellularized liver scaffolds largely with rodent hepatocytes, we repopulated not only the parenchyma with human iPSC-hepatocytes but also the vascular system with human iPS-endothelial cells, and the bile duct network with human iPSC-biliary epithelial cells. The regenerated human iPSC-derived mini liver containing multiple cell types was tested in vivo and remained functional for 4 days after auxiliary liver transplantation in rats.” This cutting-edge science begins with human volunteers who gave skin cell samples. These were reverse engineered into stem cells and then redirected to become different needed cells to form a liver. From there, the scientists seeded a “liver scaffold”—a rat-based extracellular matrix (ECM) structure with, miraculously, its cells removed—with their new human liver cells. “The goal of decellularization is to remove cells while maintaining the structural, mechanical, and biochemical properties of the ECM scaffold,” the researchers explain. There were traces of DNA left in the rat scaffolds, though. 
“DNA content, a commonly used marker of decellularization, was 3 [to] 10 times higher than in previous studies, which may lead to an adverse immune response if animal-derived scaffolds are to be used in humans, however, this remains to be tested.” Reports say while the resulting liver-growing process has taken 10 years to perfect, this batch of miniature livers took under a month to grow—compared with two years in the human body. The team then transplanted the livers into five specially prepared rats, which had their immune systems suppressed to encourage the transplant and their liver lobes removed to encourage regeneration. Five is a tiny sample, to be sure, but all five livers worked during the four-day experimental period, producing and secreting bile and urea. Some had problems around the graft site, which makes sense for an almost completely human organ transplanted into a rat. “Harvested human iPSC-liver grafts measure 2.5 [to] 3 [centimeters] and showed liver-like tissue texture,” the scientists say. Despite a handful of understandable problems, they feel optimistic about the future of lab-grown human livers on decellularized scaffolds. They conclude: “Future studies should concentrate on procedures to allow continued vascular development using, for instance, nanoparticles and growth-factor-hydrogel modification of acellular scaffolds. The strategy shown here represents a significant advance toward our understanding of the production of bioengineered autologous human-liver grafts for transplantation.”
  14. Unmanned warships could link up and cross oceans together. Unmanned ships are smaller than manned ships, but that smallness makes them less capable of crossing vast distances. “Sea Train” would see many unmanned ships physically link to one another to overcome wave resistance, allowing these ships to make the longer trips. While the U.S. Navy is busy buying a new generation of unmanned warships to serve alongside manned vessels, those smaller “ghost ships” may need to team up to make crossing oceans easier. Future unmanned warships could journey across oceans physically connected to one another to make the trip more efficiently. DARPA’s new “Sea Train” concept is investigating the idea of ships that tether, like the individual cars in a train, in order to overcome wave resistance. The Navy is plunging into the brave new world of unmanned surface vessels, or USVs. The service plans to buy a wide range of USVs, from medium-sized vessels (39 to 154 feet long) all the way up to large vessels (200 to 300 feet long). While small and difficult to detect, medium-sized vessels, or MUSVs, are also less capable of making a voyage of thousands of miles to hotspots such as the South China Sea and the Baltic Sea. The ships’ size restricts the amount of fuel each can carry, and wave resistance causes fuel-burning drag. While MUSVs could always be physically carried into those areas—think ships like the M/V Blue Marlin, which can lift an entire U.S. Navy destroyer—larger carrier ships would need to be leased for service, or even built for the Navy ahead of time. And the whole point of building MUSVs is to reduce the amount of people and effort it takes to field a functional warship. The Sea Train concept, C4ISRnet reports, involves MUSVs rendezvousing at sea, tethering to each other, and making a long distance trip together. The first ship in the formation encounters wave resistance—the rest of the ships, not so much. 
The many smaller unmanned ships together form one large virtual ship capable of self-deploying thousands of miles without the need for refueling. In the event of a crisis, the Navy could sortie a half dozen MUSVs from Guam in the Pacific or Rota in the Atlantic. The ships would link up, sea train to the area of operations, and then disperse, each ship going its own way. Once the crisis is over, the surviving ships could sea train back to their home port. DARPA’s effort to develop Sea Train will consist of two 18-month developmental and testing periods, followed by a reduced scale model that will provide a proof of concept. If successful, Sea Train could be deployed on unmanned U.S. Navy ships within a decade. (DARPA video: Sea Hunter, a prototype MUSV that recently made the trip from California to Hawaii, completely unmanned.)
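The fuel logic behind Sea Train can be sketched with a toy range model: if the lead ship pays full wave-making drag and each trailing ship pays only a fraction of it, the average drag per ship (and so fuel burn per mile) falls as the train grows. Every number below is invented for illustration; DARPA has not published drag figures, and real hull hydrodynamics are far more complex.

```python
def train_drag_fraction(n_ships: int, trailing_drag_ratio: float = 0.4) -> float:
    """Average wave-drag per ship in an n-ship train, relative to sailing alone.

    Assumes the lead ship pays full wave drag and each trailing ship pays
    only `trailing_drag_ratio` of it -- both numbers are illustrative
    assumptions, not published DARPA figures.
    """
    if n_ships < 1:
        raise ValueError("need at least one ship")
    total_drag = 1.0 + (n_ships - 1) * trailing_drag_ratio
    return total_drag / n_ships

def range_multiplier(n_ships: int, trailing_drag_ratio: float = 0.4) -> float:
    """Rough range gain, assuming fuel burn scales linearly with wave drag."""
    return 1.0 / train_drag_fraction(n_ships, trailing_drag_ratio)
```

Under these made-up assumptions, a six-ship train sees half the per-ship wave drag of a lone vessel, roughly doubling its unrefueled range, which is the intuition behind linking a half dozen MUSVs for an ocean crossing.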
  15. Some say “slaughterbots” are as dangerous as nukes. Military and consumer drones have evolved quickly in the post-9/11 era, as technology rapidly advances and brings new capabilities online. The utility of killer drone swarms, in which large groups of drones cooperate to hunt and kill people, makes their development likely. The ability to kill on a large scale, coupled with the inability of a drone swarm to distinguish between combatants and noncombatants, means such weapons should be classified as weapons of mass destruction, with heavy penalties on their use. Swarms of autonomous kamikaze drones, capable of hunting down and killing people, should be considered weapons of mass destruction. That's the conclusion of an article at West Point’s Modern War Institute blog, which argues such drones should be treated like other WMDs, including nuclear, chemical, and biological weapons. These swarms, which are autonomous, work in groups, and carry lethal payloads, could be unleashed against armies or cities, to equally deadly effect. Zachary Kallenborn's article begins with the 2017 viral video “Slaughterbots” (below), which mixes an imaginary TED-style talk by a defense contractor with fictional news reports of armed drone swarms unleashed on universities, cities, and the U.S. Senate. The video, the brainchild of a computer science professor at UC Berkeley, was shown at the United Nations Convention on Conventional Weapons and meant as a warning against the proliferation of killer drone swarms. Several factors are driving the idea of such swarms. Advances in artificial intelligence and communications will eventually allow drones to fly and accomplish missions cooperatively. Miniaturized sensors and payloads will enable drones to detect, track, and engage individual people, delivering tiny, but lethal explosive charges against the human body, bypassing helmets and body armor. 
At the same time, modern mass production can churn out millions of technologically complex consumer products, like the iPhone, on a daily basis. All of this could very well combine to make swarms of killer drones a reality. The combination of low-manpower requirements and high efficiency would make them an attractive proposition for military forces worldwide. No matter how horrific, the promise of a military advantage on the battlefield means that some government, at some point, will likely pursue their development. If these weapons pass into the hands of civilians, much like shoulder-fired surface-to-air missiles and other types of military weapons, they could cause harm on an unprecedented scale. Kallenborn says if swarms of thousands of armed, fully autonomous drones can kill thousands of people, that's very much within WMD territory. Legally, he says, killer swarms fit the definition of WMDs due to their scalability. One drone might kill one person, but thousands of drones could kill thousands. Kallenborn also asserts that drones may not ever be able to discriminate between combatants and noncombatants, resulting in soldiers, wounded soldiers who are no longer combatants, and civilians alike being killed indiscriminately. How do we prevent slaughterbots from becoming a reality? We treat them as WMDs, which means taking a page from the counter-WMD playbook. This includes the U.S. government taking a strong stand against their deployment, backing efforts to prevent their development and proliferation, and eventually deciding whether or not their use would constitute a “red line” that prompts military action. It's important, Kallenborn argues, to get ahead of the technology and establish norms and penalties before killer drone swarms are released into the wild.