On integrated experiences

What can the integration of hardware + software + ML + custom components achieve for a company? Experience gains for devices or services powered by this integration will provide the answer, either immediately or over the longer term.

In recent news
– Apple launched the M1 Pro and M1 Max. Each is “loaded with advanced custom technologies that help push pro workflows to the next level”. The technology includes on-device machine learning, integrated controllers for better I/O, a custom image signal processor for computational video and photography, and improved security.
– Google launched the Tensor chip. It promises to keep pace with the latest advancements in machine learning (ML). It’s an attempt to move from ‘a one-size-fits-all piece of hardware into a device that’s intelligent enough to respect and accommodate the different ways we use our phones’. It will power speech recognition, computational photography, security and much more.

Both companies talked about experiences that other technology choices cannot deliver as effectively.

The M1 Pro and M1 Max have custom components for ProRes video. The iPhone already gives you the option of recording and editing videos in this format on the device, and the new laptops now offer hardware support through these processors. That should certainly speed up workflows for professionals working in this space. Edits in Final Cut Pro, DaVinci Resolve or Adobe Premiere that use ProRes already perform better because of support for the Metal API, and these chips will bump up the speed even further. The ecosystem is internal. If these devices are custom-made with creators in mind, there may be other devices on the horizon focused on other professional workflows, where the combination of hardware, software and custom components makes for a big performance boost.

Google, with its Tensor chip, is on a similar path for its mobile phones. It has the Android platform, plus machine learning and cloud infrastructure. The machine learning on the chip allows it to create unique computational photography experiences. So whether you are taking photos or videos, there is a level of computational ability that the company felt was not possible with earlier off-the-shelf chips.

Since these launches, I have been reading reviews and insights on these devices, but I was most curious about the response of the other technology companies that provide off-the-shelf components for computing and mobile devices. It is not as if other desktop or mobile computing options cannot deliver that performance. Intel and Qualcomm supply a large ecosystem of vendors, and compatibility with different combinations of hardware and versions means their components are accepted across multiple devices. For example, Intel recently introduced the Alder Lake processors, which are supposed to have comparable if not better performance.

Finally, the choices come down to power and compute: will customers prefer these new integrated devices or a mix of devices? Another point is extended support, where the company offering the integrated experience has a better chance of delivering a unique service experience as well.

There is much to think about on integrated experiences.

Additional reading

https://www.apple.com/newsroom/2021/10/introducing-m1-pro-and-m1-max-the-most-powerful-chips-apple-has-ever-built/
https://blog.google/products/pixel/introducing-google-tensor/
https://arstechnica.com/gadgets/2021/10/intels-12th-gen-alder-lake-cpus-will-try-to-make-up-for-rocket-lakes-stumbles/

More to imagine and build

I have been experimenting with the functionality of Metahuman Creator by Unreal Engine. This is still in the beta stage, but a few things are clear.

  1. The metahumans you see in the attached image are based on the sample models in the beta. You can do a fair bit of customisation: facial features, hair and body type. There are some sample clothes and colours to experiment with. I am sure more functionality and customisation will be possible.
  2. I started with a long- and white-haired metahuman and changed features to visualise a younger version of it. The other was a younger metahuman with short hair, reimagined with a bit more white hair. Not quite perfect, but it still gives you an idea of the tool’s potential.
  3. The exciting bit is that you can take this metahuman (via Quixel Bridge) and put this into a scene in Unreal Engine and perhaps other platforms.
  4. You take it to the next level with a tool like iClone. Lipsync, motion capture and more, but that’s for another day.
  5. Gaming and visual effects are the obvious applications, but there are clearly more scenarios.
  6. Learning experiences, pre-visualising realistic walkthroughs, and linking back to other tools are starting points.

People working on developing and visualising personas would find this creator very useful as well. More to imagine and build.

References

https://www.unrealengine.com/en-US/metahuman-creator
https://www.reallusion.com/iclone/live-link/unreal-engine/metahuman/default.html

Note Taking Process and Apps

Change is the toughest thing to track. Sometimes the change is rapid, but at other times it can be subtle and barely noticed. Yet it may have made a big difference to someone’s experience.

Sometimes these changes take place in our own area of work, but there are more likely changes in other areas that can make a tremendous impact in our sector.

Should you track changes? Yes. Reading about them and having a diversified portfolio of reading interests is one thing, but making notes about something interesting is equally essential. I have had a habit of reading since my school years; I didn’t make as many notes back then, but the habit has grown.

Sense-making, customer experience work, or whatever term you prefer: there is a lot to read, look at and understand to make sense of the customer journey for clients. For example, if a client’s business engages with customers in stores, clinics and many other formats, there is a lot of visual information by way of messaging, equipment and so on. It is helpful to know, for example, how digital signage is being used by different customers in various locations. Work-related reading could involve sector-specific insights, market research documents, books and more.

A Power Platform application serves as a visual repository for the team’s notes from market visits. It links back to SharePoint, and everyone has access to the information they need. Another application is a business news tracker on the specific topics each of us works on. These apps were developed quickly and have served us well over the years.
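The news tracker itself lives on Power Platform, but the core idea behind it — surfacing only the news items that mention a topic someone follows — can be sketched in a few lines of standard-library Python over an RSS feed. The sample feed and function name below are illustrative assumptions, not part of the actual app.

```python
# Minimal sketch of topic-based news filtering over an RSS feed.
# SAMPLE_RSS and matching_items() are illustrative, not from the real tracker.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>Retail digital signage trends</title><link>https://example.com/1</link></item>
  <item><title>Quarterly chip supply update</title><link>https://example.com/2</link></item>
  <item><title>Digital signage in clinics</title><link>https://example.com/3</link></item>
</channel></rss>"""

def matching_items(rss_xml: str, topics: list[str]) -> list[str]:
    """Return titles of feed items that mention any tracked topic."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        # Case-insensitive match against every topic the reader follows
        if any(topic.lower() in title.lower() for topic in topics):
            hits.append(title)
    return hits

print(matching_items(SAMPLE_RSS, ["digital signage"]))
# -> ['Retail digital signage trends', 'Digital signage in clinics']
```

In practice a real feed URL would replace the inline sample, and the matched items would be written to a shared store such as SharePoint, as the Power Platform version does.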

Continue reading “Note Taking Process and Apps”

Multilingual Digital Experiences

The stay-at-home era has made it necessary for companies to offer services via digital channels, so websites, voice assistants, chatbots and apps have been put to use.

The question we need to ask is whether these tools are enabled for multilingual experiences. A cursory scan of multiple company sites reveals that there is much work to be done. India has many languages, and to serve customers with varying levels of language proficiency properly, companies’ digital tools will need to communicate with them in the language of their preference.

We have implemented multilingual experiences for more international clients than local ones. Indian companies have perhaps prioritised other features, but this is undoubtedly going to change.
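One basic building block of a multilingual web experience is negotiating the visitor’s preferred language from the browser’s Accept-Language header. Here is a minimal standard-library Python sketch; the function name, the sample header and the supported-language list are illustrative assumptions, not taken from any client implementation.

```python
# Sketch of server-side language negotiation from an Accept-Language header,
# e.g. "hi-IN,hi;q=0.9,en;q=0.8". Names and language lists are illustrative.

def pick_language(accept_language: str, supported: list[str], default: str = "en") -> str:
    """Return the best supported language tag for an Accept-Language header."""
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0
        else:
            tag, weight = piece, 1.0
        prefs.append((tag.strip().lower(), weight))
    # Try the visitor's languages from most to least preferred
    prefs.sort(key=lambda p: p[1], reverse=True)
    for tag, _ in prefs:
        base = tag.split("-")[0]  # "hi-IN" -> "hi"
        if tag in supported:
            return tag
        if base in supported:
            return base
    return default

# A Hindi-first visitor on a site that supports English, Hindi and Tamil
print(pick_language("hi-IN,hi;q=0.9,en;q=0.8", ["en", "hi", "ta"]))  # -> "hi"
```

A production site would layer an explicit in-product language switcher on top of this, since the browser setting is only a starting guess at the customer’s preference.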

Some of the key learnings from our work are

Continue reading “Multilingual Digital Experiences”