Featured

How Was 2022 for Me

I am not a person who writes year-end notes about what went well and what did not. However, this time I am tempted to write one, because

1. I wanted to resume serious content creation

2. 2022 is a year in which I had more control over my life.

So what happened to me in 2022? Let's check.

  1. Completed my UCLA PGPX course. Though it is not an evaluated course, I am happy that I put in my best efforts.
  2. Progressed on both material and spiritual growth.
  3. Able to manage my time better.

To give a fact, I was able to consistently dedicate time for workouts and daily prayers. Below is my workout schedule:

  • Day 1: Jogging for 30 minutes
  • Day 2: High-intensity workout for 30 minutes
  • Day 3: Yoga and weightlifting for 30 minutes

And then it repeats. Sometimes I take rest on the 4th day. I had the motivation to work out on weekends too. Apart from travel dates and hospital dates, I was mostly consistent.

Below are the outcomes

  1. Able to consistently wake up at 4.30 to 5 AM every day
  2. Completed reading the Bhagavad Gita. It started two years ago with a goal of one sloka per day.
  3. Able to perform Ganesh puja for 10 days following Vinayakar Chathurthi.
  4. Started reading Maha Periyava’s Deivathin Kural. Again, a couple of pages per day.
  5. Started reading “Presence”
  6. Able to restrict my time spent on Facebook.

In hindsight, I believe that I could have used my personal time much better. I should have spent less time on office work.

Goals for 2023

  • Blog more frequently. At least one post every two weeks, on digital transformation.
  • Complete courses on DSA and System Design.
  • Complete Kubernetes course and certification.
  • Improve on cognitive thinking.
  • Reduce time spent on Twitter.

Iwo Jima

Any incident or event has multiple parties and participants involved. Have you noticed a third-party storyteller making two movies out of the same situation, each covering one party, so that both sides are shown? That too for an incident like a war? And both films neutral, without any prejudice?

Recently I got a chance to watch two movies, “Letters From Iwo Jima” and “Flags of Our Fathers”, back to back on a long-haul flight. Of all the options, why did I choose these two movies? There is an interesting backstory to it. I watched “Flags of Our Fathers” in the late 2000s in Hyderabad. As a fan of the drama genre, I liked it very much. I was looking forward to watching it again. Cut to 2022; I was delighted to see this movie’s name in the catalogue. I also saw the title “Letters from Iwo Jima”. I thought it would be nice to see both films to get the complete picture.

Both are nicely made dramas set against the backdrop of war. However, the degree to which they touch the personal side of the soldiers differs. In “Letters from Iwo Jima”, we can visibly see the pain of civilians forced into army platoons. Their dilemmas, priorities and how they execute commands are captured in a heart-touching way. We can see that the Japanese had much to fight against (including diarrhoea) apart from the Americans. The grace on the commander’s face brings a lot of credibility to the narrative.

On the contrary, “Flags of Our Fathers” shows the guilt of a soldier carrying pride that another person deserves, how fame is short-lived, and how contacts might not help when in need.

A few observations on the war

  1. Both the Japanese and American leadership did not support their soldiers. While the Japanese did not provide air-force help, the American leadership committed only a few days to the battle to start with.
  2. The Japanese started the battle with low morale. This amplified their self-doubt and weakened their fighting spirit.
  3. The Japanese majors did not trust their superiors and did not execute the orders.
  4. Neither army's leadership cared for its soldiers. There is a scene where a soldier falls from a ship by mistake and is ignored rather than rescued.
  5. The brutal American politicians exploiting sentiment and raising money for the war show the administration's priorities.
  6. War is won based on mental strength, not just weapons.
  7. The life of an ordinary soldier is the same, whether you are on the winning side or the losing side.

While most movies about soldiers and war have an emotional angle, the interactions between the soldiers are usually high on adrenaline. In both of these, the emotions are very subtle, along with the insecurities a regular human faces.

Last but not least, both of these movies were directed by Clint Eastwood. Do watch them. Further, each movie is an adaptation of a novel with the same name. Do read them as well.

Enroute to Mahakumbh

Starting from Secunderabad on 12/02/2025 to Prayagraj for the Mahakumbh. Taking 12771 to Nagpur and then a bus from Nagpur at 3:30 PM on 13/02/2025.

Till a few months ago, I didn't know of the Mahakumbh. However, as it started, something in me itched to take this trip. I am going alone.

While I could afford a flight (which costs around 40k round trip), I chose train and bus. Two main reasons for the same:

  1. Spirituality should not be expensive
  2. The trip gives me personal time and space

There were a lot of challenges which could have deterred me. Be it work schedule (we have a go-live next week), personal (I have the grahapravesam of my new house on 23 Feb) or external (heavy crowd and traffic for the Kumbh).

However, an inner voice told me to still persist. I am taking the trip trusting in God.

The last decade has been an awakening experience for me. I could sense myself distancing from most worldly pleasures. I hope the Mahakumbh makes me better.

I purposely made it a frugal trip by taking a sleeper one way. For the bus, only AC buses were available. Though I have motion sickness, I didn't have any choice other than an AC bus. The only consciously expensive choice was 3AC while returning from Nagpur to Hyderabad.

2024 – A Recap

As I step into 2025, I am recollecting how 2024 was for me.


What worked well

  1. At least for the first 8 months, kept the rigour of waking up early and reading a lot.
  2. Spent a couple of weeks on focussed study of GenAI. Tried new things.
  3. Consistent with efforts on workout and exercise.
  4. Started Sudharshan Kriya. Helped me get mental peace and calmness during rigorous work hours.
  5. Better health for my mom. Got her knee replacement surgery done.
  6. Able to do Nithyakarma (SandhyaVandhanam) more frequently.
  7. Better journey in self-realization. Started being happy with whatever the results were. Reduced expectations.

What did not work well

  • While the efforts were there, results were below par. Weight remained the same.
  • No new creative ideas that I thought of or stumbled upon.
  • No major trips. Confined to home and office a lot of the time. Trips I made to Chennai were also packed with personal commitments.

Work and Career

  1. Office work became very challenging with new responsibilities (More people officially reporting to me)
  2. Decent amount of work travel. Except for the trip to Bangalore in April for a workshop, the others were hectic as usual.

2025 Goals

  1. Focussed efforts with professional assistance for weight loss.
  2. Better team and work management.
  3. Improve family relations.
  4. Amarnath Yatra
  5. Blog more. Spend time on it.

Getting Smart People to Own Responsibility

People management is always demanding and exhausting. While challenging, the solutions depend on the type, expertise and seniority of the people you manage.

As a manager of a team of solution architects within a large program of over 100 people, I understand the importance of a tailored approach. With approximately 9 architects, each leading a squad, most teams manage their backlog effectively. However, one squad consistently struggled with this. That squad had an architect; let's call him Mr. LateNighter.

Continue reading “Getting Smart People to Own Responsibility”

Tricky Decision Making for Solution Architects

A unique challenge and exciting factor in the role of a Technical (or Solution) Architect is the daily decisions you take. While you might not get to design a rocket to Mars, even a small decision on your project might make a big difference to a wide variety of people. This includes teams involved in product delivery (UX team, development team, testing team), the operations team (handling day-to-day operations and reporting), the support team (L1, L2, L3), end users of the product, etc. In this article, I will explain a unique scenario (Transaction ID association for APIs) and the challenges in our decision-making process.

This post has been cross posted here


Project Background

My team (of 100+ members) developed a net banking solution for one of the biggest private-sector banks in India. The solution was built on top of the experience platform my company has developed. We had around eight squads, each with 3 UI developers, 3 backend developers, 3 QA, 1 Business Analyst, 1 Solution Architect, and 1 UX designer. We worked closely with the Bank's IT and operations teams as part of requirement elicitation, design and testing. Like many other projects in 2024, we were agile, and the solution is based on a microservices architecture. While we were developing net banking, the Bank was creating a mobile app using its in-house engineering team.

Requirement

One essential requirement from the operations team was that a string uniquely identify every journey and sub-step in a journey. That string is called a Transaction ID. The operations team uses it for various purposes. Key purposes are:

  1. Hourly reporting to the business team on the success rate of various journeys, and the top reasons for failure across journeys.
  2. The L1-L2 team will identify and trace various activities done by a user during a session.
  3. Configure user-friendly messages for error codes returned for various journeys by backend systems and product processors.
  4. Blackout or temporarily stopping a user from carrying out specific journeys (due to regulation, supporting system downtime, etc.). For example, in India, new deposits are expected not to be accepted on the financial year closing day of March 31st.


When we started the program, this requirement was unknown to us, so we could not design for it before the development kick-off. After a few sprints, the operations team highlighted it in one of the sprint-end demo sessions. This expectation/requirement came as a shocker. However, since the management priority was to develop and showcase features whose value the business teams could see and realize from the platform my company had sold, efforts to create such common features were not prioritized. Only when we reached a point where it could not be delayed further did the management take it up in project planning.

As a team, the architects discussed various ways to associate different journey steps with a unique identifier. Below are the options we considered:

  1. Have a filter in the backend so that the transaction ID can be set in the MDC context based on the URL pattern. This can then be used for logging, observability, reporting, and other purposes.
  2. Have the UI send the transaction ID in the header for each request.

Below are the considerations the architecture team weighed before making the decision:

  • Architecture and design fitment.
  • Performance and reliability.
  • Scalability for new features
  • Ease of maintenance and change

A big challenge was that the platform did not offer such a capability and could not be enhanced within the project timeline. So the implementation team needed to build this as a one-off feature.

UI setting the transaction ID

Architecture and Design Fitment

Every API being invoked should set a value in the header. Setting the transaction ID in the UI is architecturally very simple, as there is no complex architectural change. However, this option increases the complexity of the UI logic across Angular views and components.

Performance and reliability

Since there is no additional runtime computation, this option does not degrade performance. If a transaction ID is not received for any journey, we know that the issue is only in the place where it is assigned. This makes troubleshooting very easy.

Scalability for new features

As the number of journeys increases, the time to incorporate the transaction ID increases proportionately. Since it depends on development, it also demands clarity on many aspects, like Transaction ID, before the development starts. When the number of stakeholders is high in decision-making (which is true in our case as the operation team needs to decide on Transaction ID based on journey design in both web and mobile channels and finalize the value), this delays the development process. If there is a need to change the transaction ID, the code needs to be changed.

However, this approach allows the backend to scale to new channels quickly. Since the transaction ID is left to the channels, the backend stays headless; when multiple applications or channels leverage the services, the complexity of the backend services does not increase.

Ease of Maintenance and Change

If the transaction ID needs to be changed, the front-end code needs to be changed and re-deployed. This makes this option maintenance-unfriendly.

BackEnd Determining the Transaction ID

In this option, a request filter matches the incoming service path against a (cached) map of API paths to transaction IDs. The transaction ID for the given API path is retrieved and set in the MDC context.

Architecture and Design Fitment

At the outset, it looks like a natural fit. But below are the challenges

  1. If the same API is invoked from different journeys and the transaction ID needs to be different, the backend service requires an extra attribute, either as a header or a query parameter. This means that the front-end code still needs to be changed. The key for the Transaction ID will now become API path + extra attribute (journey name, for example). That said, the share of front-end code that meets this criterion is around 30% for our application.
  2. Many APIs have dynamic values in path parameters. How do we get a pattern from an incoming API path? This is challenging, and we were not able to find a foolproof solution.
  3. Per microservice standards, the same API can have different implementations for GET, PUT, POST, and DELETE. However, transaction IDs need to be different. This means that in addition to the API Path and journey name, the HTTP Method will also be part of the key.

As you can observe, key generation and determination become very complex as the application and journeys grow.
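
The key-generation scheme described above can be sketched as follows. This is a minimal, hedged illustration, not the project's actual filter: the class name, paths, journey names and transaction IDs are all assumptions, and in the real filter the resolved value would be put into the SLF4J MDC (e.g. `MDC.put("txnId", ...)`) rather than returned.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

/**
 * Sketch of the backend option: a cached list of
 * (HTTP method + API path pattern + journey) -> transaction ID rules,
 * matched against each incoming request. Path patterns are regexes so
 * dynamic path parameters (e.g. an account number) can be handled.
 */
public class TransactionIdResolver {

    /** One mapping row: method + path pattern + journey -> transaction ID. */
    private record Rule(String method, Pattern path, String journey, String txnId) {}

    /** Wildcard for mappings whose transaction ID does not vary by journey. */
    public static final String ANY_JOURNEY = "*";

    private final List<Rule> rules = new ArrayList<>();

    public void register(String method, String pathRegex, String journey, String txnId) {
        rules.add(new Rule(method, Pattern.compile(pathRegex), journey, txnId));
    }

    /** Resolves the transaction ID for a request, or null when nothing matches. */
    public String resolve(String method, String path, String journey) {
        for (Rule r : rules) {
            boolean journeyOk = r.journey().equals(ANY_JOURNEY) || r.journey().equals(journey);
            if (r.method().equals(method) && journeyOk && r.path().matcher(path).matches()) {
                // Real filter: put this into the MDC for logging/observability.
                return r.txnId();
            }
        }
        return null; // unmapped request: the troubleshooting case noted above
    }
}
```

A null result corresponds to the "transaction ID not set" troubleshooting case discussed under reliability: the miss happens inside the filter, so it is harder to pin down than a UI-assigned header.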

Performance and reliability

Whenever a request is received, a small computation is done to determine the transaction ID. This adds to the latency (even if it is less than ten milliseconds) and is also a point of failure. When the transaction ID is not set, troubleshooting becomes complex.

Scalability to new features

Whenever new features or journeys are added, the only change is to update the configurations, which maintain the mapping between API path, method, journey name, and transaction ID. This makes it very easy and scalable for new features.

Ease of Maintenance and Change

If any change is required for the transaction IDs, no code change is involved. Only configuration changes are required.

Final Decision

As you might see, each option is drastically different, with its strengths and weaknesses. The impact of each strength and weakness is very high, making decision-making very difficult. We also need to take into account the stage of the project and the changes that are required for the developed epics.

We assumed that transaction IDs would mostly stay the same, got this approved by the operations and business teams, and decided to use the front-end option. Changing the functionalities already delivered had a short-term impact, and every journey developed after that underwent a painful and time-sensitive discussion on transaction IDs. However, the simplicity of architecture and implementation outweighed the ease of maintenance and flexibility.

Success Factors for Performance Testing for Greenfield Implementations

Every manager and senior leader acknowledges the importance of performance testing. However, stakeholders must make many non-technical and strategic decisions, and they should be aware of and prepared to make them. When starting a large greenfield project, while the leadership and SI vendors prepare the ways of working, aspects like performance testing should be discussed upfront.

This has been cross-posted here


When starting the discussion about performance testing, many people think and talk only about tools like JMeter or LoadRunner. Another critical aspect that everyone agrees on is the load split-up and load profile for various screens/APIs. While these are essential and do the heavy lifting, the following should precede script preparation:

  1. Performance testing sign-off KPIs.
  2. Performance strategy: bottom-up or top-up testing approach.
  3. Dependent services. Approach and tools for virtualizing and mocking them.
  4. Test data preparation.
  5. Environment preparedness. The environment here refers to the primary, other applications, and dependent services invoked during the performance testing.
  6. Which metrics and benchmarks should the testing team consider when evaluating the test results? The testing team looked at our project's 99th-percentile response time, while the 95th percentile would have sufficed. Earlier alignment on such small guidelines will save a lot of time.
  7. Impact of other changes (e.g., security requirements and UI changes) in performance testing.

In an agile environment, these challenges grow if stakeholders do not make critical decisions at an appropriately early stage. The hidden costs could be much higher as they affect the project's go-live date.

In a greenfield fintech implementation, we faced various challenges. Given that we had to support 150K concurrent users with a user base of around 60 million, it took a lot of work to find a performance-testing expert who could anticipate the challenges and guide us effectively. Most performance-testing experts limit their expertise to tools and rarely have the big picture.

Our application depended on various enterprise services and systems hosted on-prem, which do not have a performance-testing environment. While their production systems are stable and handle such a load, the lack of a performance-testing environment forced us to adopt the following strategy:

  • Initial benchmarking and sign-off testing will be carried out using virtualized services.
  • Once the benchmarking is successful with virtualized services, test against one of the lower environments of the external services and systems. Extrapolate the results accordingly.

Key Decisions

Bottom-up or Top-Up testing

Bottom-Up Testing

In bottom-up testing, we test with incremental loads, find issues, fix them, and proceed to the next iteration. While this approach is intuitive, it works only in the following scenarios.

  • When the performance testing happens along with feature development.
  • The architecture and design of critical components have been independently agreed upon and validated for performance.
  • UI development is complete, and the product and testing teams have approved the functionalities tested and will not be changed immediately. If the look and feel of the pages change, the performance test scripts also need to change.
  • The application and testing teams have finalized the data ingestion strategy for performance testing.

Top-Up Testing

In top-up testing, you start with a reasonably high load (e.g. 10% of the peak load), test it, and act on the findings. Below are the key benefits:

  • The performance-testing team catches key architecture and design decisions that impact performance early. In the bottom-up approach, only minor design issues local to a functionality are observed in the initial days. In any business-critical, high-impact application, problems that appear at large scale due to architecture limitations must be identified as early as possible. While individual infrastructure services might be scalable, their usage in the project-specific context can be surprising. In our application, we used ActiveMQ to transmit a heavy payload, which affected the environment's stability as the ActiveMQ space filled up quickly. While ActiveMQ can generally handle the load, in our specific context we realized there were better fits for our use case.
  • Get an overview of the infrastructure required to support the initial load or phases, which helps with budget forecasting and planning. While any changes to improve performance will only decrease the cost, the worst-case scenario has been accounted for.
  • Roles and bottlenecks across different teams are identified early in the project lifecycle. This early identification gives all the teams involved, including the infrastructure, network, and application teams, the time required to troubleshoot and fix problems.

When to use the Top-Up approach

  1. Architecture and design evolve as the application develops.
  2. When the application is complex, with various infrastructure components and services that are new to the project members and the organization (at this scale).
  3. When the organizations are ready to invest in additional performance testing infrastructure.
  4. When the quicker time to market outweighs the additional infrastructure cost spent on over-provisioning the infrastructure till the infrastructure is optimized.

While the starting point differs between the approaches, the difference is much more than that. The moment management decides on a strategy, the level of preparation and the dependencies will vary. The impact of this seemingly simple decision is immense, and the value and time it gives back are invaluable.

Services Virtualization

Service virtualization is an overlooked aspect of preparing a project for performance testing. While products can accelerate it, they come with additional licensing costs and infrastructure requirements. Also, the SI develops the software in its own environment and ships the final software to the customer; in that case, it does not have the opportunity to perform minimum viable performance testing with a reasonable load, which could give confidence to management.

Another aspect is data consistency. If only certain services are virtualized and others are not, the impact of the data inconsistency on the test scenarios, test scripts, etc., needs to be considered.

When using virtualized services, the performance-testing team should ensure that latencies are injected when the virtualized service returns its responses. This gives the results more credibility and brings them closer to production behaviour.
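
The latency injection above can be sketched as a tiny stub. This is an illustrative assumption, not any specific virtualization product: operation names, payloads, and the configured latency are all made up.

```java
import java.util.Map;

/**
 * Sketch of a virtualized (mocked) dependent service that injects a
 * configured latency before returning a canned response, so that
 * performance-test results stay closer to production behaviour.
 */
public class VirtualizedService {
    private final Map<String, String> cannedResponses;
    private final long latencyMillis;

    public VirtualizedService(Map<String, String> cannedResponses, long latencyMillis) {
        this.cannedResponses = cannedResponses;
        this.latencyMillis = latencyMillis;
    }

    /** Sleeps for the injected latency, then returns the canned payload. */
    public String invoke(String operation) {
        try {
            Thread.sleep(latencyMillis); // mimic the real dependency's response time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return cannedResponses.getOrDefault(operation, "{\"status\":\"UNKNOWN_OPERATION\"}");
    }
}
```

In practice the latency would come from production percentiles of the real dependency, and ideally vary per operation rather than being one fixed number.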

Test Data Preparedness

The ease and complexity of preparing test data are proportional to the heterogeneity, variety, number and complexity of the services involved in the application. In a microservices architecture, every service can have its own schema. While this gives the impression that a modular approach to creating test data is straightforward, the complexity comes when there are soft dependencies between the schemas. If the data are not consistent, the scripts will fail.

When an off-the-shelf product is customized and implemented, the application teams will be comfortable with the APIs and the service payloads that ingest the data, but might not be fully conversant with the data models and ingestion. Using APIs for data ingestion will work in practice; it just takes much more time than SQL- or script-driven data ingestion.

In a large data setup, application teams must prioritize and dedicate their time to helping the performance testing team set up the test data.

The program and project plan should account for such overheads so that SI and the development team are better prepared.

The application team's effort on test-data preparedness does not contribute to the immediate sprint goals or provide tangible deliverables for the functional-testing and business teams. However, management should accept this and appreciate the long-term impact these efforts by the application development team will bring. In our program, we asked the performance-testing team to use the APIs to set up the data, which delayed the data-setup phase by a couple of (precious) months.

Impact of other changes on performance

Apart from performance, below are a few NFRs which influence architecture decisions and can have a significant impact on performance:

  1. Application/API Security
  2. Regulatory compliance
  3. Auditing and Logging
  4. PCI and PII data security
  5. Infrastructure security components (DDoS protection rules, etc.)
  6. Fraud identification/prevention(Mostly in financial applications)
  7. Reporting and analytics

It is recommended that the approach for these NFRs also be discussed, agreed upon, and performance-tested early in the project/program's lifecycle. While technical architects can improvise the strategy in a purely agile manner, the cost and risks of implementation increase exponentially if these NFRs are architected much later. Other NFR requirements that affect performance should be approached with a curious mind by the various stakeholders, considering their relevance to the current time frame of application usage, target audience, target devices, etc. The team should decide together whether each needs to be implemented or can be ignored. Many greenfield implementations involve rewriting existing legacy applications; in such scenarios, the NFRs of the legacy application should be re-examined for the new application as well.

Handling system instability due to HPA

HPA is one of the essential and valuable features of Kubernetes. It is one of the main tenets that ensure the elasticity of the infrastructure and aid in significant cost optimizations. However, for one of our use cases, HPA caused the system's instability, and we turned it off for the selected services.
The same story has been cross-posted here


Use case Overview

Our solution is a multicloud system. While the application we develop and manage is hosted in AWS, a few of the upstream systems in the enterprise are hosted in GCP.

Our application listens to a PubSub topic to which one of our upstream systems posts messages. The message has only the primary key of a resource. Since the resource has sensitive PII information, it was decided that the entire data would not be sent in the message. Instead, the data is available over an API and can be accessed via an HTTP endpoint. The traffic between AWS and GCP is routed via the enterprise data centre and does not go over the public internet.

Assume that each of the services runs in separate pods.

Request flow

Below are the steps

  • The upstream system posts a message.
  • Our application has a listener, “Service A”, which consumes the message.
  • Upon receiving a message, Service A invokes a business service, “Service B”.
  • Service B invokes an API provided by “Service C”, hosted in GCP.
  • Once the response is received, validations and data clean-up are done.
  • After the clean-up, “Service B” invokes a product domain service, “Service D”.
  • The product domain service saves the data in the database.
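
The steps above can be sketched as a single in-process flow. This is a deliberately collapsed illustration: the service boundaries become functions, and all names (ClaimCheckFlow, the lambda for Service C, etc.) are assumptions for the sketch, not the real codebase.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Sketch of the message flow: the PubSub message carries only the
 * resource's primary key; the full record is fetched from the
 * GCP-hosted API (Service C), cleaned up (Service B), and handed to
 * the product domain service (Service D) for persistence.
 */
public class ClaimCheckFlow {
    private final Function<String, String> serviceC;          // id -> raw payload (API in GCP)
    private final Map<String, String> database = new HashMap<>(); // stand-in for Service D's store

    public ClaimCheckFlow(Function<String, String> serviceC) {
        this.serviceC = serviceC;
    }

    /** Service A's listener entry point: the message contains only the primary key. */
    public void onMessage(String primaryKey) {
        String raw = serviceC.apply(primaryKey); // Service B calls Service C over HTTP
        String cleaned = raw.trim();             // stand-in for validation and clean-up
        database.put(primaryKey, cleaned);       // Service D saves the data
    }

    public String saved(String primaryKey) {
        return database.get(primaryKey);
    }
}
```

The point of the sketch is the shape of the dependency chain: every message fans straight through A, B, C and D synchronously, which is exactly what makes the HPA behaviour described next so sensitive to bursts.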

The reasons for such a heterogeneous architecture and the different services (product domain service, Service B, etc.) can be debated, but in principle they do not challenge the relevance of HPA in this context.

How the problem Unfolded

The critical factor here is the time taken by Service B, which does a lot of operations. It invokes a third-party service, gets the response, applies some business logic and saves through Service D. Assume that the service hosted in GCP returns its response in 75 milliseconds.

On a normal day, the number of messages received is a factor of the number of users accessing the application. However, due to operational reasons, it was decided to publish a lot of messages in a short time. To quantify, around 1 million messages were emitted (burst) by the source into PubSub.

When the messages arrived, Service A, the initial entry point, recognized a sudden need to process all of them. Hence the HPA rules got triggered and Service A occupied a lot of CPU. Service A delegates most of its work to Service B, which in turn delegates to Service D (after the validation, etc.).

Now the demand for resources from Service B is the same as the demand generated by Service A, so Service B also tries to spin up new pod instances via HPA. However, since Service A has utilized most of the available capacity in the node, there is no spare capacity for Service B. K8s might trigger spinning up a new node as well, but since node spin-up takes time, this further delays processing.

The communication between Services A and B is via HTTP. Since Service B does not return results immediately, the requests triggered by Service A wait until the configured timeout and then fail. As requests pile up in Service A, it too requests resources from the node for spinning up new pods.

The image below describes it clearly.

Solution

Luckily, we identified the problem in one of the mock runs in a lower environment. Looking at the logs, it was evident that resource contention was happening. With a higher number of instances of Service A, the picture was clear. We did the following:

  • Turned off HPA for the services involved (Services A, B and D).
  • Pre-scaled each of these services.
  • While pre-scaling, made sure that downstream services had more instances. In this case, we provisioned/pre-scaled 5 instances of Service D, 4 instances of Service B and 3 instances of Service A.
  • We also checked with the GCP team hosting Service C to be prepared for the sudden spike. They too pre-scaled to avoid any delays in horizontally scaling their pods and services.
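
The pre-scaled setup could look like the following abridged manifests. Deployment names are illustrative assumptions, selectors and pod templates are omitted for brevity, and the point is simply that no HorizontalPodAutoscaler objects target these deployments, so the replica counts stay pinned.

```yaml
# Pre-scaled deployments with HPA turned off for Services A, B and D.
# Ratio follows the text above: downstream services get more instances.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a          # PubSub listener
spec:
  replicas: 3              # pinned; no HPA targets this deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b          # business service
spec:
  replicas: 4
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-d          # product domain service
spec:
  replicas: 5
```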

One impact is that the time taken to consume all 1 million messages increased. However, there were no failures. With a better CPU and RAM configuration, we were able to process the messages much faster while keeping the same ratio of instances of Services A, B and D.

Other Possible Solutions

  • See if the message handler can directly invoke the domain service D. While this would give the best of both worlds, it might result in code duplication, or the business rules being extracted to a JAR file used by the dependent services. This has its own implications.
  • The message listener service (Service A) and the downstream services (Services B and D) can run in separate clusters. This avoids one pod consuming all the resources in a node.
  • Different HPA rules for Services A, B and D. This is possible in K8s/EKS, since each Deployment can have its own HPA object with its own metrics and bounds; but the partner company that manages the infra, with an army of AWS-certified experts, didn't offer such insights.
  • Limit the max instances of A, B and D. This ensures that even with HPA, one service does not bring down the others.
  • Package Services A, B and D in a single pod.
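
For the per-service-rules and max-instances options above, a bounded HPA is standard Kubernetes: one HorizontalPodAutoscaler object per Deployment, each with its own metrics and `maxReplicas` cap so no single service can crowd out the others. A sketch for Service A, with illustrative names and numbers:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-a-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a
  minReplicas: 2
  maxReplicas: 4           # cap so Service A cannot starve B and D
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```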

While these options existed, each has its own pros and cons and needs to be evaluated. During application development and infra design, such edge cases of sudden bursts were hardly discussed. This resulted in choosing the quickest solution during a crisis, not a solution which works on sunny and rainy days and which is more reliable and consistent.

Using UPI Outside India – First-hand Experience

I have been an early adopter of various fintech platforms and apps for the sheer simplicity they provide. I used Paytm wallets regularly even before demonetization. As a tech enthusiast and a common man, UPI is a big boon. It avoids the need to withdraw and use cash and the various hassles associated with it. There are many more articles and case studies on the impact and benefits of UPI for consumers, retailers, the banking ecosystem, etc.

Since 2023, UPI has expanded to various countries outside India. At the time of writing, it is accepted in France, Singapore, Thailand, Vietnam, the Philippines and the UAE, with the latest additions being Mauritius and Sri Lanka. I had the opportunity to try UPI twice outside India: in Singapore in August 2023 and in Thailand in February 2024. On both occasions, my aim was to use it for retail payments; I did not get a chance to remit money from a foreign country to India.

The same post has been cross-posted here

Activating International Payments in Your UPI App

Before attempting payments, make sure you enable international payments in your UPI app. In PhonePe and Paytm, this can be done under the profile section: tap your profile and enable it under the Payments sub-section.

UPI Experience in Thailand

Thailand is an interesting nation. It has the infrastructure of Singapore but the mindset of 2014 India when it comes to fintech and digital adoption. While digital options are available, people still prefer cash.

In Thailand, my visit was restricted to Bangkok. I had the chance to spend money at Starbucks, small pizzerias and in taxis. While the shops had QR codes, there was no standardised QR. A few had Alipay QRs; at some swiping machines, the shopkeeper needed to select a payment provider on the PoS machine before showing the QR. The taxi driver had a Thai QR, but I am not sure whether it is standardised the way UPI is.

For Thailand, UPI has an agreement with LinqPay. In the pizzeria, the shopkeeper was not able to generate a LinqPay QR from the PoS machine. The taxi driver was able to easily show his QR from a mobile app, but scanning it from PhonePe did not work (even after enabling international payments); it could not recognise the receiver.

Even in other local markets, the shopkeepers had QR codes provided by banks or Alipay. Compatibility with UPI was never highlighted.

UPI Experience in Singapore

In Singapore, I only had the chance to pay at shops in high-end malls and at the airport. Most of the shops accepted credit cards. While most of them had QR-code-based payment, they were using Alipay, which is not a UPI partner. PayNow is the UPI partner in Singapore.

Effectively, I was not able to use UPI in either country.

Why UPI did not find acceptance with foreign merchants

1. Lack of Gateway agnostic QR:

In both countries, the intermediate gateways or payment providers manage the QR, similar to the wallets of fintech apps in India. Due to this lack of interoperability, failures and incompatibilities tend to increase. After failing at 2 or 3 places, people tend to give up or forget about the UPI payment option.

2. Lack of awareness among merchants:

Merchants are not aware of this payment option, or of how to make it work alongside their existing payment enablers.

3. Lack of awareness among travellers:

The only time people learn that UPI can work outside India is when a new country is added to the ecosystem. Even many young, educated and tech-savvy people are not aware of this payment option, or that UPI will work in the specific country they are visiting.

What can be done to Increase adoption

1. Increase awareness among travellers:

Improve awareness among Indian travellers about this option. It could be via advertisements on travel portals, hoardings in airport waiting areas in India (before departure) and in the arrival areas of supported countries, notifications in apps which support UPI payments, bank notifications to customers, and so on.

2. Increase awareness among merchants:

Improve awareness among merchants in supported countries on how to accept payments via UPI. Highlight the benefits to them: instant settlement (versus T+1 or T+2 for cards) and better customer satisfaction.

3. Improve Interoperability:

Perhaps this will be the toughest. Work with regulators in other countries to remove the dependency on payment gateways. UPI can partner with more than one payment provider in a country, or work with the regulator to introduce a payment-vendor-agnostic QR code which works with UPI. This will give travellers the confidence to use it without doubting the success rate, and will save time.

My transition from memory loss to a super organiser

The year 2023 ended on a good note for me. One of the things that made it special was consistent feedback from peers and my manager that I am highly organised in my work and bring things to closure. I am elated with the feedback, as until a couple of years ago the quality of my work suffered due to a lack of planning. In fact, even now I forget things: just today I left my lunch bag in the food court and retrieved it from lost and found when I left for the day.

Cross-posted here

Step 1 : Identifying what is important and what is not

The Eisenhower Matrix is a good starting point.

The challenge comes when everything is brought to you as a critical or urgent item. The key is to place each item in the right bucket by

  1. Asking the right questions
  2. Applying your own judgement
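The bucketing above can be sketched as a tiny decision helper. The function name and the four action labels below are my own illustration of the standard Eisenhower quadrants, not anything from a specific tool:

```python
def eisenhower_action(urgent: bool, important: bool) -> str:
    """Map a task's urgency and importance to the classic Eisenhower action."""
    if urgent and important:
        return "do now"        # critical: handle it yourself, first
    if important:
        return "schedule"      # important but not urgent: plan a slot for it
    if urgent:
        return "delegate"      # urgent but not important: hand it off
    return "drop"              # neither: remove it from the list

# Triaging a few sample items
for task, (urgent, important) in {
    "production incident": (True, True),
    "quarterly planning": (False, True),
    "forward a routine report": (True, False),
}.items():
    print(f"{task}: {eisenhower_action(urgent, important)}")
```

Asking "is this truly urgent?" and "is this truly important?" as two separate questions is what keeps everything from landing in the do-now bucket.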

Below are the key productivity tools I use

  1. Memo
  2. Microsoft One Note
  3. Google Calendar
  4. Confluence

Lifecycle

Work can arrive in many forms: an email with a specific action item for me or my team, a Slack discussion which needs further exploration, a specific task mentioned in a meeting that needs to be worked out, or a random thought that occurs to me. When it does, below is what I do:

  1. Make an entry on a note sticker in the Memo desktop app. If it is really critical, I make sure to mark it as bold. I add any reference, like a Jira ID, if needed. I also make a note of whom I should delegate to or follow up with.
  2. I have multiple Memos (similar to sticky notes) and add each activity or to-do to the respective one. For example, I have the following notes:
    • My task list
    • Things to discuss with peer or teammate A (e.g. John)
    • Things to discuss with peer or teammate B (e.g. Ravi)
    • Trainings to do
    • Blog posts to write
    • ….
      And the list continues.
      I review the items by type and edit or remove them. At regular intervals, I also delete unwanted note categories.
  3. If I have a note to follow up on or a task to delegate, I ask the person over Slack or a call. I then add a task in Google Calendar to follow up with that person. If it is a long-running task or initiative, I set up a recurring call or task.
  4. If there is a Jira ticket, or a similar ticket in any project management tool, I subscribe to/watch it.
  5. If a thought needs elaboration, I detail it out on a Confluence page, then tag and comment the relevant people to add their thoughts or review it.
  6. When slacking or emailing people beyond office hours, I make sure to delay the delivery (unless I am sure the recipients are working at the time and my communication is expected) so that it reaches them at a time which ensures their attention.

Effective use of OneNote

I use OneNote to capture knowledge and notes which are worth keeping for a long time, e.g. decisions taken, key workshop notes, training notes etc. For all of these, there is generally also a need to save multimedia content (images, video links) and to elaborate with my own thoughts.

The hierarchical nature of OneNote helps me maintain and organise the contents in a logical manner.

I have the following notebooks in OneNote

  1. Personal (used to take notes on personal activities, trainings etc.)
  2. Employer (used to store organisation-related training notes, decisions etc.)
  3. Project/customer-specific notes

Each notebook has various sections based on category, and each section has various pages. Below is a snapshot from my personal notebook.

Why I do not use a personal trello board

I like Trello boards, but I do not use them for my work tracking, as they are an over-engineered solution for my current needs. Here is why:

  1. I like to track and make a note of even small things. This would mean a lot of records.
  2. The benefit of a Trello board comes when you create multiple swimlanes. Given that I have various types of tasks and each type has different stages, the board would look very cluttered.
  3. Trello, or any equivalent, gives maximum value when a lot of people collaborate and contribute.
  4. I can use my current toolset for personal tasks too. For example, I have a recurring task every night to soak chia seeds and sabja seeds, start fermenting milk into curd, call a specific person with an agenda, etc.

Life with Entry level smartwatches

Smartwatches have become a popular, must-have gadget and accessory. While famous premium brands like Garmin, Apple Watch and Galaxy Watch provide a lot of convenience, how is the experience with entry-level or budget smartwatches (less than 5,000 INR or $75)? Given that they will not match the experience of the premium brands, the question is: what do you miss out on?

This blog is based on my experience using NoiseFit (from 2021 to 2023) and Fireboltt (2023). Please note that I have not used any premium smartwatch, so rather than comparing apples with oranges, I am just elaborating on my experience with these two.

Location access

Most watches in this price range do not have built-in, independent location access. While a few can give your location when paired with a mobile phone, the watch alone cannot. This matters when you plan a long run or ride and do not want to take your phone along: the distance will be measured, but the actual geographical route might not be recorded.

Accuracy of values

In my experience, the distance values given by the watches are not the same as those given by my phone, and they differ by a huge margin. For the same run, which Strava measured as 4 kilometres, NoiseFit gave 4.8 kilometres and Fireboltt gave 3.2 kilometres. Sleep duration is also inaccurate and completely off. For workouts like yoga, weightlifting etc., the calorie values appear very low and stay the same irrespective of the intensity of the workout. I would not bet on these watches to monitor my temperature or heartbeat and alert me.

Integration with third party apps

These watches do not offer integration with external fitness apps like Strava or adidas Training. I track my running and cycling in Strava, and I would have loved these watches to share data with it. Because they do not, I am unable to share my workout details with my trainer, fans and friends from a single consolidated location.

App Capabilities

The app interface and experience are sufficient and satisfactory. A major complaint is that the companion apps drain a lot of the phone's battery. Even when the watch is not paired and connected, the apps continue to run and drain the battery. The apps also lack the ability to compare workout sessions.

Battery life

One thing that impresses me significantly is the battery life of these watches. It takes around 8 to 9 days to go from 100% to 15%, which I find good.

Verdict

So, are entry-level smartwatches worth it? For people who are not very health conscious and not driven by fitness statistics, these watches will suffice and offer great value. They measure a lot of attributes, and even with debatable accuracy, they can be used to compare trends and consistency. If you plan to use the watch to flaunt your workouts and measure your progress objectively, investing a bit more in a premium watch will help you meet those objectives.

2023 – A Recap

2023 was an interesting year for me. How do I feel when I look back at it? Continue reading to find out.

What I am Proud of

Below are a few things that happened in 2023 which I am proud of:

  • Controlled my screen time. I was never addicted to Instagram, TikTok etc., so naturally I was able to avoid Instagram Reels, YouTube Shorts and the like. On most working days, Economic Times was the first app I opened (not Facebook or Twitter). However, on weekends my social media time is high; I need to restrict it.
  • Minimised OTT consumption.
  • Dedicated time to spiritual improvement: performing Sandhyavandhanam at least 4-5 times a week and reading "Deivathin Kural" every day.
  • Worked out consistently whenever possible.
  • Maintained work-life balance and spent quality time with family.

Trips

I made a total of 10 business trips this year. Below are the cities covered

  • Pune
  • Mumbai
  • Bangalore ( 5 trips)
  • Singapore
  • Bangkok
  • Ahmedabad

Below are the cities covered for family functions

  • Coimbatore
  • Erode (Bhavani)

Below are the cities covered as part of spiritual trips/road trips

  • Shirdi
  • Bhadrachalam

To sum it up, 2023 was an eventful year in terms of travel.

Learning

This is where I fell short. I started to learn Angular but could not complete it: there were many breaks, and after some time Udemy did not load the contents properly. One reason is that I was not able to reduce my sleeping time to 5 hours; I slept an average of 6 to 7 hours, which is very high.

Work

Work was OK, with the usual delays and escalations. What is more worrying is that the challenges present at the beginning of the year still remain (deployment and infrastructure issues; processes are still people-dependent). While I was able to have a say and put forward a workable plan, I feel it fell short. I was also not able to influence and kick-start performance initiatives. Everything is handled reactively rather than proactively.

Summing it up

To sum it up, 2023 was an underwhelming year, but one where processes were set in place. It needs a realignment of priorities to achieve more, and I need to make an effort to reduce my sleeping time.

Goals for 2024

  • Use GenAI to solve documentation and deployment issues.
  • Resolve deployment problems by more automation.
  • Contribute to at least one open-source initiative.