Monday, August 28, 2023

Google Translate Unwrapped: The Tech that Bridges Language Gaps


Photo by Brett Jordan on Unsplash

The field of Natural Language Processing (NLP) is changing constantly, but things were different a few years ago: only a handful of websites and apps could work with multiple languages, and machine translation wasn’t a big thing back then.

In this article, we’ll explore the functionalities of Google Translate, examine its statistics, and provide an overview of its high-level architecture, if such information is publicly accessible.


Presented below are the primary features, along with several additional ones:

  1. Text Translation: Translate written text from one language to another
  2. Website Translation: Translate entire websites to a selected language
  3. Document Translation: Upload documents for translation, preserving formatting
  4. Speech Translation: Translate spoken language in real time
  5. Camera Translation: Translate text captured by the camera in real time
  6. Conversation Mode: Enable two-way translation for interactive conversations
  7. Offline Translation: Download languages for translation without internet access
  8. Dictionary and Synonyms: Access definitions, synonyms, and example sentences for translated words.
  9. Handwriting Input: Draw characters or letters on a touchscreen for translation
  10. Phrasebook: Save translated phrases for future reference
  11. Auto-Detection: Automatically identify the source language
  12. Language Detection: Identify the language of a given text
  13. Word Lens: Translate text instantly using the camera view
  14. Conversation History: Review past translation interactions
  15. Language Pairs: Support for translation between a wide variety of languages
  16. Audio Playback: Listen to pronunciation of translated text
  17. Translate Community: Contribute to translations through user suggestions.
  18. Language Input Methods: Use keyboard, speech, handwriting, or camera for input
  19. Instant Camera Translation: Translate text within images or videos
  20. Customization: Personalize settings like font size and language preferences
  21. Integration with Google Services: Seamlessly use Translate with other Google products
  22. Real-time Translation: Transcribe and translate spoken conversations.
  23. Audio Input Translation: Translate spoken words and phrases

Let’s look at some of the quick stats:

  1. Google Translate has been used by more than a billion people.
  2. Based on information from Google News Lab, the word “beautiful” is the one that people translate the most using Google Translate. After that, the words “good,” “love,” and “mama” are also frequently translated.
  3. The languages that people commonly translate using Google Translate include English, Spanish, Arabic, Russian, Portuguese, and Indonesian. These are the ones that users often ask Google Translate to help them with.
  4. Google Translate supports translation across 133 languages.
  5. Around 92% of the translations done using Google Translate are from countries outside the United States. Among these, Brazil is at the forefront as the country that uses Google Translate the most.
  6. Around 143 billion words are translated every single day, across 100 languages.
  7. Total languages in the world: ~7,100. Google Translate covers 133 of them (133/7,100 ≈ 2%), so there is still plenty of work to be done.

Here is a detailed architecture write-up that I found very interesting:

https://www.lavivienpost.com/google-translate-and-transformer-model/

Sunday, August 27, 2023

The very first time I gave a presentation to a public audience

Photo by DESIGNECOLOGIST on Unsplash

I delivered my debut public tech talk, titled “How to Scale.” It encompassed in-depth case studies on:

  1. The scaling journey of Bigbasket Daily, catering to a few lakh customers daily.
  2. The scaling strategy of Shiksha.com, handling several million daily requests.

YouTube: https://www.youtube.com/watch?v=rd9dgE7vQbA

Link to the Campaign: https://insider.in/how-to-scale-your-applications-to-match-the-business-growth-jun20-2020-digital-online-event/event

My next tech talk, on “Data Observability Platform,” is on 9th September 2023 at 11:30 am; here are the details:

https://www.linkedin.com/posts/aditya-roshan_hello-everyone-ill-be-giving-a-tech-talk-activity-7105522090274947074-Kp-6

Please join me for this event.

Saturday, August 26, 2023

Learnings from — Clean Code: A Handbook of Agile Software Craftsmanship


Presently, I am engrossed in “Clean Code,” a book that delves into the realm of best coding practices. I find myself pondering how beneficial it would have been if I had encountered this book a few years ago, as it encompasses numerous great principles that could have significantly impacted my coding journey.

The memories of my training days at Infosys, where I was introduced to programming in C, Java, and various other software technologies, still linger vividly in my mind.

As an ECE graduate, and not from a renowned engineering college, my college experience didn’t provide me with substantial exposure to coding. Instead, I resorted to memorizing a list of programs solely to pass the exams.

Why would a senior professional with 15+ years of experience, who could be focusing on management or strategy books, choose to be interested in reading about coding practices?

The reason is that code is the foundation of our work, especially when managing a team of engineers with varying levels of experience (0–10+ years). Code reviews play a pivotal role, but knowing what to review and how to conduct these reviews is essential. This is precisely why understanding industry-wide best coding practices becomes vital for a manager: it enables you to enforce these practices within your team effectively.

What I’ve noticed in my numerous code review sessions is that less experienced team members tend to be receptive and open to learning, making it easier to guide them towards writing better code. However, some more senior team members occasionally go into a defensive mode, often saying, “We have a lot of work; we’ll do it later, sir.” This attitude leads to technical debt, and since we rarely get dedicated time to refactor code, it results in messy and unmaintainable code.


Now, let’s delve into the key takeaways or lessons learned:

Meaningful Names: Just as a family searches for a meaningful, trendy name when a new baby is born, treat your code as your “baby” — it deserves a meaningful name too.

  1. In the context of coding best practices, it is recommended that a class name should be a noun, while a method name should be a verb. This naming convention enhances code readability.
  2. Regardless of whether you are naming a variable, class, member variable, or any other element in your code, ensure that the name reflects its context and usage. Even if the name ends up being long, what truly matters is that it conveys a meaningful story about its purpose.
class X {
    int x;
    int y;

    int mult() {
        return x * y;
    }
}

int d; // start time
int f; // end time
int g; // delta time

// Instead
int startTimeInDays;
int endTimeInDays;
int differenceBetweenStartAndEndTimeInDays;

Upon inspecting the code snippet above, it is challenging to deduce the purpose of the multiplication operation or the reason behind it due to the lack of meaningful naming conventions.

3. Names should be pronounceable, just like any other word in the English language.

4. Opt for searchable names in your code. When debugging or navigating through the code, the ability to search for classes, methods, or variables will prove beneficial, while the lack of searchability can cause inconvenience and frustration.
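To make searchability concrete, here is a small illustrative sketch (the names are mine, in the spirit of the book's WORK_DAYS_PER_WEEK example, not code from this post). Grepping for a bare literal like 5 matches unrelated code all over a codebase; grepping for a named constant finds exactly its usages.

```java
class WorkCalendar {
    // Searchable names: a search for WORK_DAYS_PER_WEEK finds every usage,
    // whereas a bare literal 5 would match unrelated code everywhere.
    static final int WORK_DAYS_PER_WEEK = 5;
    static final int REAL_DAYS_PER_IDEAL_DAY = 4;

    // Converts an estimate in "ideal days" into calendar working days.
    static int realDaysFor(int idealTaskDays) {
        return idealTaskDays * REAL_DAYS_PER_IDEAL_DAY;
    }
}
```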

5. Avoid mixing names with language reserved keywords or operating system reserved keywords. It can lead to conflicts and unexpected behaviors in your code.

6. Avoid using the “m_*” prefix for member variables. Instead, aim to create readable and appropriately concise method and class names.

public class Product {

    String m_dsc; // description

    void setName(String name) {
        m_dsc = name;
    }
}

// Instead
public class Product {

    String description;

    void setName(String description) {
        this.description = description;
    }
}

7. When dealing with interfaces and their implementations, it is common to prefix “I” before the interface name. A better approach is to drop the “I” prefix and mark the implementation instead: rename “IShapeFactory” to “ShapeFactory” and call the implementation “ShapeFactoryImp”.
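Here is a minimal sketch of that convention. ShapeFactory and ShapeFactoryImp follow the names above; Shape and Circle are hypothetical fillers added only so the example compiles.

```java
interface Shape { double area(); }

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

// Client code depends on the cleanly named interface...
interface ShapeFactory {
    Shape makeCircle(double radius);
}

// ...and only the implementation carries the "Imp" suffix,
// so calling code reads naturally: ShapeFactory factory = ...
class ShapeFactoryImp implements ShapeFactory {
    @Override
    public Shape makeCircle(double radius) {
        return new Circle(radius);
    }
}
```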


In the next segment (part 2), we will delve into functions/methods…

Squandered Potential: When Dreams Remain Unfulfilled

 

Photo by JĂșnior Ferreira on Unsplash

In this blog post, I will share a personal experience from my life where a significant opportunity was missed because of insufficient planning and execution.

In 2010, I left Infosys and moved back to Delhi to join Info Edge (Naukri.com) to build one of their portfolio products in the education domain, shiksha.com.

During the summer season, a few of us would gather for our weekend get-together, where we indulged in all-night sessions discussing various professional and personal matters.

One of my friends was quite imaginative and always eager to experiment with different drink combinations, often incorporating mint, lemon, and other ingredients.

Observing his expertise, I proposed the idea that when we retire, we should consider opening a restaurant or bar together, where we can creatively experiment with various drink combinations.

The concept of opening a restaurant with innovative drink offerings also sparked a startup idea in my mind: creating a platform to find excellent restaurants that offer such services. However, back then, Zomato was just about to emerge, and there were only a few restaurant listing websites available at the time.

We made a collective decision to meet the following day at our favorite coffee place in Connaught Place. Our aim was to work out a comprehensive plan and brainstorm potential domain names for the website idea we had in mind.

After savoring multiple mugs of coffee and enjoying some snacks, we eventually reached a unanimous decision on a name that perfectly resonated with our idea: “desertdrop.com.”

The next step involved going locality by locality to gather restaurant data and build an extensive database. Our initial plan was to start with a small locality in Delhi and gradually expand to cover more areas.

As the developer within our group, I assumed the responsibility of building the website’s workflow and managing its functionality.

Over the course of several meetings, we discussed and refined the overall plan, as well as the ownership structure. However, after a few months, the initial enthusiasm and motivation gradually faded away, and the project lost its momentum.

Recently, I met my friend in Bangalore, the one who initially sparked the idea. We couldn’t help but feel a sense of regret, realizing that if we had executed the idea properly, we could have been the owners of a company similar to or even better than Zomato.

In retrospect, the key factors that contributed to the failure of our venture were as follows:

  1. Overemphasis on Planning: We devoted excessive time to planning, continually refining our ideas, but failed to move beyond the planning stage.
  2. Lack of a Dedicated Team: We didn’t assemble a committed team with the right skill sets who could actively contribute to the implementation of the idea.
  3. Reluctance to Take Risks: We were hesitant to take bold risks to pursue our idea and turn it into a reality, which limited our progress.
  4. Lack of Execution: Despite having a well-structured plan, we didn’t take concrete steps to execute it, leading to the idea remaining dormant and unrealized.


Exploring CAP theorem trade-offs and different types of databases based on trade-offs

                                                                       CAP theorem

First, let’s grasp the concept of the CAP theorem. The CAP theorem, also known as Brewer’s theorem, states that a distributed database can effectively guarantee at most two out of three properties: Consistency, Availability, and Partition Tolerance.

Consistency: Consistency in the CAP theorem means that all nodes in a distributed system see the same data simultaneously. It ensures data uniformity across the system, but achieving it might affect availability and partition tolerance.

Greater consistency results in increased latency within the system, since the time required for data replication/propagation among nodes is extended.
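The consistency-versus-latency trade-off can be sketched with a toy in-memory model. This is purely illustrative (real databases use quorums, replication logs, and failure detection): a synchronous write pays the cost of updating every replica before returning, while an asynchronous write returns immediately and leaves replicas briefly stale.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model only: a primary plus in-memory replicas.
class ReplicatedStore {
    private final Map<String, String> primary = new HashMap<>();
    private final List<Map<String, String>> replicas = new ArrayList<>();
    private final List<Runnable> pendingReplication = new ArrayList<>();

    ReplicatedStore(int replicaCount) {
        for (int i = 0; i < replicaCount; i++) replicas.add(new HashMap<>());
    }

    // Consistent but slower: the write waits until every replica has the value.
    void writeSync(String key, String value) {
        primary.put(key, value);
        for (Map<String, String> r : replicas) r.put(key, value);
    }

    // Faster but eventually consistent: the write returns at once and
    // replication is applied later, so replica reads can be stale.
    void writeAsync(String key, String value) {
        primary.put(key, value);
        for (Map<String, String> r : replicas)
            pendingReplication.add(() -> r.put(key, value));
    }

    void flushReplication() {
        pendingReplication.forEach(Runnable::run);
        pendingReplication.clear();
    }

    String readFromReplica(int i, String key) { return replicas.get(i).get(key); }
}
```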

Availability: “Availability” in the context of the CAP theorem means that a distributed system ensures that operational nodes respond to user requests even in the face of failures. This can result in potentially providing slightly outdated or inconsistent data.

This pertains to the replication of data across nodes, ensuring redundancy within the system. Consequently, in the event of a node failure, the replica must be capable of fulfilling data requests.


Here’s a memorable tale from about 5 to 7 years ago when we were redesigning our platform. Back then, a friend of mine, who worked as an architect, was explaining our new design to a senior leader.

In this plan, we included extra copies and safety nets to deal with possible problems. Our main goal was to make sure the system was up and running most of the time.

Examining the backup plan, the individual grew annoyed and questioned, “Why so much emphasis on backup, backup, backup?” This led to a discussion about the potential consequences if the system experienced downtime. Would it impact the business, and if it did, how significantly? Unfortunately, we lacked a concrete response. Despite this, after the meeting, we found humor in the situation and playfully teased our architect friend about it.

As I look back on it, that incident actually raised an important question that every architect should think about when designing something. It’s about matching what the business really needs with what we’re making. Are the business goals and what we’re building on the same page? And, is the business willing to pay for keeping the system available? Remember, everything has a cost — there’s no such thing as a free lunch.


Partition-Tolerance: Partition tolerance, in the CAP theorem, is a system’s capability to operate despite network disruptions. It ensures availability during communication failures but might lead to consistency trade-offs.

Even if some nodes within the system are not operational or if there are network failures, the system should maintain its functionality without any disruption.


Let’s look into database types considering the trade-offs posed by the CAP theorem.

Examples of CA type DBs: Google Cloud Firestore, SAP MaxDB, MemSQL, SAP IQ (Sybase IQ), VoltDB, Teradata, Google Cloud Datastore, Microsoft Azure Cosmos DB, Amazon RDS (Relational Database Service), IBM Cloudant, Oracle NoSQL Database, ArangoDB

Examples of CP type DBs: MySQL, MongoDB, Oracle Database, Microsoft SQL Server, PostgreSQL, HBase, Aerospike, Google Cloud Spanner, RDBMS, Neo4j, SAP HANA, CockroachDB, IBM Db2, Hazelcast, TiDB, VoltDB

Examples of AP type DBs: Cassandra, DynamoDB, Couchbase, ScyllaDB, RavenDB, RethinkDB, Riak, MongoDB, Redis


Please bear in mind that every database has a distinct design and intended purpose. For instance, relational databases like RDBMS or MySQL prioritize consistency as their primary objective. Furthermore, the behavior of a particular database can also be influenced by its configuration choices.

Could the creative use of plants and land as a means of exchange offer a fresh and potent strategy for alleviating the repercussions of global warming?

 

Photo by Sawyer Bengtson on Unsplash

Last week, as I looked out from my office in Bangalore, I saw a city that had changed a lot over the years. It used to have more greenery and a beautiful appearance, quite different from how it looks now with all the tall buildings and concrete.

Thinking about this, I started wondering about the effects of the city growing so much in terms of real estate. This growth was meant to make the economy better and help the city progress, but it ended up making the natural beauty disappear.

Still, it's important to recognize that development is necessary to create jobs and make the economy strong. It keeps things moving forward, especially in the race on metrics like GDP and GMV. But how can we find a solution to this problem? How can we bring nature back?

These thoughts kept coming to my mind, and they led me to think of a different way to solve this issue: changing the way we think about trading things. In the past, people used to exchange things like Gold, forests, and land as currency. I started thinking about how this could help.

What if we saw wealth not just as money but also as the land we own? Or the number of trees we take care of? What if precious metals were seen as something very valuable? This new way of thinking might help us bring back the environment and stop using nature so much just for development. Of course, this idea might have problems in practice, but I believe we can find ways to solve them.

This idea reminds me of the 1980s when people used to take pride in having a lot of land or many trees. Even marriages were sometimes based on how much property the groom and their family had.

Basically, if we change how we think about wealth, it could make us treat the environment better. It makes me wonder: Could this different way of thinking help us have a future where development and nature work together? I'd love to hear your thoughts on this idea.

Acquired Wisdom from Agile Methodologies

 

Photo by Daria Nepriakhina đŸ‡ș🇩 on Unsplash

For more than ten years, I’ve been using one of the many methodologies of Agile, called the Sprint model, to develop software. 

Over time, it has become easier for me to manage after managing many rounds of these Sprints. However, the journey wasn’t always smooth.

I still remember the day when we initially considered using this approach. 

Back then, terms like “sprint,” “backlog,” “daily scrum,” and “scrum of scrum” seemed peculiar and unfamiliar, as if they hailed from another realm. Picture attempting to shift from a customary method of working, akin to following a detailed plan (as in a Waterfall model) with four primary steps (generating the concept, designing, coding, and testing), each demanding a significant chunk of time. However, in this novel Sprint approach, the pace is considerably swifter.

It requires a considerable amount of self-discipline and effort to facilitate effective collaboration among various teams in a scrum, all working harmoniously toward a specific component of a larger objective.

 At times, challenges arise in ensuring timely preparedness for development tasks, including obtaining requirements promptly, acquiring the necessary designs/mock-ups and models for web pages, as well as ensuring that the QA team possesses the essential resources. 

Furthermore, promptly addressing issues is essential — this includes fixing problems swiftly and efficiently, encompassing a diverse array of potential concerns.

Here are the key lessons I’ve learned:

  • Sticking to the plan and maintaining discipline is of utmost importance
  • Each member of the scrum team should dedicate at least 5 minutes before beginning the day to update the accurate status of their assigned tasks
  • The individual overseeing the process (known as the Scrum master) should assist in resolving any issues that arise among the teams
  • The daily meeting, where we discuss our current work, is crucial as it aids in planning the day and identifying significant challenges
  • Planning ahead for sprints is essential to provide the development team with sufficient time to prepare for the next cycle
  • Swiftly addressing problems and being prepared for unforeseen issues is vital
  • The Scrum master should collaborate closely with the team responsible for determining the software’s purpose (the product team) to gain a comprehensive understanding of early requirements. This collaboration helps prevent conflicts between teams
  • Implementing automation, such as using tools like Slack bot, can reduce the time needed for daily standup meetings, streamlining the process
  • Use Jira Automation: Link
  • In an ideal scenario, the product and design phases should be at least one sprint ahead in terms of requirements, mock-ups, and so on. However, this rarely occurs in the real world :)
  • Conducting a mid-sprint review to recalibrate for tasks that overflowed
  • Structured retrospective meetings yielding actionable tasks aimed at enhancing processes
  • Instead of designing for the future, focus on the current use case and then expand it for future scenarios
  • Each task should have a corresponding Jira ticket for tracking purposes

As a Scrum Master, your responsibilities include:

  • Vigilantly track individual and team velocity
  • Routinely assess Burn-down/up charts, Velocity Charts, and Release/Epic Burn-down reports
  • Strictly adhere to the escalation matrix
  • Ensure active participation from all departments in the retrospective meeting
  • Oversee a comprehensive range of delay metrics, extending beyond Engineering
  • Acknowledge that delays may arise in various stages, yet Engineering often faces undue pressure and blame
  • Assume leadership to steer the ship; avoid being at the receiving end
  • Implement a well-structured release calendar
  • Manage the team’s leave calendar

Key dates to monitor:

  • Individual ticket’s Dev to QA release date
  • QA sign-off date for each ticket
  • User Acceptance Testing (UAT) and demo date
  • Release notes sign-off date
  • Production release date

As I explained above, good planning is really important in this Sprint approach. Among all the planning we do, like estimating how long a sprint will take, deciding what to do first, planning when to test, and getting ready to release the software, one of the most important parts is planning when to test (Dev to QA release planning).

In this kind of planning, the team in charge of creating the software decides when each part of the software will be tested. It’s important to keep the promises we make about when things will be ready and to spread out the testing times. If we put too many things to test all at once, the team testing might get overwhelmed and the chance of something going wrong becomes bigger.

By spreading out the testing times and making sure things are tested at regular times, the first round of testing for each part of the software can be done quickly in the next week, especially if we release new versions every two weeks. But this isn’t always easy. Sometimes unexpected things happen, like really important problems that need fixing, things we didn’t plan for, or even technical issues. This is when someone experienced in the Sprint approach (the Scrum master) can help a lot.

A skilled Scrum master knows how to handle these unexpected challenges. They might give a task to a different team member who has time, or they might ask for help from the whole team, or they might talk to the people in charge to find a way to solve the problem. Their role becomes very important in guiding the team through these tough times.

So, when we talk about planning in the Sprint approach, it’s not just about setting dates. It’s also about being flexible, taking action when things go wrong, and making sure everyone works well together to keep the promises we’ve made, even when things get complicated.

A Tribute to Writers of Unorganized Test Code

Photo by Scott Graham on Unsplash

When you inquire with a Developer about how thoroughly they’ve tested their code, you might receive a series of explanations for why they haven’t written many Unit Tests. Sometimes they’ll mention challenges in creating the tests, or they might debate whether the benefits are worth the work required.

The issue here is that the code for Unit Tests, Functional Tests, and Regression Tests isn’t given as much importance. Usually, the time and work needed for these tests aren’t taken into account when planning the development work for a sprint.

As the code becomes bigger over time and when we add new features, they can sometimes cause existing things to stop working correctly. But usually, people don’t consider spending time to write tests for these changes.

Sometimes, even if we write test code just to meet the requirements, it might not follow the good coding rules (like SOLID principles). I’ve noticed that test code is often untidy, and I’ve told the developers responsible to clean it up. However, there’s a common belief that since the messy code only runs tests, not the real system, it’s okay for it to be messy.

For those who write messy tests, there’s a rule called “FIRST” that should be kept in mind when creating test code:

  • Fast
  • Isolated/Independent
  • Repeatable
  • Self-validating
  • Timely

Fast: Make sure your tests complete quickly. I’ve observed situations where test suites take hours to finish, which can sometimes become quite frustrating.

Here’s a simple example where you’re testing a function that adds two numbers:

@Test
public void testAddition() {
    int result = Calculator.add(3, 5);
    assertEquals(8, result);
}

Isolated/Independent: Ensure that every test is self-contained and doesn’t depend on or influence the results of other tests. This approach allows you to run different tests separately and autonomously.

Here’s an example testing a class that simulates a stack:

@Test
public void testPush() {
    Stack stack = new Stack();
    stack.push(42);
    assertEquals(1, stack.size());
}

@Test
public void testPop() {
    Stack stack = new Stack();
    stack.push(42);
    int poppedValue = stack.pop();
    assertEquals(42, poppedValue);
    assertEquals(0, stack.size());
}

Repeatable: Make certain that tests always generate consistent outcomes, meaning they yield the same results whether run once or a hundred times. The output should remain unchanged.

Here’s an example testing a sorting method:

@Test
public void testSort() {
    int[] unsorted = {3, 1, 4, 1, 5};
    int[] sorted = {1, 1, 3, 4, 5};

    Arrays.sort(unsorted);

    assertArrayEquals(sorted, unsorted);
}
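As a counterpoint, a test that draws from an unseeded random generator is not repeatable: it may pass on one run and fail on the next. Fixing the seed restores identical results on every run. This is a hypothetical sketch (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class SeededDraws {
    // With a fixed seed, java.util.Random produces the same sequence on
    // every run, so a test built on it yields the same outcome whether
    // executed once or a hundred times.
    static List<Integer> draws(long seed, int n) {
        Random random = new Random(seed);
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < n; i++) out.add(random.nextInt(100));
        return out;
    }
}
```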

Self-validating: Tests should possess distinct criteria for passing or failing. This entails including both positive and negative scenarios within the test code.

Here’s an example testing a method to check if a number is prime:

@Test
public void testIsPrime() {
    assertTrue(MathUtil.isPrime(17));
    assertFalse(MathUtil.isPrime(6));
}

Timely: Start writing tests early in the process of developing your code. This principle is incredibly important. Before you even start writing your actual code, take time to consider the test cases and begin crafting test code as soon as you can. Delaying writing test code often means you’ll never find the time to do it later.

Here’s an example testing a method that calculates the Fibonacci sequence:

@Test
public void testFibonacci() {
    assertEquals(0, MathUtil.fibonacci(0));
    assertEquals(1, MathUtil.fibonacci(1));
    assertEquals(55, MathUtil.fibonacci(10));
}

Happy coding :)

Innovating Financial Inclusion: The e-RUPI Initiative in India

 


e-RUPI is a targeted electronic voucher created to ensure that the designated funds are delivered to the intended recipient and can solely be utilized for the precise purpose it was intended for. It operates as a cashless system, making it person and purpose-specific.

The objective is to establish an efficient and secure delivery system with minimal logistics for various government Direct Benefit Transfer (DBT) programs nationwide. The digital e-voucher platform can also serve organizations seeking to contribute to welfare services by using e-RUPI instead of cash, ensuring a leak-proof and accountable process.


What sets apart the normal rupee used in cash transactions from digital transactions?

The primary distinction lies in the utilization of e-RUPI by a designated individual for a particular purpose, ensuring the funds are directed solely for that specific use.

For instance, if a farmer is provided with an e-RUPI voucher to purchase fertilizer for their crops, the voucher’s usage is restricted solely to acquiring fertilizer, and it cannot be utilized for any other purpose.

The implementation of e-RUPI enhances security against potential misuse by middlemen in DBT. Nevertheless, there remains a possibility of middlemen resorting to covert arrangements to illicitly benefit from the system.


Below are the entities engaged in the creation and redemption of e-RUPI vouchers:

Issuer Bank/Payer PSP: The issuing bank or payment service provider (PSP) is responsible for initiating the request to create an e-RUPI voucher with NPCI.

Sponsor: The sponsor refers to a corporate entity, a State or Union Government department, or a business customer of the bank who requests the bank for the creation of an e-RUPI voucher.

e-RUPI beneficiary: The person who receives the e-RUPI voucher is known as the e-RUPI beneficiary. It's important to note that an e-RUPI beneficiary may not have a UPI (Unified Payments Interface) account or be a bank account holder.

Designated Merchant: Designated merchants are specific voucher acceptance points where e-RUPI vouchers can be redeemed.

Acquiring Bank/Payee PSP: The acquiring bank or payee PSP provides the facility for designated merchants to accept e-RUPI vouchers for redemption.

NPCI: NPCI is the owner, network operator, service provider, and coordinator of the UPI Network.
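The core property described above — a voucher tied to one beneficiary and one purpose — can be modeled as a toy sketch. This is purely illustrative and not NPCI’s actual design; all class, field, and category names here are mine.

```java
// Toy illustration only. It captures what the text emphasizes:
// redemption succeeds only for the right person, the right purpose,
// and only once.
class Voucher {
    final String beneficiaryId;
    final String purpose;   // hypothetical category, e.g. "FERTILIZER"
    private boolean redeemed = false;

    Voucher(String beneficiaryId, String purpose) {
        this.beneficiaryId = beneficiaryId;
        this.purpose = purpose;
    }

    boolean redeem(String presenterId, String merchantCategory) {
        if (redeemed) return false;                            // one-time use
        if (!beneficiaryId.equals(presenterId)) return false;  // person-specific
        if (!purpose.equals(merchantCategory)) return false;   // purpose-specific
        redeemed = true;
        return true;
    }
}
```

The farmer example from earlier maps directly onto this model: a voucher issued for fertilizer cannot be redeemed at any other merchant category.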


e-RUPI Work Flow:


An overview of the Open Network for Digital Commerce (ONDC) from a bird's-eye perspective

In this article, we're delving into the government's recent initiative to democratize digital ecommerce – the Open Network for Digital Commerce (ONDC). Traditional ecommerce involves four main entities: Buyers, Sellers/Merchants, E-commerce Platforms (e.g., Bigbasket, Swiggy, Amazon), and Payment Service Providers. The typical flow of an ecommerce transaction follows this structure:

In this setup, buyers are freed from the need to know about sellers' identities or product delivery specifics. This is all managed by the e-commerce platform. However, there's a challenge when it comes to product discovery for smaller merchants. Wealthier merchants can make their products more visible by establishing special agreements with the platform. This results in fewer transactions and revenue for smaller players. Additionally, buyers might have limited options since not all merchants offering a product might be on the platform they're using.

This approach brings three problems:

  1. Limited Visibility for Smaller Merchants: Smaller players struggle to get noticed among bigger ones dominating the platform.
  2. Fewer Options for Buyers: Buyers have limited product choices as not all merchants are available on their chosen platform.
  3. Monopoly of E-commerce Players: A single dominant player can lead to issues like unfair competition and limited consumer choice.

So, how does ONDC address this? ONDC offers solutions like:

  1. Enhanced Visibility for Smaller Merchants: ONDC maintains a central repository of merchants and their catalog information, ensuring their discoverability.
  2. Expanded Choices for Buyers: ONDC's search API provides access to a global catalog, allowing buyers to find similar items across multiple catalogs.
  3. Decentralized Collaboration: ONDC promotes a decentralized ecosystem where various stakeholders collaborate to create an inclusive and competitive marketplace. This prevents excessive power concentration in one player's hands and encourages diversity and innovation.

Key Entities within ONDC:

  • Buyer: The individual/entity making purchases through the platform.
  • Buyer Applications: Apps enabling buyers to search product catalogs.
  • Seller Applications: Apps for sellers to manage orders, logistics, etc.
  • Sellers: Merchants using the platform to sell products.
  • ONDC Ecosystem: Includes gateway and centralized services. The gateway authenticates participants like Business Process Partners (BPPs) and Buyer Application Partners (BAPs).

ONDC API Architecture:

ONDC's API structure follows an asynchronous model where each action triggers a callback. The requester initiates a request, and the target entity sends back a callback. Some key actions and their corresponding APIs include:

  1. Search: Find products in the catalog.
  2. Select: Choose products for purchase.
  3. Initialize: Start an order.
  4. Confirm: Confirm and finalize an order.
  5. Status: Check order status.
  6. Track: Track order progress.
  7. Cancel: Cancel an order.
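The action/callback pairing can be sketched with a simplified mock. In the real network, search and its on_search callback are separate HTTP calls between participants; here the callback is invoked in-process just to show the shape of the interaction (the catalog contents and class names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified mock of ONDC's asynchronous pattern: the buyer app sends a
// search request and returns immediately; results arrive later through an
// on_search callback rather than in the original HTTP response.
class MockGateway {
    private final List<String> catalog = List.of("atta 5kg", "atta 10kg", "rice 1kg");

    void search(String query, Consumer<List<String>> onSearch) {
        List<String> hits = new ArrayList<>();
        for (String item : catalog)
            if (item.contains(query)) hits.add(item);
        onSearch.accept(hits); // in production this is a separate callback call
    }
}
```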

For more detailed API information, visit the ONDC API Documentation.

Feel free to share your thoughts and questions on this exciting initiative!


Common Code Smells and Heuristics - Final Part

https://ajhawrites.medium.com/common-code-smells-and-heuristics-final-part-6391a095fd5f