
Kaggle

Workflow:

  1. Problem statement
  2. Data collection (DB, server, cloud, CSV, XLSX, web scraping)
  3. Data Inspection
  4. Data Cleaning
  5. Exploratory Data Analysis
  6. Feature Engineering(Encoding)
  7. Train-Test split
  8. Model Training
  9. Model Evaluation
  10. Hyperparameter Tuning
  11. Deployment
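Steps 7-9 of this workflow (split, train, evaluate) can be sketched end to end in plain Python. The toy data, the no-intercept least-squares "model", and the MAE metric below are all illustrative stand-ins, not a prescribed method:

```python
import random

# Toy dataset of (feature, label) pairs -- in practice this comes from
# data collection and cleaning (steps 2-4)
random.seed(42)
data = [(x, 2 * x + random.uniform(-1, 1)) for x in range(1, 101)]

# Step 7: Train-Test split (80/20), after shuffling
random.shuffle(data)
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]

# Step 8: Model training -- fit a no-intercept line y = w * x by least squares
w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Step 9: Model evaluation -- mean absolute error on the held-out test set
mae = sum(abs(y - w * x) for x, y in test) / len(test)
print(f"slope={w:.3f}, test MAE={mae:.3f}")
```

The key habit this illustrates: the model is fit on `train` only, and the metric is computed on `test` only, so the evaluation reflects unseen data.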

Local Machine ---- Docker (CI/CD pipeline with Git) ---- Cloud

  • Local machine - where we write our code
  • Cloud - where we deploy our code to run 24/7
  • Pipeline - the flow/pathway from development to deployment
  • Git - tracks changes so they are reflected automatically
  • Docker - packages and transports our code so it runs the same everywhere

Collective Inspection

Individual Inspection

Visual Inspection

# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" 
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# 🧹 This line removes JupyterLab (a tool that helps run notebooks),
# because we don't need it here and it can sometimes cause errors.
!pip uninstall -qy jupyterlab

# 🚀 This line installs the Google Generative AI Python library (google-genai).
# We pin version 1.7.0 so the notebook keeps working even if newer releases
# change the API. This is the library that lets our code talk to Google's Gemini models.
!pip install -U -q "google-genai==1.7.0"
# We are using special tools from Google to talk to the Gemini AI
from google import genai

# We are importing extra tools (called "types") to help talk to Gemini in a smart way
from google.genai import types

# These tools help us show pretty text and output (like bold, colors, or math) in Jupyter Notebooks
from IPython.display import HTML, Markdown, display
# We import something called `retry` from Google's API tools.
# It helps us automatically try again if something goes wrong (like the internet is slow).
from google.api_core import retry

# This line says: "If there's an error, check if it's an API error AND the code is 429 or 503"
# - 429 means: "Too many requests, slow down!"
# - 503 means: "Server is too busy or not ready!"
# So we want to retry only if it's one of these errors.
is_retriable = lambda e: (isinstance(e, genai.errors.APIError) and e.code in {429, 503})

# Now we take Gemini's generate_content function and wrap it with a retry feature.
# That means: if Gemini is too busy (429/503), it will try again automatically instead of giving up.
genai.models.Models.generate_content = retry.Retry(
    predicate=is_retriable  # This tells it which errors to retry
)(genai.models.Models.generate_content)  # This is the actual function we're retrying
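Under the hood, `retry.Retry` is essentially a loop: call the function, check the predicate on any exception, sleep, try again. Here is a pure-Python sketch of that idea; the `APIError` class below is a toy stand-in for `genai.errors.APIError`, and real clients use exponential backoff rather than a fixed delay:

```python
import time

class APIError(Exception):
    """Toy stand-in for genai.errors.APIError: just carries a status code."""
    def __init__(self, code):
        super().__init__(f"API error {code}")
        self.code = code

def retry_on(predicate, max_attempts=5, delay=0.01):
    """Wrap a function so that errors matching `predicate` trigger another attempt."""
    def decorator(fn):
        def wrapped(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    if not predicate(e) or attempt == max_attempts:
                        raise  # non-retriable, or out of attempts
                    time.sleep(delay)  # real retry policies back off exponentially
        return wrapped
    return decorator
```

Wrapping `generate_content` this way means transient 429/503 responses are retried transparently instead of crashing the notebook.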
# This line imports a special helper from Kaggle that lets us get secret info (like passwords or keys)
from kaggle_secrets import UserSecretsClient

# This line gets our super-secret Google API key from Kaggle's safe storage.
# It's like opening a locked treasure chest and grabbing the special key we need to talk to Gemini!
GOOGLE_API_KEY = UserSecretsClient().get_secret("GOOGLE_API_KEY")
# First, we make a connection to the smart Gemini robot by using our secret API key.
# This is like saying "Hi Gemini! Here's my key, can I ask you something?"
client = genai.Client(api_key = GOOGLE_API_KEY)

# Now we ask Gemini a question using the model "gemini-2.0-flash".
# We give it some text (our question) and ask it to explain black holes in a simple way.
response = client.models.generate_content(
    model = "gemini-2.0-flash",  # This is the brain we're using (a fast version!)
    contents = "Explain about the Black holes to me like I'm a kid in a single line"  # Our question
)

# We print the answer that Gemini gives us.
# This will show up in our screen like a fun explanation from a smart robot friend.
print(response.text)
Black holes are like cosmic vacuum cleaners that suck up everything, even light, because they're so super heavy!

Markdown(response.text)
display(response.text)
"Black holes are like cosmic vacuum cleaners that suck up everything, even light, because they're so super heavy!\n"
client = genai.Client(api_key=GOOGLE_API_KEY)

response = client.models.generate_content(
    model = "gemini-2.0-flash",
    contents = "Explain about Singularity briefly about 2 lines"
)
print(response.text)
Singularity is a containerization platform focused on portability, security, and reproducibility, often used in high-performance computing and enterprise environments. It allows users to package entire software environments into single files, ensuring consistent execution across different systems without requiring root privileges.

# This line starts a new chat with the Gemini robot.
# We tell it which version to use ('gemini-2.0-flash') and start with an empty history.
chat = client.chats.create(model='gemini-2.0-flash', history=[])

# Now we send a message to the Gemini robot, like saying "Hi!"
# The robot will think and answer back.
response = chat.send_message('Hello!, My name is Rajesh Karra')

# Finally, we print the robot's reply so we can read it.
print(response.text)
Hello Rajesh Karra! It's nice to meet you. How can I help you today?

response = chat.send_message('Can you explain me about how the warmholes created in just about 30 words')

print(response.text)
Wormholes, theoretical tunnels through spacetime, could form where spacetime is extremely curved, possibly near black holes. They might connect distant points, but their existence and stability are unproven.

response = chat.send_message('Do you remember what my name is')

print(response.text)
Yes, your name is Rajesh Karra.

client = genai.Client(api_key = GOOGLE_API_KEY)

response = client.models.generate_content(
    model = "gemini-2.0-flash",
    contents = "Causes of smell from the mouth, whether it is because of Vitamin defeciency or food"
)

print(response.text)
#Markdown(response.text)
Let's break down the causes of bad breath (halitosis), separating the common culprits from the less likely role of vitamin deficiencies.

**Common Causes of Bad Breath (Halitosis)**

*   **Poor Oral Hygiene:** This is the most frequent cause.
    *   **Plaque and Bacteria:** When you don't brush and floss regularly, food particles linger in your mouth. Bacteria break down these particles, producing volatile sulfur compounds (VSCs) that smell unpleasant (like rotten eggs).  Plaque is a sticky film of bacteria that accumulates on your teeth and gums.
    *   **Tongue Coating:** The back of the tongue can harbor bacteria and dead cells, contributing significantly to odor.
*   **Dry Mouth (Xerostomia):** Saliva helps cleanse the mouth by washing away food particles and bacteria. When saliva production is reduced, these particles linger, leading to bad breath. Dry mouth can be caused by:
    *   Medications (many medications list dry mouth as a side effect)
    *   Medical conditions (Sjögren's syndrome, diabetes)
    *   Dehydration
    *   Mouth breathing
*   **Food:** Certain foods contain volatile compounds that are absorbed into the bloodstream and exhaled through the lungs.
    *   **Garlic and Onions:** These are notorious examples. Their sulfur compounds linger in your system for hours.
    *   **Coffee:** Can contribute to dry mouth and may have its own characteristic odor.
    *   **Sugary Foods:** Feed bacteria in the mouth.
    *   **High-Protein, Low-Carb Diets:** These can sometimes lead to ketosis, which produces a distinct, fruity odor on the breath (though not universally considered "bad").
*   **Tobacco Use:** Smoking and chewing tobacco contribute to bad breath in several ways:
    *   They dry out the mouth.
    *   They leave behind tobacco particles that decompose.
    *   They can cause gum disease.
*   **Dental Problems:**
    *   **Cavities (Tooth Decay):** Provide areas for bacteria to thrive.
    *   **Gum Disease (Gingivitis, Periodontitis):**  Gum disease creates pockets where bacteria can accumulate and cause inflammation and odor.
    *   **Impacted Teeth:** Partially erupted teeth can trap food and bacteria.
*   **Medical Conditions:** In a smaller percentage of cases, bad breath can be a symptom of an underlying medical condition:
    *   **Sinus Infections (Sinusitis):** Nasal discharge can drain into the back of the throat.
    *   **Postnasal Drip:** Similar to sinusitis, excess mucus can harbor bacteria.
    *   **Respiratory Infections (Bronchitis, Pneumonia):**
    *   **Diabetes:** Can lead to a specific breath odor (often described as fruity or like acetone).
    *   **Kidney Disease:** Can cause a fishy or ammonia-like odor.
    *   **Liver Disease:** Can cause a musty odor.
    *   **Gastroesophageal Reflux Disease (GERD):** Stomach acids and undigested food can reflux into the esophagus and mouth.
    *   **Some cancers**
*   **Dentures:** If not properly cleaned, dentures can harbor bacteria and food debris.
*   **Foreign Bodies:** Rarely, objects lodged in the nasal passages (more common in children) can cause bad breath.

**Vitamin Deficiencies and Bad Breath:  Less Likely, but Possible**

While vitamin deficiencies are not a *direct* and common cause of halitosis, they *can* indirectly contribute by affecting oral health. Here's the connection:

*   **Vitamin C Deficiency (Scurvy):** Severe vitamin C deficiency leads to scurvy, which weakens connective tissues, including those in the gums. This can cause:
    *   Bleeding gums
    *   Loose teeth
    *   Increased susceptibility to gum disease, which, as mentioned above, causes bad breath.
*   **Vitamin B Deficiencies (Especially B12 and Niacin):** Deficiencies in certain B vitamins can contribute to:
    *   **Mouth Sores (Stomatitis):**  Painful sores in the mouth can harbor bacteria.
    *   **Glossitis (Inflammation of the Tongue):**  A swollen, inflamed tongue can trap more bacteria.
*   **Vitamin D Deficiency:** Vitamin D plays a role in bone health and immune function. While more research is needed, some studies suggest a possible link between vitamin D deficiency and an increased risk of periodontal disease (gum disease).

**In summary:**  Vitamin deficiencies are usually **not the primary cause** of bad breath.  However, if a deficiency leads to oral health problems like gum disease or mouth sores, it can *indirectly* contribute to the problem.

**What to Do About Bad Breath**

1.  **Improve Oral Hygiene:**
    *   Brush your teeth at least twice a day, especially after meals. Use fluoride toothpaste.
    *   Floss daily to remove plaque and food particles from between your teeth.
    *   Brush your tongue. A tongue scraper can be very helpful.
    *   Clean dentures properly.
2.  **Stay Hydrated:** Drink plenty of water to help keep your mouth moist.
3.  **Chew Sugar-Free Gum:** Stimulates saliva flow.
4.  **Avoid Odor-Causing Foods:** Limit garlic, onions, and coffee.
5.  **Quit Smoking:**
6.  **See Your Dentist Regularly:** For checkups and cleanings.  Your dentist can identify and treat cavities, gum disease, and other dental problems.
7.  **See Your Doctor:** If you suspect an underlying medical condition is causing your bad breath.
8.  **Consider a Mouthwash:** A mouthwash containing chlorhexidine or cetylpyridinium chloride (CPC) can help kill bacteria. However, these should be used short-term, as they can have side effects.  A simple saline rinse can also be helpful.

If you're concerned about bad breath, start with a thorough evaluation of your oral hygiene habits and consult with your dentist.  They can help you identify the cause and develop a treatment plan.  Unless you have clear symptoms of a vitamin deficiency (fatigue, weakness, etc.), it's unlikely to be the primary cause, but a doctor can test for deficiencies if needed.

# This line goes through all the models that the Gemini client knows about
for model in client.models.list():
    # For each model, we print its name so we can see which ones we can use
    print(model.name)
models/embedding-gecko-001
models/gemini-1.0-pro-vision-latest
models/gemini-pro-vision
models/gemini-1.5-pro-latest
models/gemini-1.5-pro-001
models/gemini-1.5-pro-002
models/gemini-1.5-pro
models/gemini-1.5-flash-latest
models/gemini-1.5-flash-001
models/gemini-1.5-flash-001-tuning
models/gemini-1.5-flash
models/gemini-1.5-flash-002
models/gemini-1.5-flash-8b
models/gemini-1.5-flash-8b-001
models/gemini-1.5-flash-8b-latest
models/gemini-1.5-flash-8b-exp-0827
models/gemini-1.5-flash-8b-exp-0924
models/gemini-2.5-pro-exp-03-25
models/gemini-2.5-pro-preview-03-25
models/gemini-2.5-flash-preview-04-17
models/gemini-2.5-flash-preview-05-20
models/gemini-2.5-flash-preview-04-17-thinking
models/gemini-2.5-pro-preview-05-06
models/gemini-2.0-flash-exp
models/gemini-2.0-flash
models/gemini-2.0-flash-001
models/gemini-2.0-flash-exp-image-generation
models/gemini-2.0-flash-lite-001
models/gemini-2.0-flash-lite
models/gemini-2.0-flash-preview-image-generation
models/gemini-2.0-flash-lite-preview-02-05
models/gemini-2.0-flash-lite-preview
models/gemini-2.0-pro-exp
models/gemini-2.0-pro-exp-02-05
models/gemini-exp-1206
models/gemini-2.0-flash-thinking-exp-01-21
models/gemini-2.0-flash-thinking-exp
models/gemini-2.0-flash-thinking-exp-1219
models/gemini-2.5-flash-preview-tts
models/gemini-2.5-pro-preview-tts
models/learnlm-2.0-flash-experimental
models/gemma-3-1b-it
models/gemma-3-4b-it
models/gemma-3-12b-it
models/gemma-3-27b-it
models/gemma-3n-e4b-it
models/embedding-001
models/text-embedding-004
models/gemini-embedding-exp-03-07
models/gemini-embedding-exp
models/aqa
models/imagen-3.0-generate-002
models/veo-2.0-generate-001
models/gemini-2.5-flash-preview-native-audio-dialog
models/gemini-2.5-flash-exp-native-audio-thinking-dialog
models/gemini-2.0-flash-live-001
from pprint import pprint  # This helps print things in a neat and easy-to-read way

# We go through all the AI models available using the Gemini API
for model in client.models.list():
    
    # We check if the model's name is "gemini-2.0-flash" (this is the one we want to use)
    if model.name == 'models/gemini-2.0-flash':
        
        # If we found it, we print out all its details nicely (like how it works, its version, etc.)
        pprint(model.to_json_dict())
        
        # Stop the loop: we found what we were looking for!
        break
{'description': 'Gemini 2.0 Flash',
 'display_name': 'Gemini 2.0 Flash',
 'input_token_limit': 1048576,
 'name': 'models/gemini-2.0-flash',
 'output_token_limit': 8192,
 'supported_actions': ['generateContent',
                       'countTokens',
                       'createCachedContent',
                       'batchGenerateContent'],
 'tuned_model_info': {},
 'version': '2.0'}
# We import something called "types" from Google's genai tools.
# These help us set up rules for how the Gemini model should respond.
from google.genai import types

# We are creating a setting (called config) that says:
# "Give me no more than 200 tokens in the answer."
# (Tokens are word pieces, so 200 tokens is roughly 150 words.)
short_config = types.GenerateContentConfig(max_output_tokens = 200)

# Now we ask Gemini to do something for us!
# We tell it:
# - Which model to use: "gemini-2.0-flash" (this is a fast and smart AI)
# - What settings to follow (short_config)
# - What question or task to answer: "Write a 100 word essay on Einstein's E=MC^2"
response = client.models.generate_content(
    model = "gemini-2.0-flash",
    config = short_config,
    contents = "Write a 100 word essay on the Einstein's E=MC^2"
)

# Finally, we print out what Gemini said (its answer!)
print(response.text)
Einstein's iconic equation, E=mc², revolutionized our understanding of the universe by revealing the fundamental relationship between energy (E) and mass (m). The "c" represents the speed of light in a vacuum, a massive constant. This equation signifies that mass and energy are interchangeable; a small amount of mass can be converted into an enormous amount of energy, and vice versa.

This principle underpins nuclear power, where the splitting of atoms releases tremendous energy. Conversely, it explains the formation of stars, where nuclear fusion converts mass into light and heat. E=mc² is not just a formula; it's a profound statement about the interconnectedness of matter and energy, shaping modern physics and our understanding of the cosmos.

response = chat.send_message('Do you remember my name')

print(response.text)
Yes, I remember your name is Rajesh Karra.

# First, we create a "config" that controls how random the AI's answers are.
# A higher temperature like 2.0 makes the answers more creative and surprising!
high_temp_config = types.GenerateContentConfig(temperature=2.0)

# Now we repeat the next part 5 times
for _ in range(5):
    # We ask the Gemini model (the smart robot) to give us a random color name
    # We give it our "high randomness" config so it gives different answers
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # This tells it which version of Gemini to use
        config=high_temp_config,   # This is how random/creative it should be
        contents="Pick a random color...(respond in a single word)"  # This is our question
    )
    
    # If we got an answer from Gemini...
    if response.text:
        # Print the answer and a row of dashes to separate the outputs
        print(response.text, '-' * 25)
Magenta
 -------------------------
Magenta
 -------------------------
Emerald
 -------------------------
Azure
 -------------------------
Azure
 -------------------------
response = chat.send_message('Do you rememeber my name')

print(response.text)
Yes, I remember your name is Rajesh Karra. I'm designed to remember information from our conversation.

# This line sets up how "creative" or "random" Gemini should be.
# Temperature = 0.0 means: "Be very serious and give the same answer every time"
low_temp_config = types.GenerateContentConfig(temperature = 0.0)

# We are going to ask Gemini the same question 5 times
for _ in range(5):
    # We ask Gemini: "Pick a random color... (just one word)"
    # But because the temperature is low, it may not be very random
    response = client.models.generate_content(
        model = "gemini-2.0-flash",       # This is the name of the Gemini model we are using
        config = low_temp_config,         # We tell Gemini to use the serious (low-temp) setting
        contents = 'Pick a random color...(respond in a single word)'  # This is the question we ask
    )

    # If Gemini gave us an answer, we print it
    if response.text:
        print(response.text, '-' * 25)    # We also print some dashes to separate each answer
Azure
 -------------------------
Azure
 -------------------------
Azure
 -------------------------
Azure
 -------------------------
Azure
 -------------------------
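Why does temperature 0.0 pin the model to "Azure" while 2.0 varies its answers? Before sampling a token, the model divides each raw score (logit) by the temperature and then applies a softmax. A small sketch of that math; the logits below are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into sampling probabilities, scaled by temperature."""
    if temperature == 0:
        # Temperature 0 degenerates to greedy decoding: always pick the argmax
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # made-up scores for three color tokens
print(softmax_with_temperature(logits, 0.1))   # almost all mass on the top token
print(softmax_with_temperature(logits, 10.0))  # near-uniform: any token can win
```

Low temperature sharpens the distribution toward the single most likely token (hence "Azure" five times); high temperature flattens it, so less likely tokens get real chances.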
# First, we set how the model should behave using settings
# temperature = how creative Gemini should be (higher = more random and fun!)
# top_p = nucleus sampling: only sample from the smallest set of next tokens
# whose combined probability reaches the threshold
model_config = types.GenerateContentConfig(
    temperature = 1.0,   # Full creativity
    top_p = 0.95         # Sample from tokens covering the top 95% of probability mass
)

# Now we give Gemini a fun task!
# We're pretending it's a creative writer and asking it to write a story
story_prompt = "you are a creative writer. Write a short story about a cat who goes on an adventure"

# This is where we ask Gemini to do the job
# We send our prompt and settings to Gemini, and it gives us a story
response = client.models.generate_content(
    model = "gemini-2.0-flash",   # We're using the fast Gemini model
    config = model_config,        # Use the creative settings we picked
    contents = story_prompt       # This is our prompt (what we want Gemini to write)
)

# Finally, we print out the story Gemini created!
print(response.text)
Clementine, a ginger tabby of discerning taste and even more discerning naps, considered the windowsill boring. Terribly, irrevocably boring. Beyond the geraniums, the world shimmered with a summer heat haze, buzzing with unseen delights. For months, she'd been content with sunbeams and swatting dust motes, but today, a profound yearning throbbed in her tiny, feline heart. Adventure called.

She stretched, extending each claw with theatrical flair, and leaped. The forbidden thrill of rough concrete greeted her paws. Freedom!

Clementine padded cautiously towards the untamed wilderness of Mrs. Higgins' prize-winning roses. Each bloom was a fragrant fortress, buzzing with bees she'd only ever glimpsed from afar. She stalked, a miniature tiger in a floral jungle, her tail twitching with predatory excitement. A particularly plump bumblebee, oblivious to its imminent danger, hovered near a velvety crimson petal.

This, Clementine decided, was her first obstacle.

She crouched, muscles coiled, and sprung. The bee, startled by the orange blur, veered sharply. Clementine landed in a thorny bush, letting out a yowl of indignation. Adventure, it seemed, came with prickles.

Undeterred, she pushed on, emerging scratched but triumphant. Her journey led her to the Whispering Woods, a patch of unkempt shrubbery bordering the Higgins' garden. Here, the air was thick with the scent of damp earth and decaying leaves. Strange rustlings emanated from the shadows, and Clementine, despite her bravado, felt a tremor of fear.

Suddenly, a movement caught her eye. A tiny, grey mouse, its whiskers twitching nervously, peeked out from beneath a gnarled root. Here was an opportunity, a chance to prove her hunting prowess.

She stalked, silent as a shadow. This was it. The culmination of her grand adventure. She tensed, ready to pounce.

Then, the mouse squeaked. It was a pathetic, high-pitched sound, filled with such abject terror that Clementine's predatory instincts faltered. She imagined the mouse's tiny heart beating like a hummingbird's.

Instead of attacking, she sat down, her ginger fur shimmering in the dappled sunlight. The mouse, emboldened by her inaction, crept closer. It circled her cautiously, sniffing at her paws.

For a long moment, they regarded each other, predator and prey, separated only by a fragile understanding. Finally, the mouse scampered away, disappearing back into the darkness.

Clementine watched it go. The thrill of the hunt was gone, replaced by a strange, unfamiliar warmth. Perhaps, she thought, adventure wasn't about conquering, but about connection.

The sun was beginning to dip below the horizon, painting the sky in hues of orange and pink. It was time to go home.

She padded back towards the windowsill, her tail held high. The world outside was still vast and mysterious, but she no longer yearned to conquer it. She had faced thorns, braved the whispering woods, and even spared a mouse. Clementine, the ginger tabby, had gone on an adventure, and returned wiser, not just braver.

As she settled back onto her favourite cushion, the familiar warmth of the sun on her fur, she knew this was just the beginning. Tomorrow, perhaps, she would explore the mysteries of the bird bath. But for now, a nap was definitely in order. Adventure, after all, was tiring work.

# 🧠 This block sets how the Gemini model should behave
# Think of it like telling the AI:
# "Be calm (low temperature), consider every token (top_p), and don't talk too much (limit tokens)"
model_config = types.GenerateContentConfig(
    temperature = 0.1,       # Low temperature = more predictable answers
    top_p = 1,               # top_p = 1 means no nucleus cutoff (consider all tokens)
    max_output_tokens = 5,   # Only return up to 5 tokens in the answer
)

# πŸ“ This is the message (prompt) we send to Gemini
# We’re asking it to guess the emotion of a movie review (Positive, Neutral, or Negative)
zero_shot_prompt = """Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: "Her" is a disturbing study revealing the direction
humanity is headed if AI is allowed to keep evolving,
unchecked. I wish there were more movies like this masterpiece.
Sentiment: """  # πŸ‘ˆ We're asking Gemini to complete this line with one word: POSITIVE, NEUTRAL, or NEGATIVE

# 🤖 Now we send the prompt to Gemini to get an answer
response = client.models.generate_content(
    model = "gemini-2.0-flash",  # Use the fast Gemini model
    config = model_config,       # Tell it how to behave (from above)
    contents = zero_shot_prompt  # Give it the question we want answered
)

# 📢 Show Gemini's answer on the screen
print(response.text)
POSITIVE

# We import a special tool called 'enum' that helps us make a list of choices
import enum

# We make a class (like a labeled box) called Sentiment
# Inside, we list 3 possible feelings (or moods):
# - POSITIVE: means happy or good
# - NEUTRAL: means okay, not good or bad
# - NEGATIVE: means sad or bad
class Sentiment(enum.Enum):
    POSITIVE = 'positive'
    NEUTRAL = 'neutral'
    NEGATIVE = 'negative'

# We ask the Gemini robot to give us a response based on some input (zero_shot_prompt)
# We tell it:
# - Use the "gemini-2.0-flash" model
# - Expect the answer to be one of the moods (Sentiment)
# - Send the question we want to ask (zero_shot_prompt)
response = client.models.generate_content(
    model = "gemini-2.0-flash",
    config = types.GenerateContentConfig(
        response_mime_type = "text/x.enum",  # This says: "Send the answer in one of our enum moods"
        response_schema = Sentiment  # This tells Gemini to use our Sentiment choices
    ), 
    contents = zero_shot_prompt  # This is the actual thing we ask Gemini to think about
)

# Print the answer Gemini gives us!
print(response.text)
positive
enum_response = response.parsed
print(enum_response)
print(type(enum_response))
Sentiment.POSITIVE
<enum 'Sentiment'>
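`response.parsed` works because a Python `Enum` class can be called with a raw value to look up the corresponding member; that is effectively how the text `positive` becomes `Sentiment.POSITIVE`. A quick self-contained sketch:

```python
import enum

class Sentiment(enum.Enum):
    POSITIVE = 'positive'
    NEUTRAL = 'neutral'
    NEGATIVE = 'negative'

# Look a member up by value (what parsing the model's raw text amounts to)
member = Sentiment('positive')
print(member)        # Sentiment.POSITIVE
print(member.name)   # POSITIVE
print(member.value)  # positive
```

This is why `response.text` prints `positive` (the value) while `response.parsed` shows `Sentiment.POSITIVE` (the member).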
help(response.parsed)
Help on Sentiment in module __main__ object:

class Sentiment(enum.Enum)
 |  Sentiment(value, names=None, *, module=None, qualname=None, type=None, start=1)
 |  
 |  An enumeration.
 |  
 |  Method resolution order:
 |      Sentiment
 |      enum.Enum
 |      builtins.object
 |  
 |  Data and other attributes defined here:
 |  
 |  NEGATIVE = <Sentiment.NEGATIVE: 'negative'>
 |  
 |  NEUTRAL = <Sentiment.NEUTRAL: 'neutral'>
 |  
 |  POSITIVE = <Sentiment.POSITIVE: 'positive'>
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from enum.Enum:
 |  
 |  name
 |      The name of the Enum member.
 |  
 |  value
 |      The value of the Enum member.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from enum.EnumMeta:
 |  
 |  __members__
 |      Returns a mapping of member name->value.
 |      
 |      This mapping lists all enum members, including aliases. Note that this
 |      is a read-only view of the internal mapping.

few_shot_prompt = """Parse a customer's pizza order into valid JSON:

EXAMPLE:
I want a small pizza with cheese, tomato sauce, and pepperoni.
JSON Response:
```
{
"size": "small",
"type": "normal",
"ingredients": ["cheese", "tomato sauce", "pepperoni"]
}
```

EXAMPLE:
Can I get a large pizza with tomato sauce, basil and mozzarella
JSON Response:
```
{
"size": "large",
"type": "normal",
"ingredients": ["tomato sauce", "basil", "mozzarella"]
}
```

ORDER:
"""
# πŸ—£οΈ This is what the customer actually says
customer_order = "Give me a large with cheese & pineapple"

# πŸ€– We now ask Gemini AI to help us figure out what this order means in code (JSON)
response = client.models.generate_content(
    model = "gemini-2.0-flash",  # 🧠 We're using Gemini Flash, a fast AI model
    config = types.GenerateContentConfig(
        temperature = 0.1,        # 🎯 Low temperature = more accurate/serious answers
        top_p = 1,                # πŸ“Š Controls randomness; 1 = normal
        max_output_tokens = 250, # πŸ“ Max length of the answer
    ), 
    # πŸ“¦ We send both our instructions (examples) and the customer order
    contents = [few_shot_prompt, customer_order]
)

# πŸ–¨οΈ Finally, we print what Gemini answered!
print(response.text)
```json
{
"size": "large",
"type": "normal",
"ingredients": ["cheese", "pineapple"]
}
```
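Note that the model wrapped its answer in a Markdown code fence, so the raw `response.text` is not directly `json.loads`-able. A small helper can strip the fence first; the function name and regex below are our own illustration, not part of the SDK:

```python
import json
import re

def extract_json(text):
    """Parse model output that may be wrapped in a Markdown code fence."""
    match = re.search(r"`{3}(?:json)?\s*(.*?)\s*`{3}", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

# Rebuild the reply from above (fence assembled so this block is self-contained)
FENCE = "`" * 3
reply = (FENCE + "json\n"
         '{"size": "large", "type": "normal", '
         '"ingredients": ["cheese", "pineapple"]}\n' + FENCE)

order = extract_json(reply)
print(order["size"], order["ingredients"])  # large ['cheese', 'pineapple']
```

With the fence stripped, the order is a normal Python dict ready for downstream code.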