Conversation

@Saahi30 (Collaborator) commented Sep 29, 2025

📝 Description

This pull request introduces the backend API endpoints for the brand dashboard. It provides routes for dashboard metrics, brand profile management, campaign management, creator matching, analytics, contract management, application management, payment management, and campaign metrics. All endpoints now use strict input validation for improved security and reliability.
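
A minimal sketch of the validation pattern described above, using FastAPI's `Path`/`Query` constraints; the route, parameter names, and bounds below are illustrative placeholders rather than the actual signatures added in this PR:

```python
from fastapi import APIRouter, HTTPException, Path, Query

router = APIRouter(prefix="/api/brand", tags=["brand-dashboard"])

@router.get("/campaigns/{campaign_id}/metrics")
async def get_campaign_metrics(
    campaign_id: str = Path(..., min_length=1, max_length=64),
    brand_id: str = Query(..., min_length=1),
    days: int = Query(30, ge=1, le=365),
):
    # FastAPI rejects missing or out-of-range values with a 422 before this body runs.
    if not campaign_id.strip():
        raise HTTPException(status_code=400, detail="campaign_id must not be blank")
    return {"campaign_id": campaign_id, "brand_id": brand_id, "days": days}
```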

🔧 Changes Made
Added FastAPI endpoints for brand dashboard backend features
Implemented strict input validation for all query and path parameters
Refactored argument order and type hints for compatibility and safety
Integrated Supabase for database operations
Included helper functions for validation and error handling
📷 Screenshots or Visual Changes (if applicable)
N/A (No visual changes, backend only)

✅ Checklist
[✅] I have read the contributing guidelines.

Summary by CodeRabbit

  • New Features
    • Brand Dashboard APIs for profiles, campaigns, contracts, applications, payments, metrics, and analytics.
    • AI Assistant chat with session memory, enabling natural-language dashboard actions.
    • Revamped Brand Dashboard UI with collapsible sidebar, quick actions, and integrated chat.
    • Creator search and matching capabilities.
    • Navigation update: Dashboard now at /brand/dashboard with optional visibility in the user menu.
  • Documentation
    • Added frontend-backend integration guide.
  • Chores
    • Updated env example (incl. Redis/YouTube keys), added AI dependencies, configurable Redis sessions, and demo seed data.

coderabbitai bot commented Sep 29, 2025

Walkthrough

Introduces a Brand Dashboard backend with extensive REST endpoints and schemas, an AI query routing service (Groq-powered) with Redis-backed session state, and frontend integrations: services, hook, chat assistant, and a redesigned Brand Dashboard page. Wires new routers in FastAPI, adds env/requirements, SQL tables/seeds, and minor UI/util updates.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Environment & Dependencies**<br>`Backend/.env-example`, `Backend/requirements.txt` | Adds GROQ/YouTube keys and Redis Cloud vars; includes groq and openai packages. |
| **App Wiring**<br>`Backend/app/main.py` | Registers new routers: brand dashboard and AI query. |
| **ORM Models**<br>`Backend/app/models/models.py` | Adds BrandProfile, CampaignMetrics, Contract, CreatorMatch with timezone fields; duplicate class definitions present. |
| **AI Routing & Session State**<br>`Backend/app/services/ai_router.py`, `Backend/app/routes/ai_query.py`, `Backend/app/services/ai_services.py`, `Backend/app/services/redis_client.py` | Implements LLM-backed intent routing, AI endpoints, model/param changes, and Redis-configured session helpers with TTL. |
| **Brand Dashboard API & Schemas**<br>`Backend/app/routes/brand_dashboard.py`, `Backend/app/schemas/schema.py`, `Backend/sql.txt` | Adds extensive brand/campaign/contracts/payments/metrics endpoints, new Pydantic schemas, and SQL tables plus seed data. |
| **Frontend Services**<br>`Frontend/src/services/brandApi.ts`, `Frontend/src/services/aiApi.ts` | Introduces API clients for brand dashboard and AI endpoints with typed interfaces. |
| **Frontend Components & Pages**<br>`Frontend/src/components/chat/BrandChatAssistant.tsx`, `Frontend/src/pages/Brand/Dashboard.tsx`, `Frontend/src/hooks/useBrandDashboard.ts`, `Frontend/src/components/user-nav.tsx`, `Frontend/src/components/collaboration-hub/CreatorMatchGrid.tsx`, `Frontend/src/context/AuthContext.tsx`, `Frontend/README-INTEGRATION.md` | Adds chat assistant, rewrites Brand Dashboard with AI integration, new hook for data/actions, nav tweaks, minor key/boolean fixes, and integration docs. |

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor U as User
  participant FC as Frontend (BrandChatAssistant)
  participant A as AI API (/api/ai)
  participant AR as AI Router (Groq)
  participant RS as Redis
  participant BD as Brand API (/api/brand ...)
  participant S as DB

  U->>FC: Type query
  FC->>A: POST /api/ai/query {query, brand_id, context} (+ X-Session-ID)
  A->>RS: get_session_state(session_id)
  A->>AR: process_query(query, brand_id)
  AR->>AR: LLM prompt + parse JSON
  AR-->>A: {intent, route, parameters, ...}
  alt Route requires backend call
    A->>BD: Call mapped endpoint with parameters
    BD->>S: Read/aggregate data
    S-->>BD: Data
    BD-->>A: Result
  else Missing required params
    A-->>FC: follow_up_needed + question
  end
  A->>RS: save_session_state(session_id, merged_state)
  A-->>FC: AIQueryResponse (+ session_id, result)
  FC-->>U: Render AI reply/result
sequenceDiagram
  autonumber
  actor U as User
  participant FP as Frontend (Brand Dashboard)
  participant B as Brand API (/api/brand)
  participant D as DB

  U->>FP: Open Dashboard
  par Initial loads
    FP->>B: GET /overview
    FP->>B: GET /profile?user_id
    FP->>B: GET /campaigns
    FP->>B: GET /applications
    FP->>B: GET /payments
  end
  B->>D: Query tables (sponsorships, profiles, payments, matches, metrics)
  D-->>B: Records
  B-->>FP: Aggregated responses
  FP-->>U: Render metrics, lists, actions

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs

Suggested labels

enhancement, frontend, backend, documentation

Suggested reviewers

  • chandansgowda

Poem

Hoppity-hop, I wired the streams,
Routes and schemas, dashboard dreams.
Redis burrows stash our chat,
Groq-y whispers guide the chat.
Left a trail of seeds in SQL loam—
Tap your ears—new features roam! 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Title Check | ⚠️ Warning | The title is a noun phrase that lacks an imperative verb and does not explicitly state what action was taken. It omits the fact that new backend API endpoints and data models were added for the brand dashboard. As a result, it is too terse to clearly communicate the primary change to someone scanning the PR history. | Consider revising the title to an imperative statement that clearly describes the change, for example “Implement backend API endpoints and models for brand dashboard”. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 90.00%, which is sufficient. The required threshold is 80.00%. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 39

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
Frontend/src/context/AuthContext.tsx (2)

76-83: Rate‑limit fallback returns incorrect onboarding result; return cached last result instead.

Returning { hasOnboarding: false, role: null } when throttled can misroute users. Cache the last known result and return it when within the throttle window; also use a ref for atomic checks.

-    const now = Date.now();
-    if (now - lastRequest < 2000) {
-      console.log("Rate limiting: skipping request");
-      return { hasOnboarding: false, role: null };
-    }
-    setLastRequest(now);
+    const now = Date.now();
+    if (now - lastRequestRef.current < 2000) {
+      console.log("Rate limiting: skipping request");
+      return lastOnboardingRef.current;
+    }
+    lastRequestRef.current = now;

176-180: Guard unsubscribe to prevent a potential NPE on cleanup.

If the listener is undefined due to an initialization error, this can throw during unmount.

-      listener.subscription.unsubscribe();
+      listener?.subscription?.unsubscribe();
Backend/app/services/ai_services.py (1)

21-28: Harden external call: enable JSON mode, add timeout, and fail on non‑2xx.

  • Enforce valid JSON with response_format={"type":"json_object"} to match your prompt’s “Respond in JSON...” requirement.
  • Add timeout and response.raise_for_status(); current code returns empty object on 4xx/5xx.
  • Keep max_completion_tokens (Groq supports it; max_tokens is deprecated).
     headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
     payload = {"model": "moonshotai/kimi-k2-instruct", "messages": [{"role": "user", "content": prompt}], "temperature": 0.6, "max_completion_tokens": 1024}
+    payload["response_format"] = {"type": "json_object"}
@@
-        response = requests.post(CHATGROQ_API_URL_CHAT, json=payload, headers=headers)
-        return response.json().get("choices", [{}])[0].get("message", {}).get("content", {})
+        response = requests.post(CHATGROQ_API_URL_CHAT, json=payload, headers=headers, timeout=30)
+        response.raise_for_status()
+        data = response.json()
+        content = data.get("choices", [{}])[0].get("message", {}).get("content", "{}")
+        # Always return a dict.
+        import json as _json
+        return _json.loads(content) if isinstance(content, str) else content
     except Exception as e:
         return {"error": str(e)}

Refs: JSON Object Mode/Structured outputs and max_completion_tokens in Groq API reference. (console.groq.com)

🧹 Nitpick comments (63)
Frontend/src/context/AuthContext.tsx (4)

84-106: Handle Supabase errors and fetch in parallel; also simplify boolean onboarding check (covers changed line 97).

Avoid false negatives on transient errors and cut latency by parallelizing the three queries. Cache and return the result.

-    // Check if user has completed onboarding by looking for social profiles or brand data
-    const { data: socialProfiles } = await supabase
-      .from("social_profiles")
-      .select("id")
-      .eq("user_id", userToUse.id)
-      .limit(1);
-    
-    const { data: brandData } = await supabase
-      .from("brands")
-      .select("id")
-      .eq("user_id", userToUse.id)
-      .limit(1);
-    
-    const hasOnboarding = Boolean((socialProfiles && socialProfiles.length > 0) || (brandData && brandData.length > 0));
-    
-    // Get user role
-    const { data: userData } = await supabase
-      .from("users")
-      .select("role")
-      .eq("id", userToUse.id)
-      .single();
-    
-    return { hasOnboarding, role: userData?.role || null };
+    // Check onboarding and role in parallel
+    const [
+      { data: socialProfiles, error: spErr },
+      { data: brandData, error: brErr },
+      { data: userData, error: userErr },
+    ] = await Promise.all([
+      supabase.from("social_profiles").select("id").eq("user_id", userToUse.id).limit(1),
+      supabase.from("brands").select("id").eq("user_id", userToUse.id).limit(1),
+      supabase.from("users").select("role").eq("id", userToUse.id).single(),
+    ]);
+
+    if (spErr || brErr || userErr) {
+      console.error("Error checking onboarding/role", { spErr, brErr, userErr });
+      return lastOnboardingRef.current;
+    }
+
+    const hasOnboarding = Boolean(socialProfiles?.length || brandData?.length);
+    const result = { hasOnboarding, role: userData?.role ?? null };
+    lastOnboardingRef.current = result;
+    return result;

66-69: Use refs for throttle timestamp and last known onboarding result.

Preps the above fixes and avoids stale state in async paths.

-  const [lastRequest, setLastRequest] = useState(0);
+  const lastRequestRef = useRef(0);
+  const lastOnboardingRef = useRef<{ hasOnboarding: boolean; role: string | null }>({
+    hasOnboarding: false,
+    role: null,
+  });

1-7: Import useRef for the throttle/cache refs.

 import {
   createContext,
   useContext,
   useState,
+  useRef,
   ReactNode,
   useEffect,
 } from "react";

25-25: Tighten typing for ensureUserInTable.

Avoid any; use the existing User type.

-async function ensureUserInTable(user: any) {
+async function ensureUserInTable(user: User | null) {
Frontend/src/index.css (2)

154-165: Respect prefers‑reduced‑motion for animations.

Add a reduced‑motion override so users opting out of animation aren’t forced to see them.

Append:

@media (prefers-reduced-motion: reduce) {
  .animate-gradient,
  .animate-float,
  .animate-glow {
    animation: none !important;
  }
}

Also applies to: 159-161, 163-165


118-121: Optional: define font usage explicitly.

If Orbitron is intended for headings only, add a heading class or Tailwind theme font family; if it’s the app default, set body { font-family: 'Orbitron', ui-sans-serif, system-ui, ... }.

Confirm intended scope (global vs. specific components).

Frontend/README-INTEGRATION.md (1)

49-53: Add install steps before running dev servers.

Include dependencies installation to prevent first‑run failures.

-1. **Start Backend:** `cd Backend && python -m uvicorn app.main:app --reload`
-2. **Start Frontend:** `cd Frontend && npm run dev`
+1. **Install Backend deps:** `cd Backend && pip install -r requirements.txt`
+2. **Start Backend:** `python -m uvicorn app.main:app --reload`
+3. **Install Frontend deps:** `cd ../Frontend && npm install`
+4. **Start Frontend:** `npm run dev`
-3. **Navigate to:** `http://localhost:5173/brand/dashboard`
-4. **Try AI Search:** Type questions like "Show me my campaigns" or "Find creators for tech industry"
+5. **Navigate to:** `http://localhost:5173/brand/dashboard`
+6. **Try AI Search:** Type questions like "Show me my campaigns" or "Find creators for tech industry"
Backend/.env-example (2)

1-5: Standardize DB env variables for interoperability.

Prefer common names or a single DSN to ease framework/tooling integration.

Options:

  • Single DSN: DATABASE_URL=postgresql://USER:PASSWORD@HOST:PORT/DBNAME
  • Or discrete vars: POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_HOST, POSTGRES_PORT, POSTGRES_DB

If code expects current names, consider adding both (with DATABASE_URL taking precedence).
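
A sketch of how the example file could expose both styles (names and values below are placeholders, not what the code currently reads):

```
# Single DSN, takes precedence when set
DATABASE_URL=postgresql://USER:PASSWORD@HOST:PORT/DBNAME

# Discrete variables as a fallback
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=inpact
```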


12-15: Support unified Redis URL and TLS flag

  • Add REDIS_URL and REDIS_TLS to Backend/.env-example for managed, TLS-enabled Redis.
  • Update app/services/redis_client.py to prefer REDIS_URL (e.g. via redis.from_url) and enable ssl when REDIS_TLS=true, falling back to HOST/PORT/PASSWORD.
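
A minimal sketch of that fallback order, assuming the async redis client already used by the session helpers and the env var names proposed above:

```python
import os
import redis.asyncio as redis

def create_redis_client() -> redis.Redis:
    # Prefer a single managed-Redis URL; a rediss:// scheme enables TLS implicitly.
    url = os.getenv("REDIS_URL")
    if url:
        return redis.from_url(url, decode_responses=True)
    # Fall back to discrete host/port/password vars with an explicit TLS flag.
    return redis.Redis(
        host=os.getenv("REDIS_HOST", "localhost"),
        port=int(os.getenv("REDIS_PORT", "6379")),
        password=os.getenv("REDIS_PASSWORD") or None,
        ssl=os.getenv("REDIS_TLS", "false").lower() == "true",
        decode_responses=True,
    )
```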
Frontend/src/components/user-nav.tsx (2)

17-22: Nice prop surface to control menu item visibility.

Optional: export UserNavProps if others import it, or keep internal if not needed.


45-55: Minor a11y: add an accessible name to the avatar trigger.

Assistive tech benefits from an aria‑label.

-      <DropdownMenuTrigger asChild>
-        <Button variant="ghost" className="relative h-8 w-8 rounded-full">
+      <DropdownMenuTrigger asChild>
+        <Button aria-label="User menu" variant="ghost" className="relative h-8 w-8 rounded-full">
Backend/app/models/models.py (2)

15-15: Standardize timezone handling across models

You’ve introduced timezone-aware DateTime usage; good. However, some earlier fields still use TIMESTAMP + datetime.utcnow (naive). Recommend making all timestamps DateTime(timezone=True) with a consistent default (e.g., lambda: datetime.now(timezone.utc) or server_default=func.now()), and aligning read/write expectations. This prevents subtle tz drift and serialization mismatches.
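
For illustration, a consistent column definition could look like the sketch below; whether to stamp in the database (server_default) or in Python is a project choice, and both options are shown:

```python
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, func

# Option A: let the database stamp the row
created_at = Column(DateTime(timezone=True), server_default=func.now(), nullable=False)

# Option B: stamp in Python with an aware datetime
updated_at = Column(
    DateTime(timezone=True),
    default=lambda: datetime.now(timezone.utc),
    onupdate=lambda: datetime.now(timezone.utc),
)
```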


189-205: Indexes for CampaignMetrics query patterns; consider types

Likely hot path filters by campaign_id and recorded_at. Add a composite index to keep analytics responsive at scale.

Apply this diff:

 class CampaignMetrics(Base):
   __tablename__ = "campaign_metrics"
+  __table_args__ = (
+      Index("ix_campaign_metrics_campaign_id_recorded_at", "campaign_id", "recorded_at"),
+  )

Optional:

  • Use BIGINT for counters (impressions/clicks/conversions) to avoid overflow on large campaigns.
  • If revenue has precision needs, ensure downstream Pydantic uses Decimal. Based on learnings.
Backend/app/main.py (1)

59-60: Gate seeding in production and externalize CORS.

  • Seed runs on every startup; gate behind an env flag to avoid prod side‑effects.
  • Move allowed origins to env for deploy flexibility.
@@
-    await seed_db()
+    if os.getenv("SEED_ON_STARTUP", "0") == "1":
+        await seed_db()
@@
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=["http://localhost:5173"],
+allowed_origins = os.getenv("CORS_ALLOW_ORIGINS", "http://localhost:5173").split(",")
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=allowed_origins,
     allow_credentials=True,
     allow_methods=["*"],
     allow_headers=["*"],
 )
Backend/app/services/redis_client.py (1)

19-24: Harden JSON state handling.

Guard against malformed/corrupted values to avoid raising in json.loads.

 async def get_session_state(session_id: str):
-    state = await redis_client.get(f"session:{session_id}")
-    return json.loads(state) if state else {}
+    state = await redis_client.get(f"session:{session_id}")
+    if not state:
+        return {}
+    try:
+        return json.loads(state)
+    except Exception:
+        # best-effort recovery
+        return {}
Frontend/src/hooks/useBrandDashboard.ts (2)

25-38: Loading state only reflects overview fetch.

Other loads run concurrently but don’t affect loading, so UI can stop “loading” while work continues. Consider a ref-count or separate initialLoading/isRefreshing.

-const [loading, setLoading] = useState(true);
+const [loading, setLoading] = useState(true);       // initial page load
+const [isRefreshing, setIsRefreshing] = useState(false); // for manual refreshes

Apply setIsRefreshing(true/false) around refresh Promise.all, and keep setLoading scoped to the very first mount.


19-22: Type AI response.

Prefer a concrete type to avoid any leaks.

-import { aiApi } from '../services/aiApi';
+import { aiApi, type AIQueryResponse } from '../services/aiApi';
@@
-const [aiResponse, setAiResponse] = useState<any>(null);
+const [aiResponse, setAiResponse] = useState<AIQueryResponse | null>(null);
Frontend/src/services/aiApi.ts (2)

4-4: Externalize API base URL.

Hardcoding localhost hinders deploys. Read from env and fall back to localhost for dev.

-const AI_API_BASE_URL = 'http://localhost:8000/api/ai';
+const AI_API_BASE_URL =
+  (import.meta as any).env?.VITE_API_BASE_URL
+    ? `${(import.meta as any).env.VITE_API_BASE_URL}/api/ai`
+    : 'http://localhost:8000/api/ai';

91-93: Encode route names in URLs.

Guard against spaces/special chars.

-    return this.makeRequest<{ route_name: string; info: any }>(`/route/${routeName}`);
+    return this.makeRequest<{ route_name: string; info: any }>(`/route/${encodeURIComponent(routeName)}`);
Frontend/src/pages/Brand/Dashboard.tsx (3)

463-467: Use onKeyDown; onKeyPress is deprecated.

Swap to onKeyDown to catch Enter reliably across React versions.

-                    onKeyPress={(e) => {
-                      if (e.key === 'Enter' && searchQuery.trim()) {
+                    onKeyDown={(e) => {
+                      if (e.key === 'Enter' && searchQuery.trim()) {
                         handleAISearch();
                       }
                     }}

247-249: Avoid hard-coded user info.

Pull name/email from auth (e.g., useAuth) or hide if unavailable to prevent confusion in multi-user environments.


512-522: Spinner animation fallback.

className="animate-spin" needs Tailwind. Add inline CSS keyframes to ensure rotation without Tailwind, or wrap Loader2 in a small CSS module with keyframes.

-<Loader2 size={32} className="animate-spin" style={{ color: PRIMARY }} />
+<Loader2 size={32} style={{ color: PRIMARY, animation: "spin 1s linear infinite" }} />
+<style>{`@keyframes spin{to{transform:rotate(360deg)}}`}</style>
Frontend/src/services/brandApi.ts (3)

4-4: Avoid hard‑coded base URL; use env with sensible fallback.

Use an env‑driven base (Vite: VITE_API_BASE_URL, CRA: REACT_APP_API_BASE_URL) and default to a relative /api/brand in prod.

Which bundler are we using (Vite vs CRA/Next)? I can tailor the snippet accordingly.


6-13: Tighten types to reduce any in responses.

Replace any/any[] with minimal shapes to catch regressions (e.g., recent_activity: Application[], creator?: Creator, campaign?: Campaign where applicable).

Also applies to: 54-56, 66-68


72-97: Optional: add request timeout and surface HTTP status in errors.

Use AbortController for a sane timeout (e.g., 15s) and include status/endpoint in thrown Error for better telemetry.

-  private async makeRequest<T>(
+  private async makeRequest<T>(
     endpoint: string, 
     options: RequestInit = {}
   ): Promise<T> {
     const url = `${API_BASE_URL}${endpoint}`;
     
     try {
-      const response = await fetch(url, {
+      const controller = new AbortController();
+      const timeout = setTimeout(() => controller.abort(), 15000);
+      const response = await fetch(url, {
         // headers merged in previous comment
-      });
+        signal: options.signal ?? controller.signal,
+      });
+      clearTimeout(timeout);
...
-      if (!response.ok) {
+      if (!response.ok) {
         const errorData = await response.json().catch(() => ({}));
-        throw new Error(errorData.detail || `HTTP error! status: ${response.status}`);
+        const msg = errorData.detail || errorData.message || response.statusText || 'HTTP error';
+        throw new Error(`${msg} (${response.status}) @ ${endpoint}`);
       }
Backend/app/routes/ai_query.py (8)

9-12: Don’t call basicConfig in library code.

Let the app configure logging; module‑level basicConfig can clobber global logging.

-# Setup logging
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
+# Module logger; configure handlers/levels in app startup
+logger = logging.getLogger(__name__)

154-154: Pydantic v2: prefer model_dump() over dict().

Improves forward compatibility. Based on learnings.

-        response_dict = response.dict()
+        response_dict = response.model_dump()

138-141: Log full traceback on API call failure.

Use logger.exception to capture stack.

-            except Exception as api_exc:
-                logger.error(f"API call failed for intent '{intent}': {api_exc}")
-                api_error = str(api_exc)
+            except Exception as api_exc:
+                logger.exception("API call failed for intent '%s'", intent)
+                api_error = str(api_exc)

183-187: Use logger.exception and chain the HTTPException.

Improves debuggability and aligns with static analysis hints.

-    except HTTPException:
-        raise
-    except Exception as e:
-        logger.error(f"Error processing AI query: {e}")
-        raise HTTPException(status_code=500, detail="Failed to process AI query")
+    except HTTPException:
+        raise
+    except Exception as e:
+        logger.exception("Error processing AI query")
+        raise HTTPException(status_code=500, detail="Failed to process AI query") from e

200-202: Same: exception logging and chaining in /routes.

-    except Exception as e:
-        logger.error(f"Error fetching available routes: {e}")
-        raise HTTPException(status_code=500, detail="Failed to fetch routes")
+    except Exception as e:
+        logger.exception("Error fetching available routes")
+        raise HTTPException(status_code=500, detail="Failed to fetch routes") from e

220-222: Same: exception logging and chaining in /route/{route_name}.

-    except Exception as e:
-        logger.error(f"Error fetching route info: {e}")
-        raise HTTPException(status_code=500, detail="Failed to fetch route info")
+    except Exception as e:
+        logger.exception("Error fetching route info")
+        raise HTTPException(status_code=500, detail="Failed to fetch route info") from e

238-239: Use logger.exception in test endpoint.

Retain non‑throwing behavior but capture traceback.

-    except Exception as e:
-        logger.error(f"Error in test AI query: {e}")
+    except Exception as e:
+        logger.exception("Error in test AI query")

61-67: Naming clarity: distinguish route vs intent.

intent = result.get("route") is confusing since response also has intent. Consider selected_route = result.get("route") and use consistently.

Also applies to: 167-173

Backend/sql.txt (1)

95-114: Add constraints and indexes to support API access patterns.

  • Enforce one profile per brand.
  • Prevent duplicate matches.
  • Speed up analytics by indexing.
-- brand_profiles: one row per user_id
ALTER TABLE brand_profiles
  ADD CONSTRAINT brand_profiles_user_id_uniq UNIQUE (user_id);

-- creator_matches: avoid duplicates and speed brand lookups
CREATE UNIQUE INDEX IF NOT EXISTS uq_creator_matches_brand_creator
  ON creator_matches (brand_id, creator_id);
CREATE INDEX IF NOT EXISTS ix_creator_matches_brand
  ON creator_matches (brand_id);

-- campaign_metrics: time-series constraints + index
ALTER TABLE campaign_metrics
  ALTER COLUMN recorded_at SET NOT NULL,
  ALTER COLUMN campaign_id SET NOT NULL;
CREATE INDEX IF NOT EXISTS ix_campaign_metrics_campaign_time
  ON campaign_metrics (campaign_id, recorded_at);

-- contracts: speed brand/status queries
CREATE INDEX IF NOT EXISTS ix_contracts_brand_status
  ON contracts (brand_id, status);
Frontend/src/components/chat/BrandChatAssistant.tsx (3)

37-51: Abort in‑flight requests on unmount or when issuing a new one

Avoid setting state after unmount and cancel stale requests when the user sends quickly.

-  const sendMessageToBackend = async (message: string, currentSessionId?: string) => {
+  const sendMessageToBackend = async (message: string, currentSessionId?: string) => {
+    const controller = new AbortController();
+    const { signal } = controller;
+    // Optional: track controller refs if you want to cancel previous calls on new sends
     try {
       const response = await fetch('/api/ai/query', {
         method: 'POST',
         headers: {
           'Content-Type': 'application/json',
           ...(currentSessionId && { 'X-Session-ID': currentSessionId }),
         },
+        signal,
         body: JSON.stringify({
           query: message,
           // brand_id inferred server-side
           context: currentSessionId ? { session_id: currentSessionId } : undefined,
         }),
       });

Also return the controller so callers can cancel when needed.


71-95: One‑time effect reads state from closure; guard explicitly

The mount effect reads messages.length from a stale closure. Use a ref guard for clarity and to silence lints.

-  useEffect(() => {
-    if (messages.length === 1) {
+  const didInitRef = useRef(false);
+  useEffect(() => {
+    if (didInitRef.current) return;
+    didInitRef.current = true;
       setLoading(true);
       sendMessageToBackend(initialQuery)
         .then((response) => {

256-273: Minor a11y: label and focus the input

Add an accessible label and focus the input when opening.

-        <input
+        <input
+          aria-label="Chat message"
+          autoFocus
           type="text"
Backend/app/services/ai_router.py (3)

195-197: Timezone‑aware timestamps

Store ISO 8601 UTC timestamps to avoid ambiguity.

-        response["timestamp"] = str(datetime.now())
+        from datetime import timezone
+        response["timestamp"] = datetime.now(timezone.utc).isoformat()

13-15: Avoid configuring logging in library modules

Set up logging in the app entrypoint to prevent duplicate handlers/format clashes when imported.

-# Setup logging
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
+# Logger for this module; configure handlers/levels in app startup
+logger = logging.getLogger(__name__)

26-82: Route metadata: treat parameter names as machine‑readable

Since _enhance_response checks "brand_id" in route_info["parameters"], any "(optional)" suffixes can cause mismatches if you later check exact names. Consider normalizing to a structure like {"name": "campaign_id", "required": False}.

Would you like a follow‑up patch to introduce a typed RouteParam model and validator?
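
A sketch of what such a model could look like (class and field names are only suggestions):

```python
from typing import List, Optional

from pydantic import BaseModel

class RouteParam(BaseModel):
    name: str
    required: bool = True
    description: Optional[str] = None

class RouteInfo(BaseModel):
    route: str
    parameters: List[RouteParam] = []

# Lookups become explicit instead of substring matching, e.g.:
# any(p.name == "brand_id" and p.required for p in route_info.parameters)
```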

Backend/app/schemas/schema.py (6)

151-157: Use precise collection types

Prefer typing with generics to improve validation and docs.

-class DashboardOverviewResponse(BaseModel):
+class DashboardOverviewResponse(BaseModel):
     total_campaigns: int
     active_campaigns: int
     total_revenue: float
     total_creators_matched: int
-    recent_activity: list
+    recent_activity: List[Dict[str, Any]]
...
-class ApplicationSummaryResponse(BaseModel):
+class ApplicationSummaryResponse(BaseModel):
     total_applications: int
     pending_applications: int
     accepted_applications: int
     rejected_applications: int
     applications_by_campaign: Dict[str, int]
-    recent_applications: List[Dict]
+    recent_applications: List[Dict[str, Any]]
...
-class PaymentAnalyticsResponse(BaseModel):
+class PaymentAnalyticsResponse(BaseModel):
     total_payments: int
     completed_payments: int
     pending_payments: int
     total_amount: float
     average_payment: float
     payments_by_month: Dict[str, float]

Also applies to: 200-207, 227-234


25-27: Tighten required_audience element type

This is currently Dict[str, list] (untyped). If it’s a list of strings or numbers, specify it. If mixed, use List[Any].

-    required_audience: Dict[str, list]
+    required_audience: Dict[str, List[Any]]

What are the expected element types? I can tailor the model precisely.


69-75: Constrain status fields with Literal/Enum

Free‑form strings invite invalid states. Use Literal or Enum once the allowed set is confirmed.

Examples:

from typing import Literal

ApplicationStatus = Literal["accepted","rejected","pending"]
PaymentStatus = Literal["pending","completed","failed","cancelled"]
# ContractStatus = Literal["draft","..."]  # confirm values

Then apply to:

  • ApplicationUpdateRequest.status
  • PaymentStatusUpdate.status
  • ContractUpdate.status / ContractResponse.status / ContractCreate.status (defaults intact)
    Do you want me to open a follow‑up PR once you confirm the exact status vocab?

Also applies to: 121-124, 196-199, 224-226


61-68: Validate URLs/emails using Pydantic types

Use AnyUrl/HttpUrl and EmailStr for stronger validation.

-from pydantic import BaseModel
+from pydantic import BaseModel, EmailStr, AnyUrl
...
 class BrandProfileCreate(BaseModel):
     user_id: str
     company_name: Optional[str] = None
-    website: Optional[str] = None
+    website: Optional[AnyUrl] = None
     industry: Optional[str] = None
     contact_person: Optional[str] = None
-    contact_email: Optional[str] = None
+    contact_email: Optional[EmailStr] = None

Apply same to BrandProfileUpdate.

Also applies to: 69-75, 76-85


22-23: Consider UUID types for IDs

If your DB uses UUIDs, annotate as UUID for automatic validation/serialization.

-from typing import Optional, Dict, List
+from typing import Optional, Dict, List
+from uuid import UUID
...
-    brand_id: str
+    brand_id: UUID

Repeat for id, user_id, creator_id, sponsorship_id, campaign_id, etc., if applicable.

Also applies to: 30-31, 38-41, 44-48, 114-118, 126-133, 139-145, 210-218


92-98: Monetary values: Decimal over float

For currency (revenue/amount), Decimal avoids rounding issues.

+from decimal import Decimal
...
-    revenue: Optional[float] = None
+    revenue: Optional[Decimal] = None

Similarly for PaymentResponse.amount/total_amount/average_payment. Ensure JSON encoders handle Decimal.

Also applies to: 236-243

Backend/app/routes/brand_dashboard.py (17)

76-93: In-memory rate limiter: unused window_seconds, key collision, unbounded growth

  • window_seconds is ignored; using only minute causes odd windows and memory growth.
  • No cleanup; dict grows unbounded.

Minimal fix:

-from datetime import datetime, timezone
+from datetime import datetime, timezone, timedelta
 from collections import defaultdict
-request_counts = {}
+request_counts = {}
@@
-def check_rate_limit(user_id: str, max_requests: int = 100, window_seconds: int = 60):
+def check_rate_limit(user_id: str, max_requests: int = 100, window_seconds: int = 60):
     """Simple rate limiting check (in production, use Redis)"""
     current_time = datetime.now(timezone.utc)
-    key = f"{user_id}:{current_time.minute}"
+    window_key = int(current_time.timestamp()) // window_seconds
+    key = f"{user_id}:{window_key}"
@@
-    if request_counts[key] > max_requests:
+    if request_counts[key] > max_requests:
         raise HTTPException(status_code=429, detail="Rate limit exceeded")
-    
-    return True
+    # basic GC for old windows
+    for k in list(request_counts.keys()):
+        uid, win = k.rsplit(":", 1)  # rsplit guards against ':' appearing in user_id
+        if uid == user_id and int(win) < window_key - 1:
+            request_counts.pop(k, None)
+    return True

Wire it as a dependency (e.g., Depends(lambda current=Depends(get_current_user): check_rate_limit(current["id"]))) once auth is added.


98-162: Exception handling: prefer logger.exception + exception chaining; remove unused local

  • Use logger.exception(...) inside except to capture stacktrace; re-raise with from e.
  • Local profile is assigned and never used.
-        profile = profile_result.data[0] if profile_result.data else None
+        # profile currently unused; keep if you plan to expose later
+        # profile = profile_result.data[0] if profile_result.data else None
@@
-    except Exception as e:
-        logger.error(f"Unexpected error in dashboard overview: {e}")
-        raise HTTPException(status_code=500, detail="Internal server error")
+    except Exception as e:
+        logger.exception("Unexpected error in dashboard overview")
+        raise HTTPException(status_code=500, detail="Internal server error") from e

Please apply this logging pattern across routes. Static hints (TRY400/B904/F841) agree.


666-712: Applications: N+1 lookups for users/campaigns

You fetch creator and campaign per row. Batch with in_ to reduce round-trips.

Sketch:

ids_users = {a["creator_id"] for a in applications}
ids_camps = {a["sponsorship_id"] for a in applications}
users = await sb_exec(supabase.table("users").select("*").in_("id", list(ids_users)))
camps = await sb_exec(supabase.table("sponsorships").select("*").in_("id", list(ids_camps)))
u_map = {u["id"]: u for u in users}; c_map = {c["id"]: c for c in camps}

Then map without per-row queries.


856-890: Payments: N+1 enrichment

Same pattern; batch fetch creators/campaigns via in_.


972-1006: Payment analytics: month key may be datetime; normalize safely

If transaction_date is a datetime, slicing fails. Normalize first.

-                month = payment["transaction_date"][:7] if payment["transaction_date"] else "unknown"
+                dt = payment.get("transaction_date")
+                if isinstance(dt, str):
+                    month = dt[:7]
+                elif hasattr(dt, "strftime"):
+                    month = dt.strftime("%Y-%m")
+                else:
+                    month = "unknown"

1019-1057: Metrics POST: add response model and async exec helper

Return typed response and use the async Supabase exec helper.

-@router.post("/campaigns/{campaign_id}/metrics")
+@router.post("/campaigns/{campaign_id}/metrics", response_model=CampaignMetricsResponse)
 async def add_campaign_metrics(
@@
-        response = supabase.table("campaign_metrics").insert(metrics_data).execute()
-        
-        if response.data:
-            return response.data[0]
+        response = supabase.table("campaign_metrics").insert(metrics_data).execute()
+        if response.data:
+            return response.data[0]

(And consider switching to await sb_exec(...) if you adopt the async helper.)


1066-1091: Metrics GET: add response model

Return List[CampaignMetricsResponse] for consistency.


1098-1126: Metrics PUT: add response model

Return CampaignMetricsResponse.


149-161: Unify exception handling across routes

Many except Exception blocks use logger.error and drop traceback. Replace with logger.exception and chain the HTTPException from e to retain context. This aligns with static analysis hints (BLE001/TRY400/B904).

Also applies to: 192-195, 213-216, 237-240, 284-287, 326-329, 354-357, 378-381, 413-416, 458-461, 498-501, 549-552, 582-585, 601-604, 629-632, 657-660, 715-718, 761-764, 803-806, 847-850, 894-897, 935-938, 968-971, 1010-1013, 1062-1065, 1094-1097, 1130-1132


245-261: Add response_model for campaigns list

Expose a typed list for /campaigns. If you don’t have a SponsorshipResponse schema yet, consider adding one; otherwise FastAPI will return unvalidated dicts.


1-5: Remove unused SQLAlchemy-related imports

AsyncSession, select, and AsyncSessionLocal are unused in this module (Supabase is used). Keeping them can confuse readers.


49-60: Wire require_brand_role/validate_brand_access as dependencies

Consider small dependency helpers to enforce role/access consistently:

def assert_brand(current_user):
    require_brand_role(current_user["role"])
    return current_user["id"]

Then each route can accept current_user=Depends(get_current_user) and derive brand_id = assert_brand(current_user).


419-423: Validate status enums via Pydantic types

For ApplicationUpdateRequest and PaymentStatusUpdate, promote status to Literal[...] or an Enum to reject invalid values at parse time.

I can update the schemas and route signatures if you want a patch.

Also applies to: 771-799, 939-965


139-147: “Recent” items should be ordered explicitly

You slice first 5 without ordering. Add an .order("applied_at", desc=True) when fetching applications/payments to ensure deterministic “recent”.

Also applies to: 833-835
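
A sketch of the ordered fetch with the supabase-py query builder; the table and column names are assumed from the surrounding routes:

```python
recent_apps = (
    supabase.table("sponsorship_applications")
    .select("*")
    .order("applied_at", desc=True)
    .limit(5)
    .execute()
)
rows = recent_apps.data or []
```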


819-820: Avoid calling route functions from other route handlers

await get_brand_applications(brand_id) and await get_brand_payments(brand_id) couple handlers and mix request concerns. Extract shared service-layer functions and call those instead.

Also applies to: 983-984
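
One possible shape for such a helper (a sketch; the function name, table, and column are illustrative and assume the module-level supabase client):

```python
async def fetch_brand_payments(brand_id: str) -> list[dict]:
    """Shared data access reusable by the payments route and payment analytics."""
    result = supabase.table("payments").select("*").eq("brand_id", brand_id).execute()
    return result.data or []
```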


10-19: Clean up unused imports in brand_dashboard.py: remove the SQLAlchemy imports (AsyncSession, select, AsyncSessionLocal), as this module uses the Supabase client exclusively for data access. Note that CreatorMatchAnalyticsResponse is correctly defined in schemas/schema.py and should not be removed.


462-494: get_creator_profile: remove unused brand_id, avoid magic constant, add response_model

  • Drop the unused brand_id parameter or enforce it for authorization checks
  • Declare a Pydantic response_model on the @router.get decorator
  • Replace the hardcoded match_score (0.85) with a real calculation or configurable logic
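
As a purely illustrative replacement for the constant, one could score on the overlap of creator niches and brand industries; the field names here are assumptions about what the records expose:

```python
def compute_match_score(creator_niches: set[str], brand_industries: set[str]) -> float:
    """Jaccard overlap between a creator's niches and the brand's industries."""
    if not creator_niches or not brand_industries:
        return 0.0
    union = creator_niches | brand_industries
    return round(len(creator_niches & brand_industries) / len(union), 2)
```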
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3be437 and 7ff5998.

⛔ Files ignored due to path filters (1)
  • Frontend/public/aossielogo.png is excluded by !**/*.png
📒 Files selected for processing (21)
  • Backend/.env-example (1 hunks)
  • Backend/app/main.py (2 hunks)
  • Backend/app/models/models.py (2 hunks)
  • Backend/app/routes/ai_query.py (1 hunks)
  • Backend/app/routes/brand_dashboard.py (1 hunks)
  • Backend/app/schemas/schema.py (2 hunks)
  • Backend/app/services/ai_router.py (1 hunks)
  • Backend/app/services/ai_services.py (1 hunks)
  • Backend/app/services/redis_client.py (1 hunks)
  • Backend/requirements.txt (1 hunks)
  • Backend/sql.txt (1 hunks)
  • Frontend/README-INTEGRATION.md (1 hunks)
  • Frontend/src/components/chat/BrandChatAssistant.tsx (1 hunks)
  • Frontend/src/components/collaboration-hub/CreatorMatchGrid.tsx (1 hunks)
  • Frontend/src/components/user-nav.tsx (2 hunks)
  • Frontend/src/context/AuthContext.tsx (1 hunks)
  • Frontend/src/hooks/useBrandDashboard.ts (1 hunks)
  • Frontend/src/index.css (1 hunks)
  • Frontend/src/pages/Brand/Dashboard.tsx (1 hunks)
  • Frontend/src/services/aiApi.ts (1 hunks)
  • Frontend/src/services/brandApi.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-05-07T21:28:06.358Z
Learnt from: muntaxir4
PR: AOSSIE-Org/InPactAI#56
File: Backend/app/services/redis_client.py:1-4
Timestamp: 2025-05-07T21:28:06.358Z
Learning: Hardcoded Redis connection parameters in Backend/app/services/redis_client.py are intentional during development, with plans to implement environment variable configuration later during production preparation.

Applied to files:

  • Backend/app/services/redis_client.py
🧬 Code graph analysis (7)
Frontend/src/hooks/useBrandDashboard.ts (3)
Frontend/src/context/AuthContext.tsx (1)
  • useAuth (216-222)
Frontend/src/services/brandApi.ts (14)
  • DashboardOverview (7-13)
  • BrandProfile (15-24)
  • Campaign (26-36)
  • CreatorMatch (38-44)
  • Application (46-56)
  • Payment (58-68)
  • brandApi (246-246)
  • createCampaign (132-144)
  • updateCampaign (146-151)
  • deleteCampaign (153-157)
  • updateApplicationStatus (202-212)
  • searchCreators (164-178)
  • getCampaignPerformance (185-187)
  • getRevenueAnalytics (189-191)
Frontend/src/services/aiApi.ts (2)
  • queryAI (57-83)
  • aiApi (102-102)
Frontend/src/components/collaboration-hub/CreatorMatchGrid.tsx (1)
Frontend/src/components/collaboration-hub/CreatorMatchCard.tsx (1)
  • CreatorMatchCard (65-130)
Backend/app/services/ai_router.py (1)
Backend/app/routes/ai_query.py (1)
  • get_route_info (205-222)
Backend/app/routes/ai_query.py (3)
Backend/app/services/redis_client.py (2)
  • get_session_state (19-21)
  • save_session_state (23-24)
Backend/app/services/ai_router.py (3)
  • process_query (131-169)
  • list_available_routes (338-340)
  • get_route_info (334-336)
Backend/app/routes/brand_dashboard.py (9)
  • search_creators (418-460)
  • get_dashboard_overview (99-161)
  • get_creator_matches (387-415)
  • get_brand_profile (197-215)
  • get_brand_campaigns (246-260)
  • get_creator_profile (463-500)
  • get_campaign_performance (507-551)
  • get_revenue_analytics (554-584)
  • get_brand_contracts (591-603)
Backend/app/models/models.py (2)
Backend/app/models/chat.py (1)
  • generate_uuid (9-10)
Backend/app/routes/post.py (1)
  • generate_uuid (31-32)
Frontend/src/pages/Brand/Dashboard.tsx (1)
Frontend/src/hooks/useBrandDashboard.ts (1)
  • useBrandDashboard (6-288)
Backend/app/routes/brand_dashboard.py (3)
Backend/app/models/models.py (5)
  • User (25-53)
  • Sponsorship (76-92)
  • CampaignMetrics (189-204)
  • Contract (208-224)
  • SponsorshipApplication (114-128)
Backend/app/schemas/schema.py (19)
  • BrandProfileCreate (61-67)
  • BrandProfileUpdate (69-74)
  • BrandProfileResponse (76-87)
  • CampaignMetricsCreate (91-97)
  • CampaignMetricsResponse (99-110)
  • ContractCreate (114-119)
  • ContractUpdate (121-123)
  • ContractResponse (125-135)
  • CreatorMatchResponse (139-147)
  • DashboardOverviewResponse (151-156)
  • CampaignAnalyticsResponse (158-166)
  • SponsorshipApplicationResponse (182-194)
  • ApplicationUpdateRequest (196-198)
  • ApplicationSummaryResponse (200-206)
  • PaymentResponse (210-222)
  • PaymentStatusUpdate (224-225)
  • PaymentAnalyticsResponse (227-233)
  • CampaignMetricsUpdate (237-242)
  • SponsorshipCreate (21-27)
Frontend/src/utils/supabase.tsx (1)
  • supabase (11-11)
🪛 Ruff (0.13.1)
Backend/app/services/ai_router.py

22-22: Avoid specifying long messages outside the exception class

(TRY003)


131-131: PEP 484 prohibits implicit Optional

Convert to Optional[T]

(RUF013)


165-165: Consider moving this statement to an else block

(TRY300)


167-167: Do not catch blind exception: Exception

(BLE001)


168-168: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


169-169: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

Backend/app/routes/ai_query.py

40-40: Abstract raise to an inner function

(TRY301)


138-138: Do not catch blind exception: Exception

(BLE001)


139-139: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


147-147: Parenthesize a and b expressions when chaining and and or together, to make the precedence clear

Parenthesize the and subexpression

(RUF021)


182-182: Consider moving this statement to an else block

(TRY300)


185-185: Do not catch blind exception: Exception

(BLE001)


186-186: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


187-187: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


200-200: Do not catch blind exception: Exception

(BLE001)


201-201: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


202-202: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


212-212: Abstract raise to an inner function

(TRY301)


214-217: Consider moving this statement to an else block

(TRY300)


220-220: Do not catch blind exception: Exception

(BLE001)


221-221: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


222-222: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


233-237: Consider moving this statement to an else block

(TRY300)


238-238: Do not catch blind exception: Exception

(BLE001)


239-239: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

Backend/app/routes/brand_dashboard.py

71-71: Consider moving this statement to an else block

(TRY300)


72-72: Do not catch blind exception: Exception

(BLE001)


73-73: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


74-74: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


79-79: Unused function argument: window_seconds

(ARG001)


117-117: Local variable profile is assigned to but never used

Remove assignment to unused variable profile

(F841)


159-159: Do not catch blind exception: Exception

(BLE001)


160-160: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


161-161: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


190-190: Abstract raise to an inner function

(TRY301)


192-192: Do not catch blind exception: Exception

(BLE001)


193-193: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


194-194: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


209-209: Abstract raise to an inner function

(TRY301)


213-213: Do not catch blind exception: Exception

(BLE001)


214-214: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


215-215: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


233-233: Abstract raise to an inner function

(TRY301)


237-237: Do not catch blind exception: Exception

(BLE001)


238-238: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


239-239: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


280-280: Abstract raise to an inner function

(TRY301)


284-284: Do not catch blind exception: Exception

(BLE001)


285-285: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


286-286: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


322-322: Abstract raise to an inner function

(TRY301)


326-326: Do not catch blind exception: Exception

(BLE001)


327-327: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


328-328: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


343-343: Abstract raise to an inner function

(TRY301)


350-350: Abstract raise to an inner function

(TRY301)


354-354: Do not catch blind exception: Exception

(BLE001)


355-355: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


356-356: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


370-370: Abstract raise to an inner function

(TRY301)


372-372: Local variable response is assigned to but never used

Remove assignment to unused variable response

(F841)


374-374: Consider moving this statement to an else block

(TRY300)


378-378: Do not catch blind exception: Exception

(BLE001)


379-379: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


380-380: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


411-411: Consider moving this statement to an else block

(TRY300)


413-413: Do not catch blind exception: Exception

(BLE001)


414-414: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


415-415: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


419-419: Unused function argument: brand_id

(ARG001)


420-420: Unused function argument: industry

(ARG001)


422-422: Unused function argument: location

(ARG001)


456-456: Consider moving this statement to an else block

(TRY300)


458-458: Do not catch blind exception: Exception

(BLE001)


459-459: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


460-460: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


465-465: Unused function argument: brand_id

(ARG001)


474-474: Abstract raise to an inner function

(TRY301)


489-494: Consider moving this statement to an else block

(TRY300)


498-498: Do not catch blind exception: Exception

(BLE001)


499-499: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


500-500: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


547-547: Consider moving this statement to an else block

(TRY300)


549-549: Do not catch blind exception: Exception

(BLE001)


550-550: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


551-551: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


582-582: Do not catch blind exception: Exception

(BLE001)


583-583: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


584-584: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


599-599: Consider moving this statement to an else block

(TRY300)


601-601: Do not catch blind exception: Exception

(BLE001)


602-602: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


603-603: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


627-627: Abstract raise to an inner function

(TRY301)


629-629: Do not catch blind exception: Exception

(BLE001)


630-630: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


631-631: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


646-646: Abstract raise to an inner function

(TRY301)


653-653: Abstract raise to an inner function

(TRY301)


657-657: Do not catch blind exception: Exception

(BLE001)


658-658: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


659-659: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


711-711: Consider moving this statement to an else block

(TRY300)


715-715: Do not catch blind exception: Exception

(BLE001)


716-716: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


717-717: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


735-735: Abstract raise to an inner function

(TRY301)


742-742: Abstract raise to an inner function

(TRY301)


757-757: Consider moving this statement to an else block

(TRY300)


761-761: Do not catch blind exception: Exception

(BLE001)


762-762: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


763-763: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


782-782: Abstract raise to an inner function

(TRY301)


787-787: Abstract raise to an inner function

(TRY301)


799-799: Abstract raise to an inner function

(TRY301)


803-803: Do not catch blind exception: Exception

(BLE001)


804-804: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


805-805: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


847-847: Do not catch blind exception: Exception

(BLE001)


848-848: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


849-849: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


890-890: Consider moving this statement to an else block

(TRY300)


894-894: Do not catch blind exception: Exception

(BLE001)


895-895: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


896-896: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


913-913: Abstract raise to an inner function

(TRY301)


931-931: Consider moving this statement to an else block

(TRY300)


935-935: Do not catch blind exception: Exception

(BLE001)


936-936: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


937-937: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


956-956: Abstract raise to an inner function

(TRY301)


964-964: Abstract raise to an inner function

(TRY301)


968-968: Do not catch blind exception: Exception

(BLE001)


969-969: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


970-970: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1010-1010: Do not catch blind exception: Exception

(BLE001)


1011-1011: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1012-1012: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1036-1036: Abstract raise to an inner function

(TRY301)


1058-1058: Abstract raise to an inner function

(TRY301)


1062-1062: Do not catch blind exception: Exception

(BLE001)


1063-1063: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1064-1064: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1082-1082: Abstract raise to an inner function

(TRY301)


1090-1090: Consider moving this statement to an else block

(TRY300)


1094-1094: Do not catch blind exception: Exception

(BLE001)


1095-1095: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1096-1096: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1117-1117: Abstract raise to an inner function

(TRY301)


1126-1126: Abstract raise to an inner function

(TRY301)


1130-1130: Do not catch blind exception: Exception

(BLE001)


1131-1131: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1132-1132: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (8)
Backend/app/models/models.py (1)

165-168: Summary incorrect – no duplicate class definitions found
Verified that BrandProfile, CampaignMetrics, Contract, and CreatorMatch each appear only once in Backend/app/models/models.py.

Likely an incorrect or invalid review comment.

Backend/app/main.py (2)

9-10: Routers wired correctly.

The new brand and AI routers are cleanly imported.


59-60: No route prefix collisions detected
ai_query_router (/api/ai) and ai.router endpoints (/api/trending-niches, /youtube/channel-info) use distinct paths; no action needed.

Backend/app/services/redis_client.py (1)

17-17: TTL constant is clear and sensible.

SESSION_TTL = 1800 is a good default for chat sessions.

Frontend/src/hooks/useBrandDashboard.ts (1)

23-23: Approve brandId mapping. The backend’s brand_id parameter refers to the Brand user’s UUID (current_user_id), so using user?.id here is correct.

Backend/sql.txt (1)

47-89: Ensure FK types match users.id and sponsorships.id.
The DDL for those tables isn't present here—verify their id column types. If they're defined as UUID, change the id and FK columns in brand_profiles, campaign_metrics, contracts, and creator_matches from VARCHAR to UUID (with the primary keys defaulting to gen_random_uuid()) to avoid implicit casts.
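For illustration, a minimal DDL sketch of the UUID-typed version of one of these tables, assuming users.id is itself UUID; column names mirror the BrandProfile model quoted later in this review, so adjust to the real sql.txt:

-- Sketch only; gen_random_uuid() assumes Postgres 13+ (or the pgcrypto extension).
CREATE TABLE IF NOT EXISTS brand_profiles (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    company_name TEXT,
    website TEXT,
    industry TEXT,
    contact_person TEXT,
    contact_email TEXT,
    created_at TIMESTAMPTZ DEFAULT now()
);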

Frontend/src/services/brandApi.ts (1)

227-236: Backend endpoints verified
All referenced routes (PUT /payments/{payment_id}/status, GET /payments/analytics, PUT /applications/{application_id}) accept brand_id as a query parameter and correspond to the frontend calls.

Frontend/src/components/chat/BrandChatAssistant.tsx (1)

41-44: Confirm auth mechanism for API calls
I didn’t find any existing credentials or Authorization usage in this component or elsewhere. Please confirm whether your API relies on cookie-based sessions (add credentials: 'include') or bearer tokens (add an Authorization: Bearer <token> header), and update this fetch call accordingly.
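For reference, a hedged sketch of both options; API_BASE_URL, payload and getAccessToken() are placeholders for whatever this component actually uses:

// Sketch only: these declarations stand in for the component's real values.
declare const API_BASE_URL: string;
declare const payload: unknown;
declare function getAccessToken(): Promise<string>;

// Option A: cookie-based session — let the browser attach the HttpOnly auth cookie.
await fetch(`${API_BASE_URL}/api/ai/query`, {
  method: 'POST',
  credentials: 'include',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
});

// Option B: bearer token — attach the access token explicitly.
const token = await getAccessToken();
await fetch(`${API_BASE_URL}/api/ai/query`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
  body: JSON.stringify(payload),
});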

Comment on lines +169 to +186
# Brand Profile Table (Extended brand information)
class BrandProfile(Base):
    __tablename__ = "brand_profiles"

    id = Column(String, primary_key=True, default=generate_uuid)
    user_id = Column(String, ForeignKey("users.id"), nullable=False)
    company_name = Column(String, nullable=True)
    website = Column(String, nullable=True)
    industry = Column(String, nullable=True)
    contact_person = Column(String, nullable=True)
    contact_email = Column(String, nullable=True)
    created_at = Column(
        DateTime(timezone=True), default=lambda: datetime.now(timezone.utc)
    )

    # Relationships
    user = relationship("User", backref="brand_profile")


⚠️ Potential issue | 🟠 Major

Enforce one-to-one BrandProfile ↔ User and add DB uniqueness

If each brand should have a single profile, enforce it in the schema to avoid data anomalies.

Apply this diff to the class body:

 class BrandProfile(Base):
   __tablename__ = "brand_profiles"
+  __table_args__ = (
+      UniqueConstraint("user_id", name="uq_brand_profiles_user_id"),
+  )
@@
-  # Relationships
-  user = relationship("User", backref="brand_profile")
+  # Relationships
+  user = relationship("User", backref=backref("brand_profile", uselist=False))

Outside this hunk, add the imports:

from sqlalchemy.orm import backref
from sqlalchemy import UniqueConstraint
🤖 Prompt for AI Agents
In Backend/app/models/models.py around lines 169 to 186, the BrandProfile→User
relation isn't enforced as one-to-one and lacks a DB uniqueness constraint; add
a UniqueConstraint on user_id and change the relationship backref to use
backref("brand_profile", uselist=False) so SQLAlchemy treats it as one-to-one.
Also add the imports `from sqlalchemy.orm import backref` and `from sqlalchemy
import UniqueConstraint` at the top, and add __table_args__ =
(UniqueConstraint("user_id", name="uq_brand_profiles_user_id"),) inside the
BrandProfile class body.

Comment on lines +207 to +225
# Contracts Table (Contract management)
class Contract(Base):
    __tablename__ = "contracts"

    id = Column(String, primary_key=True, default=generate_uuid)
    sponsorship_id = Column(String, ForeignKey("sponsorships.id"), nullable=False)
    creator_id = Column(String, ForeignKey("users.id"), nullable=False)
    brand_id = Column(String, ForeignKey("users.id"), nullable=False)
    contract_url = Column(String, nullable=True)
    status = Column(String, default="draft")  # draft, signed, completed, cancelled
    created_at = Column(
        DateTime(timezone=True), default=lambda: datetime.now(timezone.utc)
    )

    # Relationships
    sponsorship = relationship("Sponsorship", backref="contracts")
    creator = relationship("User", foreign_keys=[creator_id], backref="creator_contracts")
    brand = relationship("User", foreign_keys=[brand_id], backref="brand_contracts")


⚠️ Potential issue | 🟠 Major

Contracts: constrain status and prevent duplicates per sponsorship+creator

The current free-form status invites invalid values, and duplicates per (sponsorship, creator) are possible.

Apply this diff:

 class Contract(Base):
   __tablename__ = "contracts"
+  __table_args__ = (
+      UniqueConstraint("sponsorship_id", "creator_id", name="uq_contract_sponsorship_creator"),
+  )
@@
-  status = Column(String, default="draft")  # draft, signed, completed, cancelled
+  status = Column(
+      Enum("draft", "signed", "completed", "cancelled", name="contract_status"),
+      default="draft",
+      nullable=False,
+  )

Outside this hunk, add imports:

from sqlalchemy import UniqueConstraint, Enum

Operational note:

  • Consider ondelete behaviors for FKs (e.g., cascade/RESTRICT) to avoid orphans.
🤖 Prompt for AI Agents
In Backend/app/models/models.py around lines 207 to 225, the Contract model
currently allows free-form status values and permits duplicate contracts for the
same sponsorship+creator; add a SQLAlchemy Enum type for the status column
(restricting values to e.g., "draft","signed","completed","cancelled") and
replace the status Column to use that Enum (with an appropriate default), add a
UniqueConstraint on (sponsorship_id, creator_id) to prevent duplicates, and
import UniqueConstraint and Enum at the top of the file; also adjust the
ForeignKey definitions to include explicit ondelete behaviors (e.g.,
ondelete="CASCADE" or "RESTRICT") as appropriate to avoid orphaned records.

Comment on lines +228 to +241
class CreatorMatch(Base):
    __tablename__ = "creator_matches"

    id = Column(String, primary_key=True, default=generate_uuid)
    brand_id = Column(String, ForeignKey("users.id"), nullable=False)
    creator_id = Column(String, ForeignKey("users.id"), nullable=False)
    match_score = Column(Float, nullable=True)
    matched_at = Column(
        DateTime(timezone=True), default=lambda: datetime.now(timezone.utc)
    )

    # Relationships
    brand = relationship("User", foreign_keys=[brand_id], backref="creator_matches")
    creator = relationship("User", foreign_keys=[creator_id], backref="brand_matches")

🛠️ Refactor suggestion | 🟠 Major

CreatorMatch: enforce uniqueness and score bounds; add query index

Avoid duplicate matches and keep score sane.

Apply this diff:

 class CreatorMatch(Base):
   __tablename__ = "creator_matches"
+  __table_args__ = (
+      UniqueConstraint("brand_id", "creator_id", name="uq_creator_matches_pair"),
+      CheckConstraint("match_score >= 0 AND match_score <= 100", name="ck_match_score_0_100"),
+      Index("ix_creator_matches_brand_rank", "brand_id", "match_score", "matched_at"),
+  )

Outside this hunk, add imports:

from sqlalchemy import UniqueConstraint, CheckConstraint, Index
🤖 Prompt for AI Agents
In Backend/app/models/models.py around lines 228 to 241, the CreatorMatch model
currently allows duplicate brand-creator pairs and unconstrained match_score;
add a UniqueConstraint on (brand_id, creator_id) to prevent duplicates, a
CheckConstraint to ensure match_score is between 0 and 1 (or non-null bounds as
desired), and an Index on (brand_id, creator_id) to speed lookups; also add the
required imports at the top of the file: UniqueConstraint, CheckConstraint, and
Index from sqlalchemy.

Comment on lines +22 to +26
class AIQueryResponse(BaseModel):
    intent: str
    route: Optional[str] = None
    parameters: Dict[str, Any] = {}
    follow_up_needed: bool = False

⚠️ Potential issue | 🟠 Major

Use Field(default_factory=dict) for mutable default.

Avoid shared default dict across instances.

-from pydantic import BaseModel
+from pydantic import BaseModel, Field
...
 class AIQueryResponse(BaseModel):
     intent: str
     route: Optional[str] = None
-    parameters: Dict[str, Any] = {}
+    parameters: Dict[str, Any] = Field(default_factory=dict)
🤖 Prompt for AI Agents
In Backend/app/routes/ai_query.py around lines 22 to 26, the Pydantic model uses
a mutable default dict for the parameters field which can be shared across
instances; change the field to use Field(default_factory=dict) (i.e.,
parameters: Dict[str, Any] = Field(default_factory=dict)) and add the necessary
import from pydantic (Field) at the top of the file so each model instance gets
its own empty dict.

Comment on lines +91 to +95
            for param in intent_param_map[intent]["required"]:
                if not params.get(param):
                    all_params_present = False
                    missing_params.append(param)


⚠️ Potential issue | 🟠 Major

Required‑param check treats falsy values as missing.

Use is None to avoid misclassifying valid falsy inputs (e.g., 0).

-            for param in intent_param_map[intent]["required"]:
-                if not params.get(param):
-                    all_params_present = False
-                    missing_params.append(param)
+            for param in intent_param_map[intent]["required"]:
+                if params.get(param) is None:
+                    all_params_present = False
+                    missing_params.append(param)
🤖 Prompt for AI Agents
In Backend/app/routes/ai_query.py around lines 91 to 95, the required-param
check treats any falsy value (like 0 or empty string) as missing because it uses
a truthiness test; change the check to explicitly test for None (e.g., use
params.get(param) is None or param not in params or params[param] is None) so
valid falsy inputs are accepted while truly absent parameters are flagged, and
update missing_params/all_params_present logic accordingly.

Comment on lines 172 to 176
const params = new URLSearchParams({ brand_id: brandId });
if (filters?.industry) params.append('industry', filters.industry);
if (filters?.min_engagement) params.append('min_engagement', filters.min_engagement.toString());
if (filters?.location) params.append('location', filters.location);


⚠️ Potential issue | 🟠 Major

Allow min_engagement = 0 in filters.

if (filters?.min_engagement) skips legitimate 0. Use nullish check.

-    if (filters?.min_engagement) params.append('min_engagement', filters.min_engagement.toString());
+    if (filters?.min_engagement != null) params.append('min_engagement', filters.min_engagement.toString());
🤖 Prompt for AI Agents
In Frontend/src/services/brandApi.ts around lines 172 to 176, the current truthy
check `if (filters?.min_engagement)` incorrectly skips legitimate zero values;
change the condition to check for null/undefined (e.g., `if
(filters?.min_engagement != null)`) so that 0 is accepted, then append the value
with toString() as before.

Comment on lines 202 to 212
  async updateApplicationStatus(
    applicationId: string,
    status: string,
    notes?: string,
    brandId?: string
  ): Promise<Application> {
    return this.makeRequest<Application>(`/applications/${applicationId}?brand_id=${brandId}`, {
      method: 'PUT',
      body: JSON.stringify({ status, notes }),
    });
  }

⚠️ Potential issue | 🔴 Critical

brandId is optional but required in URL — avoid sending brand_id=undefined.

With brandId?: string, the URL becomes ...?brand_id=undefined when omitted, violating backend UUID validation. Make it required.

-  async updateApplicationStatus(
-    applicationId: string,
-    status: string,
-    notes?: string,
-    brandId?: string
-  ): Promise<Application> {
+  async updateApplicationStatus(
+    applicationId: string,
+    status: string,
+    notes: string | undefined,
+    brandId: string
+  ): Promise<Application> {

Optionally, guard at runtime:

-    return this.makeRequest<Application>(`/applications/${applicationId}?brand_id=${brandId}`, {
+    if (!brandId) throw new Error('brandId is required');
+    return this.makeRequest<Application>(`/applications/${applicationId}?brand_id=${brandId}`, {
🤖 Prompt for AI Agents
In Frontend/src/services/brandApi.ts around lines 202 to 212, the optional
brandId is being interpolated into the query string which yields
brand_id=undefined when omitted; either make brandId required in the method
signature or (preferably) change the URL construction to only append the
brand_id query parameter when brandId is provided (e.g., build the URL with
conditional logic or URLSearchParams) and validate/guard that brandId is a
non-empty UUID at runtime (throw or reject if missing/invalid when the backend
requires it).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 94

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
Backend/app/security/auth.py (1)

1-41: Enforce non-blocking I/O, defer env checks, and use project API key for Supabase.

  • Wrap the requests.get call in run_in_threadpool to prevent blocking the async event loop.
  • Defer loading and validation of SUPABASE_URL (and raise an HTTPException if unset) inside get_current_user rather than at import time.
  • Provide the Supabase anon (publishable) key in the apikey header and reserve the JWT exclusively for the Authorization: Bearer <token> header.
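A minimal sketch of that shape, assuming the Supabase GoTrue /auth/v1/user endpoint and a SUPABASE_ANON_KEY env var; names may differ in the actual auth module:

import os

import requests
from fastapi import Depends, HTTPException
from fastapi.concurrency import run_in_threadpool
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer_scheme = HTTPBearer()


async def get_current_user(
    credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
) -> dict:
    supabase_url = os.getenv("SUPABASE_URL")
    anon_key = os.getenv("SUPABASE_ANON_KEY")
    if not supabase_url or not anon_key:
        # Env problems surface as a 500 at request time instead of crashing at import.
        raise HTTPException(status_code=500, detail="Supabase is not configured")

    def fetch_user() -> requests.Response:
        return requests.get(
            f"{supabase_url}/auth/v1/user",
            headers={
                "apikey": anon_key,  # project key here...
                "Authorization": f"Bearer {credentials.credentials}",  # ...JWT only here
            },
            timeout=10,
        )

    resp = await run_in_threadpool(fetch_user)  # keep the event loop unblocked
    if resp.status_code != 200:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return resp.json()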
Backend/app/routes/user_profile.py (1)

75-115: Fix public URL extraction; drop unused var; add size cap and stricter validation.

Supabase Python SDK get_public_url typically returns a dict; currently you write that dict into profile_image_url. Also remove the unused upload result and reject oversized files.

 @router.post("/profile/image")
 def upload_profile_image(file: UploadFile = File(...), current_user: dict = Depends(get_current_user)):
     try:
         # Basic validation
         content_type = file.content_type or ""
         if not content_type.startswith("image/"):
             raise HTTPException(status_code=400, detail="Only image uploads are allowed")
 
         # Create a unique path
         user_id = current_user.get("id")
         ext = os.path.splitext(file.filename or "")[1] or ".jpg"
         path = f"avatars/{user_id}/{uuid.uuid4().hex}{ext}"
 
-        # Read file bytes
-        data = file.file.read()
+        # Read file bytes with a hard limit (8 MB)
+        max_bytes = 8 * 1024 * 1024
+        data = file.file.read(max_bytes + 1)
+        if len(data) > max_bytes:
+            raise HTTPException(status_code=413, detail="File too large (max 8MB)")
 
         # Upload to Supabase storage (bucket must exist)
         storage = supabase.storage()
         bucket = storage.from_("public")  # assuming a 'public' bucket
         # Upsert to replace existing avatar path if same name re-used
-        upload_res = bucket.upload(path, data, {
+        bucket.upload(path, data, {
             "contentType": content_type or "image/jpeg",
             "upsert": True,
         })
 
         # Get a public URL
-        public_url = bucket.get_public_url(path)
+        res = bucket.get_public_url(path)
+        public_url = (res.get("data") or {}).get("publicUrl") if isinstance(res, dict) else str(res)
+        if not public_url:
+            raise HTTPException(status_code=500, detail="Failed to resolve public URL")
 
         # Update user's profile_image_url
         update_res = supabase.table("users").update({"profile_image_url": public_url}).eq("id", user_id).execute()
         if not update_res.data:
             raise HTTPException(status_code=400, detail="Failed to update user with image URL")

Optional hardening:

  • Whitelist extensions by content_type (jpg/png/webp/svg) and normalize to .jpg/.png.
  • Consider async def and await file.read() for non-blocking I/O.
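A small sketch of that hardening, with illustrative names rather than the file's actual helpers:

from fastapi import HTTPException, UploadFile

# Whitelist of accepted content types, normalized to an extension.
ALLOWED_IMAGE_TYPES = {
    "image/jpeg": ".jpg",
    "image/png": ".png",
    "image/webp": ".webp",
    "image/svg+xml": ".svg",
}
MAX_IMAGE_BYTES = 8 * 1024 * 1024


async def read_validated_image(file: UploadFile) -> tuple[bytes, str]:
    ext = ALLOWED_IMAGE_TYPES.get(file.content_type or "")
    if ext is None:
        raise HTTPException(status_code=400, detail="Unsupported image type")
    data = await file.read(MAX_IMAGE_BYTES + 1)  # non-blocking read with a hard cap
    if len(data) > MAX_IMAGE_BYTES:
        raise HTTPException(status_code=413, detail="File too large (max 8MB)")
    return data, ext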

Comment on lines +111 to +138
if intent == "creator_search":
from ..routes.brand_dashboard import search_creators
api_result = await search_creators(**api_args)
elif intent == "dashboard_overview":
from ..routes.brand_dashboard import get_dashboard_overview
api_result = await get_dashboard_overview(**api_args)
elif intent == "creator_matches":
from ..routes.brand_dashboard import get_creator_matches
api_result = await get_creator_matches(**api_args)
elif intent == "brand_profile":
from ..routes.brand_dashboard import get_brand_profile
api_result = await get_brand_profile(**api_args)
elif intent == "campaigns":
from ..routes.brand_dashboard import get_brand_campaigns as get_campaigns
api_result = await get_campaigns(**api_args)
elif intent == "creator_profile":
from ..routes.brand_dashboard import get_creator_profile
api_result = await get_creator_profile(**api_args)
elif intent == "analytics_performance":
from ..routes.brand_dashboard import get_campaign_performance
api_result = await get_campaign_performance(**api_args)
elif intent == "analytics_revenue":
from ..routes.brand_dashboard import get_revenue_analytics
api_result = await get_revenue_analytics(**api_args)
elif intent == "contracts":
from ..routes.brand_dashboard import get_brand_contracts as get_contracts
api_result = await get_contracts(**api_args)
except Exception as api_exc:

⚠️ Potential issue | 🔴 Critical

Do not call FastAPI route handlers directly; they rely on DI (Depends) and will error with unexpected kwargs.

Handlers like get_dashboard_overview expect current_user: User = Depends(...). Direct calls with **api_args will raise TypeError and bypass auth. Extract shared business logic into services and import/call those instead, or invoke via HTTP with proper auth/session.

Suggested approach (high-level):

  • Move logic from route handlers into functions/services that accept explicit params (e.g., brand_id), called by both the route and this AI layer.
  • Keep routes thin, only handling DI and response shaping.

I can draft a refactor plan or patches for specific endpoints if you confirm target service boundaries.
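To make the intended split concrete, a hedged sketch with hypothetical names (not the PR's actual functions):

from fastapi import APIRouter, Depends

router = APIRouter(prefix="/api/brand", tags=["brand-dashboard"])


async def get_current_user_stub() -> dict:
    # Stand-in for the real auth dependency in this sketch only.
    return {"id": "demo-brand-id"}


# Service layer: explicit parameters, no Depends(), reusable from routes and the AI layer.
async def get_dashboard_overview_data(brand_id: str) -> dict:
    # ...query campaigns, payments and metrics for brand_id, then shape the payload
    return {"brand_id": brand_id, "active_campaigns": 0, "total_spend": 0}


# The route stays a thin DI wrapper around the service.
@router.get("/dashboard/overview")
async def get_dashboard_overview(current_user: dict = Depends(get_current_user_stub)):
    return await get_dashboard_overview_data(brand_id=current_user["id"])


# What the AI query layer would call instead of the FastAPI handler.
async def handle_dashboard_intent(api_args: dict) -> dict:
    return await get_dashboard_overview_data(brand_id=api_args["brand_id"])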

🧰 Tools
🪛 Ruff (0.13.1)

138-138: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In Backend/app/routes/ai_query.py around lines 111-138, the code directly calls
FastAPI route handlers (e.g., get_dashboard_overview) with **api_args which will
break because those handlers use DI (Depends) and auth; instead extract the
underlying business logic into standalone service functions that accept explicit
parameters (e.g., brand_id, filters) and return serializable results, update
each route handler to call the new service and keep DI only in the route, then
import and call those service functions from this AI layer; ensure the service
functions perform auth checks or accept an authenticated user parameter,
preserve the same return shape or adapt ai_query response handling, and
add/adjust tests and error handling accordingly.

Comment on lines +156 to +175
        session_id = http_request.headers.get("X-Session-ID")
        if not session_id and request.context:
            session_id = request.context.get("session_id")
        if not session_id:
            session_id = str(uuid.uuid4())

        # 2. Load previous state from Redis
        state = await get_session_state(session_id)
        prev_params = state.get("params", {})
        prev_intent = state.get("intent")

        # 3. Merge new params and intent
        # Use new intent if present, else previous
        intent = result.get("route") or prev_intent
        params = {**prev_params, **result.get("parameters", {})}
        state["params"] = params
        state["intent"] = intent

        # 4. Save updated state to Redis
        await save_session_state(session_id, state)

⚠️ Potential issue | 🟡 Minor

Ensure session state merge handles non-dict parameters robustly.

If the LLM returns a non-dict parameters, {**prev_params, **result.get("parameters", {})} will raise. Guard with isinstance(..., dict).

-        params = {**prev_params, **result.get("parameters", {})}
+        new_params = result.get("parameters", {}) or {}
+        if not isinstance(new_params, dict):
+            new_params = {}
+        params = {**prev_params, **new_params}
🤖 Prompt for AI Agents
In Backend/app/routes/ai_query.py around lines 156 to 175, the merge of session
params assumes result.get("parameters") is a dict and will crash if it's not;
guard by ensuring both prev_params and incoming parameters are dicts (e.g., if
not isinstance(prev_params, dict) set prev_params = {} and if not
isinstance(result.get("parameters"), dict) set incoming_params = {}), then merge
using {**prev_params, **incoming_params}; keep the intent selection logic the
same and continue to update state["params"] and state["intent"] before saving.

Comment on lines 48 to 54
@router.post("/create", response_model=ExportResponse)
async def create_export(
export_config: ExportConfigRequest,
background_tasks: BackgroundTasks,
user_id: str = "test_user", # TODO: Get from authentication
db: Session = Depends(get_db)
):

⚠️ Potential issue | 🔴 Critical

Blocker: IDOR — don’t accept user_id from the client; derive from auth.

All endpoints expose user_id as a query param, enabling users to act on others’ exports. Use get_current_user and current_user.id.

-from fastapi import APIRouter, Depends, HTTPException, BackgroundTasks
+from fastapi import APIRouter, Depends, HTTPException, BackgroundTasks, Path
@@
-from app.db.db import get_db
+from app.db.db import get_sync_db  # Use sync DB to match service (see comment below)
 from app.services.export_service import ExportService
 from app.services.export_job_service import job_queue
-from app.models.models import ExportRequest
+from app.models.models import ExportRequest, User
+from app.security.auth import get_current_user  # adjust to your canonical auth module
@@
-async def create_export(
-    export_config: ExportConfigRequest,
-    background_tasks: BackgroundTasks,
-    user_id: str = "test_user",  # TODO: Get from authentication
-    db: Session = Depends(get_db)
-):
+async def create_export(
+    export_config: ExportConfigRequest,
+    background_tasks: BackgroundTasks,
+    current_user: User = Depends(get_current_user),
+    db: Session = Depends(get_sync_db),
+):
@@
-        export_id = export_service.create_export_request(
-            db, 
-            user_id, 
-            export_config.dict()
-        )
+        export_id = export_service.create_export_request(
+            db,
+            current_user.id,
+            export_config.model_dump(),
+        )
@@
-async def get_export_status(
-    export_id: str,
-    user_id: str = "test_user",  # TODO: Get from authentication
-    db: Session = Depends(get_db)
-):
+async def get_export_status(
+    export_id: str,
+    current_user: User = Depends(get_current_user),
+    db: Session = Depends(get_sync_db),
+):
@@
-        status = export_service.get_export_status(db, export_id, user_id)
+        status = export_service.get_export_status(db, export_id, current_user.id)
@@
-async def download_export_file(
-    filename: str,
-    user_id: str = "test_user",  # TODO: Get from authentication
-    db: Session = Depends(get_db)
-):
+async def download_export_file(
+    filename: str = Path(..., pattern=r'^[A-Za-z0-9._-]{1,200}$'),
+    current_user: User = Depends(get_current_user),
+):
@@
-        parts = filename.split('_')
+        from os.path import basename
+        filename = basename(filename)  # prevent path traversal
+        parts = filename.split('_')
@@
-        file_user_id = parts[1]
-        if file_user_id != user_id:
+        file_user_id = parts[1]
+        if file_user_id != current_user.id:
             raise HTTPException(status_code=403, detail="Access denied")
@@
-async def get_user_exports(
-    user_id: str = "test_user",  # TODO: Get from authentication
-    db: Session = Depends(get_db)
-):
+async def get_user_exports(
+    current_user: User = Depends(get_current_user),
+    db: Session = Depends(get_sync_db),
+):
@@
-        exports = db.query(ExportRequest).filter(
-            ExportRequest.user_id == user_id
+        exports = db.query(ExportRequest).filter(
+            ExportRequest.user_id == current_user.id
         ).order_by(ExportRequest.created_at.desc()).limit(50).all()
@@
-async def delete_export(
-    export_id: str,
-    user_id: str = "test_user",  # TODO: Get from authentication
-    db: Session = Depends(get_db)
-):
+async def delete_export(
+    export_id: str,
+    current_user: User = Depends(get_current_user),
+    db: Session = Depends(get_sync_db),
+):
@@
-        export_request = db.query(ExportRequest).filter(
-            ExportRequest.id == export_id,
-            ExportRequest.user_id == user_id
+        export_request = db.query(ExportRequest).filter(
+            ExportRequest.id == export_id,
+            ExportRequest.user_id == current_user.id
         ).first()

Also applies to: 88-92, 109-113, 149-152, 190-194

🧰 Tools
🪛 Ruff (0.13.1)

53-53: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

🤖 Prompt for AI Agents
In Backend/app/routes/export_routes.py around lines 48-54 (and similarly handle
the occurrences at 88-92, 109-113, 149-152, 190-194), do not accept user_id from
the client; instead inject the authenticated user via the existing
get_current_user dependency and use current_user.id for all user-scoped
operations. Remove the user_id parameter from the function signature and replace
any usage with current_user.id (ensure you add the dependency: current_user:
User = Depends(get_current_user) where User is your auth model), and update any
route calls/logic accordingly so no endpoint reads user_id from query/body.
Ensure tests and callers are adjusted to rely on auth context rather than
passing user_id.

Comment on lines 5 to 16
import asyncio
import logging
from typing import Dict, Any, List
from datetime import datetime, timedelta
from sqlalchemy.orm import Session
from sqlalchemy import and_

from ..db.db import get_db
from ..models.models import AlertConfig, CampaignMetrics, Sponsorship
from .alert_service import AlertService
from .roi_service import ROIService


🛠️ Refactor suggestion | 🟠 Major

Use AsyncSession and select(); avoid importing unused Session

Switch to AsyncSession and SQLAlchemy 2.0 select API; import AsyncSessionLocal for proper session management.

-import asyncio
-import logging
-from typing import Dict, Any, List
-from datetime import datetime, timedelta
-from sqlalchemy.orm import Session
-from sqlalchemy import and_
-
-from ..db.db import get_db
+import asyncio
+import logging
+from typing import Dict, Any, List
+from datetime import datetime, timedelta
+from sqlalchemy.ext.asyncio import AsyncSession
+from sqlalchemy import and_, select
+
+from ..db.db import get_db, AsyncSessionLocal
🤖 Prompt for AI Agents
In Backend/app/services/alert_monitoring_service.py around lines 5 to 16, the
module currently imports a synchronous Session and uses older ORM patterns;
update imports and DB usage to use AsyncSession and SQLAlchemy 2.0 select() API
and remove the unused Session import. Replace Session import with AsyncSession
(and import AsyncSessionLocal from your db module), change any synchronous query
code to use async select() calls with await and the AsyncSession context (async
with AsyncSessionLocal() as session: ...), and update any helper functions to
accept an AsyncSession where needed so session management is correct and
non-blocking.

Comment on lines 59 to 71
    async def _check_all_alerts(self):
        """Check all active alerts and trigger notifications if needed."""
        db = None
        try:
            db = next(get_db())

            # Get all active alerts
            active_alerts = db.query(AlertConfig).filter(
                AlertConfig.is_active == True
            ).all()

            logger.info(f"Checking {len(active_alerts)} active alerts")


⚠️ Potential issue | 🔴 Critical

Acquire async DB session correctly and use SQLAlchemy 2.0 select()

next(get_db()) is invalid for an async dependency; use AsyncSessionLocal and select(AlertConfig).where(AlertConfig.is_active).

-        db = None
-        try:
-            db = next(get_db())
-            
-            # Get all active alerts
-            active_alerts = db.query(AlertConfig).filter(
-                AlertConfig.is_active == True
-            ).all()
+        try:
+            async with AsyncSessionLocal() as db:
+                # Get all active alerts
+                result = await db.execute(
+                    select(AlertConfig).where(AlertConfig.is_active)
+                )
+                active_alerts = result.scalars().all()
@@
-        finally:
-            if db:
-                db.close()
+        finally:
+            pass

Also applies to: 86-91

🧰 Tools
🪛 Ruff (0.13.1)

67-67: Avoid equality comparisons to True; use AlertConfig.is_active: for truth checks

Replace with AlertConfig.is_active

(E712)

🤖 Prompt for AI Agents
In Backend/app/services/alert_monitoring_service.py around lines 59-71 (and
similarly at 86-91) you are incorrectly calling next(get_db()) for an async DB
dependency and using the old query API; replace this with an async session
usage: obtain an AsyncSession from AsyncSessionLocal (or the proper async
session factory) via an async with block, use SQLAlchemy 2.0
select(AlertConfig).where(AlertConfig.is_active) and await session.execute(...)
then use result.scalars().all() to get active_alerts; ensure all DB calls use
await and the async session is closed by the context manager.

Comment on lines 14 to 19
from app.models.models import (
    CampaignMetrics,
    Sponsorship,
    SponsorshipPayment,
    User
)

⚠️ Potential issue | 🟠 Major

Model-field mismatch: CampaignMetrics likely needs a reach column

Tests and ROIService aggregate reach, but CampaignMetrics snippet shows no reach. This will break real queries and mocks with spec_set=True.

  • Add reach = Column(Integer, nullable=True) to CampaignMetrics and include in migrations; or
  • Stop relying on reach and derive from other sources consistently.

I can draft the migration if you confirm preferred direction.
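If the column is added, the model gains reach = Column(Integer, nullable=True), and a migration along these lines would accompany it (a sketch assuming Alembic manages the schema; revision identifiers are placeholders):

from alembic import op
import sqlalchemy as sa

# Placeholder revision identifiers for this sketch.
revision = "add_reach_to_campaign_metrics"
down_revision = None


def upgrade() -> None:
    op.add_column("campaign_metrics", sa.Column("reach", sa.Integer(), nullable=True))


def downgrade() -> None:
    op.drop_column("campaign_metrics", "reach")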

🤖 Prompt for AI Agents
In Backend/test_roi_service.py around lines 14 to 19, tests and ROIService
expect a CampaignMetrics model field named `reach` but the CampaignMetrics model
lacks this column; add `reach = Column(Integer, nullable=True)` to the
CampaignMetrics model definition and create a corresponding DB migration (or
alternatively update the ROIService/tests to stop referencing `reach` and derive
it consistently elsewhere) so code and schema align; confirm preferred approach
and, if adding the column, update any factory/mocks to include `reach` to avoid
spec_set=True failures.

Comment on lines 53 to 54
- For brand users: `http://localhost:5173/brand/dashboard`
- For creators/default users: `http://localhost:5173/dashboard`

⚠️ Potential issue | 🟡 Minor

Replace hard tabs with spaces (MD010)

Fix markdownlint errors by replacing tabs before the list items with spaces.

-	- For brand users: `http://localhost:5173/brand/dashboard`
-	- For creators/default users: `http://localhost:5173/dashboard`
+  - For brand users: `http://localhost:5173/brand/dashboard`
+  - For creators/default users: `http://localhost:5173/dashboard`
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

53-53: Hard tabs
Column: 1

(MD010, no-hard-tabs)


54-54: Hard tabs
Column: 1

(MD010, no-hard-tabs)

🤖 Prompt for AI Agents
In Frontend/README-INTEGRATION.md around lines 53 to 54, the list items use hard
tab characters before the hyphens which triggers MD010; replace each leading tab
with spaces (preferably two or four spaces as per project style) so the list
items start with spaces followed by the hyphen and content, and then run a
markdownlint check to confirm the MD010 warning is resolved.

Comment on lines 227 to 279
it('should display workflow status when workflows are active', async () => {
// Mock active workflows
vi.doMock('../../hooks/useIntegration', () => ({
useIntegration: () => ({
workflows: [
{
id: 'workflow-1',
name: 'Content Linking',
status: 'running',
steps: [
{ id: 'step-1', name: 'Validate URL', status: 'completed', action: vi.fn() },
{ id: 'step-2', name: 'Link Content', status: 'running', action: vi.fn() }
]
}
],
activeWorkflows: [
{
id: 'workflow-1',
name: 'Content Linking',
status: 'running',
steps: [
{ id: 'step-1', name: 'Validate URL', status: 'completed', action: vi.fn() },
{ id: 'step-2', name: 'Link Content', status: 'running', action: vi.fn() }
]
}
],
getWorkflowStatus: vi.fn(),
cancelWorkflow: vi.fn(),
refreshWorkflows: vi.fn(),
isExecuting: false,
error: null,
executeBrandOnboarding: vi.fn(),
executeContentLinking: vi.fn(),
executeExport: vi.fn(),
executeAlertSetup: vi.fn(),
clearError: vi.fn()
})
}));

render(
<TestWrapper>
<Analytics />
</TestWrapper>
);

await waitFor(() => {
expect(screen.getByText('Brand Analytics & Tracking')).toBeInTheDocument();
});

// The component should handle active workflows (even if not displayed due to mocking)
expect(screen.getByText('Brand Analytics & Tracking')).toBeInTheDocument();
});
});

⚠️ Potential issue | 🟠 Major

Late vi.doMock won’t affect already-imported Analytics

Analytics is imported at module scope, so the vi.doMock('../../hooks/useIntegration', ...) inside the test won’t rewire its dependency. Re-import the component after resetting modules, or move the mock to the top before importing Analytics.

-    it('should display workflow status when workflows are active', async () => {
-      // Mock active workflows
-      vi.doMock('../../hooks/useIntegration', () => ({
+    it('should display workflow status when workflows are active', async () => {
+      // Rewire the hook before (re)importing the component
+      vi.resetModules();
+      vi.doMock('../../hooks/useIntegration', () => ({
         useIntegration: () => ({
           workflows: [
             {
               id: 'workflow-1',
               name: 'Content Linking',
               status: 'running',
               steps: [
                 { id: 'step-1', name: 'Validate URL', status: 'completed', action: vi.fn() },
                 { id: 'step-2', name: 'Link Content', status: 'running', action: vi.fn() }
               ]
             }
           ],
           activeWorkflows: [
             {
               id: 'workflow-1',
               name: 'Content Linking',
               status: 'running',
               steps: [
                 { id: 'step-1', name: 'Validate URL', status: 'completed', action: vi.fn() },
                 { id: 'step-2', name: 'Link Content', status: 'running', action: vi.fn() }
               ]
             }
           ],
           getWorkflowStatus: vi.fn(),
           cancelWorkflow: vi.fn(),
           refreshWorkflows: vi.fn(),
           isExecuting: false,
           error: null,
           executeBrandOnboarding: vi.fn(),
           executeContentLinking: vi.fn(),
           executeExport: vi.fn(),
           executeAlertSetup: vi.fn(),
           clearError: vi.fn()
         })
-      }));
-
-      render(
-        <TestWrapper>
-          <Analytics />
-        </TestWrapper>
-      );
+      }));
+
+      const { default: AnalyticsWithMock } = await import('../../pages/Analytics');
+      render(
+        <TestWrapper>
+          <AnalyticsWithMock />
+        </TestWrapper>
+      );

Comment on lines 14 to 19
beforeEach(() => {
vi.clearAllMocks();
localStorage.clear();
localStorage.setItem('token', 'test-token');
localStorage.setItem('userId', 'test-user-id');
});

⚠️ Potential issue | 🟡 Minor

Token in localStorage: keep this test-only, avoid coupling prod code to it

Fine for tests, but ensure production auth does not depend on localStorage tokens; prefer HttpOnly cookies or in‑memory tokens to mitigate XSS.

🧰 Tools
🪛 ast-grep (0.39.5)

[warning] 16-16: Detected potential storage of sensitive information in browser localStorage. Sensitive data like email addresses, personal information, or authentication tokens should not be stored in localStorage as it's accessible to any script.
Context: localStorage.setItem('token', 'test-token')
Note: [CWE-312] Cleartext Storage of Sensitive Information [REFERENCES]
- https://owasp.org/www-community/vulnerabilities/HTML5_Security_Cheat_Sheet
- https://cwe.mitre.org/data/definitions/312.html

(browser-storage-sensitive-data)

🤖 Prompt for AI Agents
Frontend/src/__tests__/integration/integration-workflows.test.ts lines 14-19:
the test is directly setting a token in browser localStorage which risks
coupling production auth to test behavior; update the test to keep this behavior
test-only by either stubbing/mocking the storage interface (e.g.,
vi.stubGlobal('localStorage', mockLocalStorage())) or injecting a test auth
provider that returns the token, add a clear comment/flag that this manipulation
is test-only, and ensure the mock is removed/restored in afterEach so production
code never relies on persisted localStorage tokens.

Comment on lines 45 to 55
beforeEach(() => {
mockFetch.mockClear();
mockProps.onAlertCreated.mockClear();
mockProps.onAlertUpdated.mockClear();
mockProps.onAlertDeleted.mockClear();
});

afterEach(() => {
vi.clearAllMocks();
});


⚠️ Potential issue | 🟠 Major

Use reset over clear to avoid stale one‑off implementations.

mockClear/clearAllMocks don’t reset queued mockResolvedValueOnce; prefer reset.

-beforeEach(() => {
-  mockFetch.mockClear();
-  mockProps.onAlertCreated.mockClear();
-  mockProps.onAlertUpdated.mockClear();
-  mockProps.onAlertDeleted.mockClear();
-});
-
-afterEach(() => {
-  vi.clearAllMocks();
-});
+beforeEach(() => {
+  mockFetch.mockReset();
+  mockProps.onAlertCreated.mockReset();
+  mockProps.onAlertUpdated.mockReset();
+  mockProps.onAlertDeleted.mockReset();
+});
+
+afterEach(() => {
+  vi.resetAllMocks();
+});
🤖 Prompt for AI Agents
In Frontend/src/components/analytics/__tests__/alert-configuration.test.tsx
around lines 45 to 55, the test teardown uses mockClear/clearAllMocks which do
not reset queued one‑off implementations; change mockFetch.mockClear() to
mockFetch.mockReset(), change mockProps.onAlertCreated.mockClear(),
mockProps.onAlertUpdated.mockClear(), and mockProps.onAlertDeleted.mockClear()
to mockReset(), and replace vi.clearAllMocks() in afterEach with
vi.resetAllMocks() so all mock state and queued implementations are fully
cleared between tests.

@Saahi30 force-pushed the brand-dashboard-logic-backend branch from eb91df2 to 5191523 on October 7, 2025 at 01:30
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

♻️ Duplicate comments (15)
Frontend/src/components/collaboration-hub/CreatorMatchGrid.tsx (1)

21-22: Index-based keys still present—duplicate of previous review

The key still incorporates the index, which was flagged in the previous review. Combining creator.name with index does not solve the underlying problem: when items paginate or reorder, the index changes, causing React to incorrectly reuse component state.

Use a stable unique identifier from the creator object instead:

-{currentCreators.map((creator, index) => (
-  <CreatorMatchCard key={`${creator.name}-${index}`} {...creator} />
+{currentCreators.map((creator) => (
+  <CreatorMatchCard 
+    key={('id' in creator && creator.id) ? creator.id : creator.name}
+    {...creator} 
+  />
))}

If creator.id is not available, update the type to include it:

type CreatorListItem = CreatorMatchCardProps & { id: string };
interface CreatorMatchGridProps {
  creators: CreatorListItem[];
}
Backend/app/services/redis_client.py (1)

5-14: Allow TLS/URL configuration and resilient timeouts for Redis

Hardcoding ssl=False (plus placeholder host/port/password defaults) breaks connectivity for any managed Redis that requires TLS—exactly the case for Redis Cloud and most hosted providers—so the session service will fail in production and leaks credentials in flight. Please support REDIS_URL/REDIS_SSL (or similar) toggles and add basic timeout/retry settings.

-import redis.asyncio as redis
-import os
-import json
+import redis.asyncio as redis
+import os
+import json
+from typing import Optional
@@
-REDIS_HOST = os.getenv("REDIS_HOST", "your-redis-cloud-host")
-REDIS_PORT = int(os.getenv("REDIS_PORT", 12345))  # replace with your port
-REDIS_PASSWORD = os.getenv("REDIS_PASSWORD", "your-redis-cloud-password")
+REDIS_URL: Optional[str] = os.getenv("REDIS_URL")
+REDIS_HOST = os.getenv("REDIS_HOST", "localhost")
+REDIS_PORT = int(os.getenv("REDIS_PORT", 6379))
+REDIS_PASSWORD = os.getenv("REDIS_PASSWORD")
+REDIS_SSL = os.getenv("REDIS_SSL", "false").lower() in {"1", "true", "yes", "on"}
+SOCKET_TIMEOUT = float(os.getenv("REDIS_SOCKET_TIMEOUT", "5"))
@@
-redis_client = redis.Redis(
-    host=REDIS_HOST,
-    port=REDIS_PORT,
-    password=REDIS_PASSWORD,
-    decode_responses=True,
-    ssl=False  # Redis Cloud connection works without SSL
-)
+if REDIS_URL:
+    redis_client = redis.from_url(
+        REDIS_URL,
+        decode_responses=True,
+        socket_timeout=SOCKET_TIMEOUT,
+        retry_on_timeout=True,
+        health_check_interval=30,
+    )
+else:
+    redis_client = redis.Redis(
+        host=REDIS_HOST,
+        port=REDIS_PORT,
+        password=REDIS_PASSWORD,
+        decode_responses=True,
+        ssl=REDIS_SSL,
+        socket_timeout=SOCKET_TIMEOUT,
+        retry_on_timeout=True,
+        health_check_interval=30,
+    )
Backend/app/services/ai_services.py (1)

22-22: Switch to the supported Groq model ID (or make it configurable)

moonshotai/kimi-k2-instruct is already deprecated and scheduled for shutdown on October 10 2025, so this endpoint will fail any time now. Please default to the live replacement (e.g. moonshotai/kimi-k2-instruct-0905) and let the model be overridden via GROQ_MODEL_ID.

-    headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
-    payload = {"model": "moonshotai/kimi-k2-instruct", "messages": [{"role": "user", "content": prompt}], "temperature": 0.6, "max_completion_tokens": 1024}
+    headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
+    model = os.getenv("GROQ_MODEL_ID", "moonshotai/kimi-k2-instruct-0905")
+    payload = {
+        "model": model,
+        "messages": [{"role": "user", "content": prompt}],
+        "temperature": 0.6,
+        "max_completion_tokens": 1024,
+    }
Backend/app/schemas/schema.py (1)

76-147: Restore from_attributes support under Pydantic v2

Pydantic 2.x no longer reads inner class Config blocks, so every response model here continues with the default from_attributes=False. Any ORM object passed to these serializers will now blow up with ValidationError: Input should be a valid dictionary or instance with attribute access.

Port the config to the v2 style once and reuse it, e.g.:

-from pydantic import BaseModel
+from pydantic import BaseModel, ConfigDict
+
+
+class ORMResponseModel(BaseModel):
+    model_config = ConfigDict(from_attributes=True)
@@
-class BrandProfileResponse(BaseModel):
+class BrandProfileResponse(ORMResponseModel):
@@
-    class Config:
-        from_attributes = True
+    ...
@@
-class CampaignMetricsResponse(BaseModel):
+class CampaignMetricsResponse(ORMResponseModel):
@@
-class ContractResponse(BaseModel):
+class ContractResponse(ORMResponseModel):
@@
-class CreatorMatchResponse(BaseModel):
+class CreatorMatchResponse(ORMResponseModel):
@@
-class SponsorshipApplicationResponse(BaseModel):
+class SponsorshipApplicationResponse(ORMResponseModel):
@@
-class PaymentResponse(BaseModel):
+class PaymentResponse(ORMResponseModel):

(Or set model_config = ConfigDict(from_attributes=True) inline on each model.)

Also applies to: 183-222

Backend/app/routes/ai_query.py (4)

91-95: Required-parameter check still strips valid falsy values.

if not params.get(param) flags legitimate falsy inputs (e.g. 0, empty strings). Test for presence/None explicitly so real values survive.

-                if not params.get(param):
+                if param not in params or params.get(param) is None:
                     all_params_present = False
                     missing_params.append(param)

107-138: Don’t invoke FastAPI route handlers directly.

These handlers rely on FastAPI’s dependency injection (auth, sessions, validation). Importing them and calling with **api_args bypasses that wiring, raises TypeError, and skips auth. Extract the shared business logic into plain async functions/services and invoke those here; let the actual routes keep their DI wrappers.
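A minimal sketch of that split, assuming the shared Supabase client and auth dependency live under app.utils (the module paths and helper names here are placeholders, not the repo's actual layout):

# app/services/brand_queries.py (hypothetical module)
from app.utils.supabase_client import supabase  # assumed location of the shared client


async def fetch_brand_campaigns(brand_id: str) -> list[dict]:
    """Plain async service function: no Query/Path/Depends, so ai_query.py can call it directly."""
    result = supabase.table("sponsorships").select("*").eq("brand_id", brand_id).execute()
    return result.data or []


# app/routes/brand_dashboard.py keeps the thin DI wrapper
from fastapi import APIRouter, Depends
from app.utils.auth import get_current_user  # assumed auth dependency

router = APIRouter()


@router.get("/campaigns")
async def get_brand_campaigns(current_user: dict = Depends(get_current_user)):
    return await fetch_brand_campaigns(current_user["id"])

The AI route would then dispatch to fetch_brand_campaigns(**api_args) (or its siblings) instead of importing the decorated handlers.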


62-175: Guard session/LLM parameters before iterating or merging.

params (and persisted prev_params) can be None/non-dict; .items() and {**prev_params, **...} will explode. Normalize both to dicts before use.

-        params = result.get("parameters", {})
+        raw_params = result.get("parameters") or {}
+        if not isinstance(raw_params, dict):
+            logger.warning("Ignoring non-dict parameters returned by router: %s", type(raw_params))
+            raw_params = {}
+        params = raw_params
@@
-        prev_params = state.get("params", {})
+        prev_params = state.get("params") or {}
+        if not isinstance(prev_params, dict):
+            prev_params = {}
@@
-        params = {**prev_params, **result.get("parameters", {})}
+        params = {**prev_params, **raw_params}

25-25: Avoid mutable default dict in AIQueryResponse.parameters.

Shared mutable defaults leak state between responses. Give each instance its own dict via Field(default_factory=dict).

-from pydantic import BaseModel
+from pydantic import BaseModel, Field
@@
-    parameters: Dict[str, Any] = {}
+    parameters: Dict[str, Any] = Field(default_factory=dict)
Backend/app/services/ai_router.py (1)

147-153: Run the Groq client off the event loop.

self.client.chat.completions.create is sync; calling it directly inside async def blocks every request. Run it in a threadpool.

+from fastapi.concurrency import run_in_threadpool
@@
-            response = self.client.chat.completions.create(
-                model="moonshotai/kimi-k2-instruct",  # Updated to Kimi K2 instruct
-                messages=messages,
-                temperature=0.1,  # Lower temperature for more consistent JSON output
-                max_tokens=1024  # Updated max tokens
-            )
+            response = await run_in_threadpool(
+                self.client.chat.completions.create,
+                model="moonshotai/kimi-k2-instruct",  # keep existing default or gate via env
+                messages=messages,
+                temperature=0.1,
+                max_tokens=1024,
+            )
Frontend/src/services/brandApi.ts (1)

79-93: Reopen: merge headers correctly and guard JSON parsing
This still spreads ...options after headers, so caller headers can overwrite the merged Content-Type. It also unconditionally calls response.json(), which throws for DELETE/204 or non-JSON bodies (several methods expect void). Please apply the earlier fix so headers merge safely and JSON parsing is conditional.

Frontend/src/components/chat/BrandChatAssistant.tsx (2)

47-49: Stop sending hard-coded brand_id
The client still injects a fixed brand_id, letting any user spoof another brand. The backend must infer brand context from auth/session; drop this field from the payload.


59-60: Handle session rotation
setSessionId only runs when there was no session. If the server rotates IDs, the client keeps the stale one. Update whenever data.session_id !== currentSessionId.
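A rough sketch of both fixes inside the send handler, assuming the component keeps sessionId in state and that queryAI accepts an optional session_id (the names mirror the existing code but are not verbatim):

// Sketch only: no brand_id in the payload; the backend derives the brand from the auth session.
const sendMessage = async (text: string) => {
  const data = await queryAI({ query: text, session_id: sessionId ?? undefined });

  // Adopt whatever session id the server returns, including rotated ones.
  if (data.session_id && data.session_id !== sessionId) {
    setSessionId(data.session_id);
  }

  return data;
};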

Backend/sql.txt (3)

48-57: Enforce 1:1 brand_profiles ↔ users
user_id must be NOT NULL and unique to match ORM expectations; otherwise orphaned/duplicate profiles slip in. Please add NOT NULL plus a unique constraint.


72-80: Align contracts DDL with ORM
We still need the contract_status enum, status contract_status NOT NULL DEFAULT 'draft', a (sponsorship_id, creator_id) unique constraint, and matching ON DELETE RESTRICT for brand_id. Without these, runtime invariants break.


82-89: Add uniqueness, bounds, and index to creator_matches
Please enforce (brand_id, creator_id) uniqueness, a check that match_score stays in [0,1], and the ranking index for performant queries.
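Taken together, a sketch of the DDL these three notes describe; the enum values and constraint names are assumptions to adapt to the actual schema:

-- brand_profiles: exactly one profile per user
ALTER TABLE brand_profiles
    ALTER COLUMN user_id SET NOT NULL,
    ADD CONSTRAINT brand_profiles_user_id_key UNIQUE (user_id);

-- contracts: enum-backed status plus the (sponsorship_id, creator_id) invariant
CREATE TYPE contract_status AS ENUM ('draft', 'sent', 'signed', 'completed', 'cancelled');
ALTER TABLE contracts
    ALTER COLUMN status DROP DEFAULT,
    ALTER COLUMN status TYPE contract_status USING status::contract_status,
    ALTER COLUMN status SET DEFAULT 'draft',
    ALTER COLUMN status SET NOT NULL,
    ADD CONSTRAINT contracts_sponsorship_creator_key UNIQUE (sponsorship_id, creator_id);
-- the brand_id foreign key would also be recreated with ON DELETE RESTRICT (constraint name omitted here)

-- creator_matches: no duplicate pairs, bounded score, ranked lookups
ALTER TABLE creator_matches
    ADD CONSTRAINT creator_matches_brand_creator_key UNIQUE (brand_id, creator_id),
    ADD CONSTRAINT creator_matches_score_range CHECK (match_score BETWEEN 0 AND 1);
CREATE INDEX idx_creator_matches_brand_score ON creator_matches (brand_id, match_score DESC);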

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3be437 and 5191523.

⛔ Files ignored due to path filters (1)
  • Frontend/public/aossielogo.png is excluded by !**/*.png
📒 Files selected for processing (21)
  • Backend/.env-example (1 hunks)
  • Backend/app/main.py (2 hunks)
  • Backend/app/models/models.py (2 hunks)
  • Backend/app/routes/ai_query.py (1 hunks)
  • Backend/app/routes/brand_dashboard.py (1 hunks)
  • Backend/app/schemas/schema.py (2 hunks)
  • Backend/app/services/ai_router.py (1 hunks)
  • Backend/app/services/ai_services.py (1 hunks)
  • Backend/app/services/redis_client.py (1 hunks)
  • Backend/requirements.txt (1 hunks)
  • Backend/sql.txt (1 hunks)
  • Frontend/README-INTEGRATION.md (1 hunks)
  • Frontend/src/components/chat/BrandChatAssistant.tsx (1 hunks)
  • Frontend/src/components/collaboration-hub/CreatorMatchGrid.tsx (1 hunks)
  • Frontend/src/components/user-nav.tsx (2 hunks)
  • Frontend/src/context/AuthContext.tsx (1 hunks)
  • Frontend/src/hooks/useBrandDashboard.ts (1 hunks)
  • Frontend/src/index.css (1 hunks)
  • Frontend/src/pages/Brand/Dashboard.tsx (1 hunks)
  • Frontend/src/services/aiApi.ts (1 hunks)
  • Frontend/src/services/brandApi.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
Frontend/src/components/collaboration-hub/CreatorMatchGrid.tsx (1)
Frontend/src/components/collaboration-hub/CreatorMatchCard.tsx (1)
  • CreatorMatchCard (65-130)
Frontend/src/hooks/useBrandDashboard.ts (3)
Frontend/src/context/AuthContext.tsx (1)
  • useAuth (216-222)
Frontend/src/services/brandApi.ts (14)
  • DashboardOverview (7-13)
  • BrandProfile (15-24)
  • Campaign (26-36)
  • CreatorMatch (38-44)
  • Application (46-56)
  • Payment (58-68)
  • brandApi (257-257)
  • createCampaign (132-144)
  • updateCampaign (146-151)
  • deleteCampaign (153-157)
  • updateApplicationStatus (202-223)
  • searchCreators (164-178)
  • getCampaignPerformance (185-187)
  • getRevenueAnalytics (189-191)
Frontend/src/services/aiApi.ts (2)
  • queryAI (57-83)
  • aiApi (102-102)
Backend/app/services/ai_router.py (1)
Backend/app/routes/ai_query.py (1)
  • get_route_info (205-222)
Backend/app/routes/ai_query.py (3)
Backend/app/services/redis_client.py (2)
  • get_session_state (19-21)
  • save_session_state (23-24)
Backend/app/services/ai_router.py (3)
  • process_query (131-169)
  • list_available_routes (338-340)
  • get_route_info (334-336)
Backend/app/routes/brand_dashboard.py (9)
  • search_creators (418-460)
  • get_dashboard_overview (99-161)
  • get_creator_matches (387-415)
  • get_brand_profile (197-215)
  • get_brand_campaigns (246-260)
  • get_creator_profile (463-500)
  • get_campaign_performance (507-551)
  • get_revenue_analytics (554-584)
  • get_brand_contracts (591-603)
Frontend/src/pages/Brand/Dashboard.tsx (1)
Frontend/src/hooks/useBrandDashboard.ts (1)
  • useBrandDashboard (6-288)
Backend/app/routes/brand_dashboard.py (3)
Backend/app/models/models.py (5)
  • User (25-53)
  • Sponsorship (76-92)
  • CampaignMetrics (189-204)
  • Contract (208-224)
  • SponsorshipApplication (114-128)
Backend/app/schemas/schema.py (19)
  • BrandProfileCreate (61-67)
  • BrandProfileUpdate (69-74)
  • BrandProfileResponse (76-87)
  • CampaignMetricsCreate (91-97)
  • CampaignMetricsResponse (99-110)
  • ContractCreate (114-119)
  • ContractUpdate (121-123)
  • ContractResponse (125-135)
  • CreatorMatchResponse (139-147)
  • DashboardOverviewResponse (151-156)
  • CampaignAnalyticsResponse (158-166)
  • SponsorshipApplicationResponse (182-194)
  • ApplicationUpdateRequest (196-198)
  • ApplicationSummaryResponse (200-206)
  • PaymentResponse (210-222)
  • PaymentStatusUpdate (224-225)
  • PaymentAnalyticsResponse (227-233)
  • CampaignMetricsUpdate (237-242)
  • SponsorshipCreate (21-27)
Frontend/src/utils/supabase.tsx (1)
  • supabase (11-11)
Backend/app/models/models.py (2)
Backend/app/models/chat.py (1)
  • generate_uuid (9-10)
Backend/app/routes/post.py (1)
  • generate_uuid (31-32)
🪛 Ruff (0.13.3)
Backend/app/services/ai_router.py

22-22: Avoid specifying long messages outside the exception class

(TRY003)


131-131: PEP 484 prohibits implicit Optional

Convert to Optional[T]

(RUF013)


165-165: Consider moving this statement to an else block

(TRY300)


167-167: Do not catch blind exception: Exception

(BLE001)


168-168: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


169-169: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

Backend/app/routes/ai_query.py

40-40: Abstract raise to an inner function

(TRY301)


138-138: Do not catch blind exception: Exception

(BLE001)


139-139: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


147-147: Parenthesize a and b expressions when chaining and and or together, to make the precedence clear

Parenthesize the and subexpression

(RUF021)


182-182: Consider moving this statement to an else block

(TRY300)


185-185: Do not catch blind exception: Exception

(BLE001)


186-186: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


187-187: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


200-200: Do not catch blind exception: Exception

(BLE001)


201-201: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


202-202: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


212-212: Abstract raise to an inner function

(TRY301)


214-217: Consider moving this statement to an else block

(TRY300)


220-220: Do not catch blind exception: Exception

(BLE001)


221-221: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


222-222: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


233-237: Consider moving this statement to an else block

(TRY300)


238-238: Do not catch blind exception: Exception

(BLE001)


239-239: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

Backend/app/routes/brand_dashboard.py

71-71: Consider moving this statement to an else block

(TRY300)


72-72: Do not catch blind exception: Exception

(BLE001)


73-73: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


74-74: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


79-79: Unused function argument: window_seconds

(ARG001)


117-117: Local variable profile is assigned to but never used

Remove assignment to unused variable profile

(F841)


159-159: Do not catch blind exception: Exception

(BLE001)


160-160: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


161-161: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


190-190: Abstract raise to an inner function

(TRY301)


192-192: Do not catch blind exception: Exception

(BLE001)


193-193: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


194-194: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


209-209: Abstract raise to an inner function

(TRY301)


213-213: Do not catch blind exception: Exception

(BLE001)


214-214: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


215-215: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


233-233: Abstract raise to an inner function

(TRY301)


237-237: Do not catch blind exception: Exception

(BLE001)


238-238: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


239-239: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


280-280: Abstract raise to an inner function

(TRY301)


284-284: Do not catch blind exception: Exception

(BLE001)


285-285: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


286-286: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


322-322: Abstract raise to an inner function

(TRY301)


326-326: Do not catch blind exception: Exception

(BLE001)


327-327: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


328-328: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


343-343: Abstract raise to an inner function

(TRY301)


350-350: Abstract raise to an inner function

(TRY301)


354-354: Do not catch blind exception: Exception

(BLE001)


355-355: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


356-356: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


370-370: Abstract raise to an inner function

(TRY301)


372-372: Local variable response is assigned to but never used

Remove assignment to unused variable response

(F841)


374-374: Consider moving this statement to an else block

(TRY300)


378-378: Do not catch blind exception: Exception

(BLE001)


379-379: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


380-380: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


411-411: Consider moving this statement to an else block

(TRY300)


413-413: Do not catch blind exception: Exception

(BLE001)


414-414: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


415-415: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


419-419: Unused function argument: brand_id

(ARG001)


420-420: Unused function argument: industry

(ARG001)


422-422: Unused function argument: location

(ARG001)


456-456: Consider moving this statement to an else block

(TRY300)


458-458: Do not catch blind exception: Exception

(BLE001)


459-459: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


460-460: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


465-465: Unused function argument: brand_id

(ARG001)


474-474: Abstract raise to an inner function

(TRY301)


489-494: Consider moving this statement to an else block

(TRY300)


498-498: Do not catch blind exception: Exception

(BLE001)


499-499: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


500-500: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


547-547: Consider moving this statement to an else block

(TRY300)


549-549: Do not catch blind exception: Exception

(BLE001)


550-550: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


551-551: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


582-582: Do not catch blind exception: Exception

(BLE001)


583-583: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


584-584: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


599-599: Consider moving this statement to an else block

(TRY300)


601-601: Do not catch blind exception: Exception

(BLE001)


602-602: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


603-603: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


627-627: Abstract raise to an inner function

(TRY301)


629-629: Do not catch blind exception: Exception

(BLE001)


630-630: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


631-631: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


646-646: Abstract raise to an inner function

(TRY301)


653-653: Abstract raise to an inner function

(TRY301)


657-657: Do not catch blind exception: Exception

(BLE001)


658-658: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


659-659: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


711-711: Consider moving this statement to an else block

(TRY300)


715-715: Do not catch blind exception: Exception

(BLE001)


716-716: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


717-717: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


735-735: Abstract raise to an inner function

(TRY301)


742-742: Abstract raise to an inner function

(TRY301)


757-757: Consider moving this statement to an else block

(TRY300)


761-761: Do not catch blind exception: Exception

(BLE001)


762-762: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


763-763: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


782-782: Abstract raise to an inner function

(TRY301)


787-787: Abstract raise to an inner function

(TRY301)


799-799: Abstract raise to an inner function

(TRY301)


803-803: Do not catch blind exception: Exception

(BLE001)


804-804: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


805-805: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


847-847: Do not catch blind exception: Exception

(BLE001)


848-848: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


849-849: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


890-890: Consider moving this statement to an else block

(TRY300)


894-894: Do not catch blind exception: Exception

(BLE001)


895-895: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


896-896: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


913-913: Abstract raise to an inner function

(TRY301)


931-931: Consider moving this statement to an else block

(TRY300)


935-935: Do not catch blind exception: Exception

(BLE001)


936-936: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


937-937: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


956-956: Abstract raise to an inner function

(TRY301)


964-964: Abstract raise to an inner function

(TRY301)


968-968: Do not catch blind exception: Exception

(BLE001)


969-969: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


970-970: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1010-1010: Do not catch blind exception: Exception

(BLE001)


1011-1011: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1012-1012: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1036-1036: Abstract raise to an inner function

(TRY301)


1058-1058: Abstract raise to an inner function

(TRY301)


1062-1062: Do not catch blind exception: Exception

(BLE001)


1063-1063: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1064-1064: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1082-1082: Abstract raise to an inner function

(TRY301)


1090-1090: Consider moving this statement to an else block

(TRY300)


1094-1094: Do not catch blind exception: Exception

(BLE001)


1095-1095: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1096-1096: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


1117-1117: Abstract raise to an inner function

(TRY301)


1126-1126: Abstract raise to an inner function

(TRY301)


1130-1130: Do not catch blind exception: Exception

(BLE001)


1131-1131: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1132-1132: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

Comment on lines +98 to +259
@router.get("/dashboard/overview", response_model=DashboardOverviewResponse)
async def get_dashboard_overview(
brand_id: str = Query(..., min_length=36, max_length=36, regex=r"^[a-fA-F0-9\-]{36}$", description="Brand user ID (UUID)")
):
"""
Get dashboard overview with key metrics for a brand
"""
# Validate brand_id format
validate_uuid_format(brand_id, "brand_id")

try:
# Get brand's campaigns
campaigns = safe_supabase_query(
lambda: supabase.table("sponsorships").select("*").eq("brand_id", brand_id).execute(),
"Failed to fetch campaigns"
)

# Get brand's profile
profile_result = supabase.table("brand_profiles").select("*").eq("user_id", brand_id).execute()
profile = profile_result.data[0] if profile_result.data else None

# Get recent applications (only if campaigns exist)
applications = []
if campaigns:
campaign_ids = [campaign["id"] for campaign in campaigns]
applications = safe_supabase_query(
lambda: supabase.table("sponsorship_applications").select("*").in_("sponsorship_id", campaign_ids).execute(),
"Failed to fetch applications"
)

# Calculate metrics
total_campaigns = len(campaigns)
active_campaigns = len([c for c in campaigns if c.get("status") == "open"])

# Calculate total revenue from completed payments
payments = safe_supabase_query(
lambda: supabase.table("sponsorship_payments").select("*").eq("brand_id", brand_id).eq("status", "completed").execute(),
"Failed to fetch payments"
)
total_revenue = sum(float(payment.get("amount", 0)) for payment in payments)

# Get creator matches
matches = safe_supabase_query(
lambda: supabase.table("creator_matches").select("*").eq("brand_id", brand_id).execute(),
"Failed to fetch creator matches"
)
total_creators_matched = len(matches)

# Recent activity (last 5 applications)
recent_activity = applications[:5] if applications else []

return DashboardOverviewResponse(
total_campaigns=total_campaigns,
active_campaigns=active_campaigns,
total_revenue=total_revenue,
total_creators_matched=total_creators_matched,
recent_activity=recent_activity
)

except HTTPException:
raise
except Exception as e:
logger.error(f"Unexpected error in dashboard overview: {e}")
raise HTTPException(status_code=500, detail="Internal server error")

# ============================================================================
# BRAND PROFILE ROUTES
# ============================================================================

@router.post("/profile", response_model=BrandProfileResponse)
async def create_brand_profile(profile: BrandProfileCreate):
"""
Create a new brand profile
"""
try:
profile_id = generate_uuid()
t = current_timestamp()

response = supabase.table("brand_profiles").insert({
"id": profile_id,
"user_id": profile.user_id,
"company_name": profile.company_name,
"website": profile.website,
"industry": profile.industry,
"contact_person": profile.contact_person,
"contact_email": profile.contact_email,
"created_at": t
}).execute()

if response.data:
return BrandProfileResponse(**response.data[0])
else:
raise HTTPException(status_code=400, detail="Failed to create brand profile")

except Exception as e:
logger.error(f"Error creating brand profile: {e}")
raise HTTPException(status_code=500, detail="Internal server error")

@router.get("/profile/{user_id}", response_model=BrandProfileResponse)
async def get_brand_profile(
user_id: str = Path(..., min_length=36, max_length=36, regex=r"^[a-fA-F0-9\-]{36}$", description="User ID (UUID)")
):
"""
Get brand profile by user ID
"""
try:
result = supabase.table("brand_profiles").select("*").eq("user_id", user_id).execute()

if result.data:
return BrandProfileResponse(**result.data[0])
else:
raise HTTPException(status_code=404, detail="Brand profile not found")

except HTTPException:
raise
except Exception as e:
logger.error(f"Error fetching brand profile: {e}")
raise HTTPException(status_code=500, detail="Internal server error")

@router.put("/profile/{user_id}", response_model=BrandProfileResponse)
async def update_brand_profile(
profile_update: BrandProfileUpdate,
user_id: str = Path(..., min_length=36, max_length=36, regex=r"^[a-fA-F0-9\-]{36}$", description="User ID (UUID)")
):
"""
Update brand profile
"""
try:
update_data = profile_update.dict(exclude_unset=True)

response = supabase.table("brand_profiles").update(update_data).eq("user_id", user_id).execute()

if response.data:
return BrandProfileResponse(**response.data[0])
else:
raise HTTPException(status_code=404, detail="Brand profile not found")

except HTTPException:
raise
except Exception as e:
logger.error(f"Error updating brand profile: {e}")
raise HTTPException(status_code=500, detail="Internal server error")

# ============================================================================
# CAMPAIGN MANAGEMENT ROUTES
# ============================================================================

@router.get("/campaigns")
async def get_brand_campaigns(
brand_id: str = Query(..., min_length=36, max_length=36, regex=r"^[a-fA-F0-9\-]{36}$", description="Brand user ID (UUID)")
):
"""
Get all campaigns for a brand
"""
# Validate brand_id format
validate_uuid_format(brand_id, "brand_id")

campaigns = safe_supabase_query(
lambda: supabase.table("sponsorships").select("*").eq("brand_id", brand_id).execute(),
"Failed to fetch brand campaigns"
)


⚠️ Potential issue | 🔴 Critical

Stop trusting client-provided brand_id for authorization.

Every route here lets callers supply any UUID via brand_id and hands back that brand’s data. That’s a textbook IDOR: one user can pull or mutate another brand’s records. Derive the brand id from the authenticated principal (e.g., current_user=Depends(get_current_user)), enforce require_brand_role, then set brand_id = current_user["id"] before querying. Apply this pattern across the router instead of accepting brand_id as query/path input.

🤖 Prompt for AI Agents
In Backend/app/routes/brand_dashboard.py around lines 98-259, the routes accept
a client-supplied brand_id which enables IDOR; instead, inject the authenticated
principal and derive the brand id from it: add
current_user=Depends(get_current_user) (and enforce require_brand_role or
similar role check) to each route signature that currently takes brand_id,
remove brand_id from path/query parameters, set brand_id = current_user["id"]
before any DB queries, and update any validation/permissions to rely on the
authenticated user rather than client input; apply this pattern consistently
across all routes in the file that use brand_id.
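
A minimal sketch of that pattern on the overview route, assuming get_current_user exists in the auth module and that the user record carries a role field (both are assumptions; adjust to the real auth helpers):

from fastapi import Depends, HTTPException

from app.utils.auth import get_current_user  # assumed auth dependency


def require_brand_role(current_user: dict = Depends(get_current_user)) -> dict:
    # Placeholder role check; match however roles are stored on the user record.
    if current_user.get("role") != "brand":
        raise HTTPException(status_code=403, detail="Brand role required")
    return current_user


@router.get("/dashboard/overview", response_model=DashboardOverviewResponse)
async def get_dashboard_overview(current_user: dict = Depends(require_brand_role)):
    brand_id = current_user["id"]  # server-derived, never read from the query string
    campaigns = safe_supabase_query(
        lambda: supabase.table("sponsorships").select("*").eq("brand_id", brand_id).execute(),
        "Failed to fetch campaigns",
    )
    # ... rest of the handler unchanged, minus the brand_id query parameter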

Comment on lines +167 to +195
@router.post("/profile", response_model=BrandProfileResponse)
async def create_brand_profile(profile: BrandProfileCreate):
"""
Create a new brand profile
"""
try:
profile_id = generate_uuid()
t = current_timestamp()

response = supabase.table("brand_profiles").insert({
"id": profile_id,
"user_id": profile.user_id,
"company_name": profile.company_name,
"website": profile.website,
"industry": profile.industry,
"contact_person": profile.contact_person,
"contact_email": profile.contact_email,
"created_at": t
}).execute()

if response.data:
return BrandProfileResponse(**response.data[0])
else:
raise HTTPException(status_code=400, detail="Failed to create brand profile")

except Exception as e:
logger.error(f"Error creating brand profile: {e}")
raise HTTPException(status_code=500, detail="Internal server error")


⚠️ Potential issue | 🔴 Critical

Bind brand profile creation to the authenticated user.

create_brand_profile trusts profile.user_id from the request body, so a caller can create/overwrite someone else’s profile. Pull the user from auth (current_user), enforce the brand role, check for an existing profile, and use the authenticated id in the insert. Also drop user_id from BrandProfileCreate once you stop reading it.


🤖 Prompt for AI Agents
In Backend/app/routes/brand_dashboard.py around lines 167 to 195, the handler
currently trusts profile.user_id from the request body; instead inject and use
the authenticated user (e.g., via a get_current_user dependency), enforce the
user has the brand role (return 403 if not), remove any use of profile.user_id
and set user_id in the DB insert to current_user.id, check for an existing
profile for that user first (return 409 or appropriate error if it exists), and
update the BrandProfileCreate schema to drop user_id from the request payload.
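
One way the handler could look after the change, reusing the hypothetical require_brand_role dependency sketched above (the trimmed insert payload is illustrative):

@router.post("/profile", response_model=BrandProfileResponse, status_code=201)
async def create_brand_profile(
    profile: BrandProfileCreate,                       # user_id removed from this schema
    current_user: dict = Depends(require_brand_role),  # brand role enforced by the dependency
):
    user_id = current_user["id"]

    # Reject a second profile for the same user up front.
    existing = supabase.table("brand_profiles").select("id").eq("user_id", user_id).execute()
    if existing.data:
        raise HTTPException(status_code=409, detail="Brand profile already exists")

    response = supabase.table("brand_profiles").insert({
        "id": generate_uuid(),
        "user_id": user_id,                            # always the authenticated user
        "company_name": profile.company_name,
        "contact_email": profile.contact_email,
        "created_at": current_timestamp(),
    }).execute()
    if not response.data:
        raise HTTPException(status_code=400, detail="Failed to create brand profile")
    return BrandProfileResponse(**response.data[0])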

Comment on lines +605 to +654
@router.post("/contracts")
async def create_contract(contract: ContractCreate):
"""
Create a new contract
"""
try:
contract_id = generate_uuid()
t = current_timestamp()

response = supabase.table("contracts").insert({
"id": contract_id,
"sponsorship_id": contract.sponsorship_id,
"creator_id": contract.creator_id,
"brand_id": contract.brand_id,
"contract_url": contract.contract_url,
"status": contract.status,
"created_at": t
}).execute()

if response.data:
return response.data[0]
else:
raise HTTPException(status_code=400, detail="Failed to create contract")

except Exception as e:
logger.error(f"Error creating contract: {e}")
raise HTTPException(status_code=500, detail="Internal server error")

@router.put("/contracts/{contract_id}/status")
async def update_contract_status(
status: str = Query(..., min_length=3, max_length=32, description="New contract status"),
contract_id: str = Path(..., min_length=36, max_length=36, regex=r"^[a-fA-F0-9\-]{36}$", description="Contract ID (UUID)"),
brand_id: str = Query(..., min_length=36, max_length=36, regex=r"^[a-fA-F0-9\-]{36}$", description="Brand user ID (UUID)")
):
"""
Update contract status
"""
try:
# Verify contract belongs to brand
existing = supabase.table("contracts").select("*").eq("id", contract_id).eq("brand_id", brand_id).execute()
if not existing.data:
raise HTTPException(status_code=404, detail="Contract not found")

response = supabase.table("contracts").update({"status": status}).eq("id", contract_id).execute()

if response.data:
return response.data[0]
else:
raise HTTPException(status_code=400, detail="Failed to update contract status")


⚠️ Potential issue | 🔴 Critical

Verify sponsorship ownership and derive contract brand_id server-side.

Contracts must be tied to the logged-in brand. Currently the caller supplies brand_id and sponsorship_id; nothing stops them from creating contracts for another brand’s campaign. Use auth to obtain the brand id, confirm the given sponsorship belongs to that brand, and use that id in the insert. Remove brand_id from ContractCreate to prevent spoofing.


🤖 Prompt for AI Agents
In Backend/app/routes/brand_dashboard.py around lines 605 to 654, the endpoints
accept caller-supplied brand_id and sponsorship_id allowing spoofing; remove
brand_id from ContractCreate and derive the brand_id from the authenticated user
instead, verify the provided sponsorship_id belongs to that authenticated brand
before inserting, and use the verified brand id in the insert payload; likewise,
for the status update endpoint, obtain the brand id from auth (do not accept it
as a query param), confirm the contract (and its sponsorship if relevant)
belongs to that brand before updating, and return appropriate 403/404 if
ownership checks fail. Ensure ContractCreate schema is updated (remove
brand_id), log and surface clear errors from ownership checks, and keep database
queries using the server-derived brand id only.
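
A sketch of the create path with server-derived ownership, again assuming the require_brand_role dependency from the earlier comment (ContractCreate is shown without brand_id):

@router.post("/contracts")
async def create_contract(
    contract: ContractCreate,                          # brand_id removed from this schema
    current_user: dict = Depends(require_brand_role),
):
    brand_id = current_user["id"]

    # The sponsorship must belong to the authenticated brand.
    owned = supabase.table("sponsorships").select("id") \
        .eq("id", contract.sponsorship_id).eq("brand_id", brand_id).execute()
    if not owned.data:
        raise HTTPException(status_code=403, detail="Sponsorship does not belong to this brand")

    response = supabase.table("contracts").insert({
        "id": generate_uuid(),
        "sponsorship_id": contract.sponsorship_id,
        "creator_id": contract.creator_id,
        "brand_id": brand_id,                          # server-derived, not caller-supplied
        "contract_url": contract.contract_url,
        "status": contract.status,
        "created_at": current_timestamp(),
    }).execute()
    if not response.data:
        raise HTTPException(status_code=400, detail="Failed to create contract")
    return response.data[0]

The status-update route would follow the same shape: take only contract_id from the path, resolve brand_id from current_user, and return 403/404 when the ownership check fails.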

Comment on lines +30 to +37
      setLoading(true);
      const overview = await brandApi.getDashboardOverview(brandId);
      setDashboardOverview(overview);
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Failed to load dashboard overview');
    } finally {
      setLoading(false);
    }

⚠️ Potential issue | 🟠 Major

Clear stale dashboard errors after retries

When loadDashboardOverview fails once, error stays populated forever because we never reset it on subsequent successful runs. The dashboard will keep showing a failure message even after data loads.

Reset the error before retrying (or inside the success path), e.g.:

   try {
-      setLoading(true);
+      setLoading(true);
+      setError(null);
       const overview = await brandApi.getDashboardOverview(brandId);
       setDashboardOverview(overview);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
       setLoading(true);
+      setError(null);
       const overview = await brandApi.getDashboardOverview(brandId);
       setDashboardOverview(overview);
     } catch (err) {
       setError(err instanceof Error ? err.message : 'Failed to load dashboard overview');
     } finally {
       setLoading(false);
     }
🤖 Prompt for AI Agents
In Frontend/src/hooks/useBrandDashboard.ts around lines 30 to 37, the catch sets
error but the error is never cleared on subsequent successful loads, so a
previous failure keeps the error state even after a successful retry; fix by
clearing the error before starting the request (setError(null) or
setError(undefined)) or clear it immediately after a successful fetch
(setError(null) before setDashboardOverview), ensuring error is reset on success
and/or before retrying.

// AI API Service
// Handles AI-related API calls to the backend

const AI_API_BASE_URL = 'http://localhost:8000/api/ai';

⚠️ Potential issue | 🔴 Critical

Blocker: Hard-coded localhost AI URL
AI_API_BASE_URL is locked to http://localhost:8000, so production builds will call the wrong host. Please swap to an environment/relative base URL before release.

🤖 Prompt for AI Agents
In Frontend/src/services/aiApi.ts around line 4 the AI_API_BASE_URL is
hard-coded to http://localhost:8000 which will break production; replace the
literal with a configurable value by reading an environment variable (e.g.
process.env.REACT_APP_AI_API_URL or import.meta.env.VITE_AI_API_URL depending on
the build system) and fall back to a relative path like /api/ai if the env var
is not set; update .env.example with the variable name and ensure the production
build uses the proper environment value instead of committing any host-specific
URLs.
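
For a Vite build this could be as small as the following (VITE_AI_API_URL is an assumed variable name; mirror whatever the env example ends up defining):

// Falls back to a relative path so the dev proxy and the production host both work.
const AI_API_BASE_URL = import.meta.env.VITE_AI_API_URL
  ? `${import.meta.env.VITE_AI_API_URL}/api/ai`
  : '/api/ai';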

Comment on lines +36 to +50
      const response = await fetch(url, {
        headers: {
          'Content-Type': 'application/json',
          ...options.headers,
        },
        ...options,
      });

      if (!response.ok) {
        const errorData = await response.json().catch(() => ({}));
        throw new Error(errorData.detail || `HTTP error! status: ${response.status}`);
      }

      return await response.json();
    } catch (error) {

⚠️ Potential issue | 🔴 Critical

Fix fetch header merge & JSON parsing guard
Spreading ...options after defining headers can drop the merged header set, and response.json() on every success will explode for 204 or plain-text endpoints. Align this helper with the safe pattern (merge headers first, then conditionally parse based on status/content-type).

🤖 Prompt for AI Agents
In Frontend/src/services/aiApi.ts around lines 36 to 50, the code spreads
...options after setting headers which can overwrite the merged headers and it
unconditionally calls response.json() which will throw for 204/no-content or
non-JSON responses; fix by building a new init object where headers are merged
first (e.g., const mergedHeaders = { 'Content-Type': 'application/json',
...options.headers } and then const init = { ...options, headers: mergedHeaders
}) so headers aren't lost, and change the response parsing to only call
response.json() when response.status !== 204 and the Content-Type header
includes application/json (otherwise return null or response.text() as
appropriate); apply the same guarded parsing for the error branch when reading
errorData.
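
A sketch of the safer helper shape (the request<T> wrapper name and Options alias are illustrative, not the file's current API):

type Options = Omit<RequestInit, 'headers'> & { headers?: Record<string, string> };

async function request<T>(url: string, options: Options = {}): Promise<T | null> {
  const headers = { 'Content-Type': 'application/json', ...options.headers };
  const response = await fetch(url, { ...options, headers }); // spread first, merged headers last

  const isJson = (response.headers.get('content-type') ?? '').includes('application/json');

  if (!response.ok) {
    const errorData = (isJson ? await response.json().catch(() => ({})) : {}) as { detail?: string };
    throw new Error(errorData.detail || `HTTP error! status: ${response.status}`);
  }

  if (response.status === 204 || !isJson) return null; // nothing to parse for empty/non-JSON bodies
  return (await response.json()) as T;
}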

// Brand Dashboard API Service
// Handles all API calls to the backend for brand dashboard functionality

const API_BASE_URL = 'http://localhost:8000/api/brand';

⚠️ Potential issue | 🔴 Critical

Blocker: Hard-coded localhost backend URL
This service points every call to http://localhost:8000. Once the app runs in any non-local environment, all requests will fail (CORS or DNS) because the domain differs. Please derive the base URL from an environment variable or relative path before shipping.

🤖 Prompt for AI Agents
In Frontend/src/services/brandApi.ts around line 4 the base URL is hard-coded to
'http://localhost:8000/api/brand'; change it to read the base URL from an
environment variable (e.g. process.env.REACT_APP_API_BASE_URL or
import.meta.env.VITE_API_BASE_URL depending on the build system) and fall back
to a relative URL (e.g. window.location.origin + '/api/brand' or just
'/api/brand') so the app works in non-local environments and during local
development; update any docs/README and the environment (.env) files to define
the chosen variable.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
Frontend/src/services/brandApi.ts (1)

4-4: Blocker: Hard-coded localhost URL remains unresolved.

This service still points all calls to http://localhost:8000. The app will fail in any non-local environment due to CORS/DNS issues. Derive the base URL from an environment variable (e.g., import.meta.env.VITE_API_BASE_URL) or use a relative path.

Apply this diff:

-const API_BASE_URL = 'http://localhost:8000/api/brand';
+const API_BASE_URL = import.meta.env.VITE_API_BASE_URL 
+  ? `${import.meta.env.VITE_API_BASE_URL}/api/brand`
+  : '/api/brand';

Then add VITE_API_BASE_URL=http://localhost:8000 to your .env file for local development.

🧹 Nitpick comments (2)
Backend/app/services/ai_router.py (2)

18-24: Optional: Extract long error message.

The error message on Line 22 is clear but flagged by static analysis (TRY003). For consistency, consider extracting it to a module-level constant.

+GROQ_API_KEY_ERROR = "GROQ_API_KEY environment variable is required"
+
 class AIRouter:
     def __init__(self):
         """Initialize AI Router with Groq client"""
         self.groq_api_key = os.getenv("GROQ_API_KEY")
         if not self.groq_api_key:
-            raise ValueError("GROQ_API_KEY environment variable is required")
+            raise ValueError(GROQ_API_KEY_ERROR)

163-165: Minor: Redundant exception object in log message.

logger.exception automatically includes the traceback and exception details, making the f"...{e}" redundant (Ruff TRY401).

Apply this diff:

         except Exception as e:
-            logger.exception(f"Error processing query with AI Router: {e}")
+            logger.exception("Error processing query with AI Router")
             raise HTTPException(status_code=500, detail="AI processing error") from e
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5191523 and c6c5bc7.

📒 Files selected for processing (4)
  • Backend/app/schemas/schema.py (2 hunks)
  • Backend/app/services/ai_router.py (1 hunks)
  • Frontend/src/components/user-nav.tsx (2 hunks)
  • Frontend/src/services/brandApi.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • Frontend/src/components/user-nav.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
Backend/app/services/ai_router.py (1)
Backend/app/routes/ai_query.py (1)
  • get_route_info (205-222)
🪛 Ruff (0.13.3)
Backend/app/services/ai_router.py

22-22: Avoid specifying long messages outside the exception class

(TRY003)


162-162: Consider moving this statement to an else block

(TRY300)


164-164: Redundant exception object included in logging.exception call

(TRY401)

🔇 Additional comments (11)
Frontend/src/services/brandApi.ts (3)

72-122: LGTM: Request handling is now robust.

The headers merge order is correct, error handling safely parses JSON, and response parsing guards against 204/empty/non-JSON bodies. Well done addressing the previous feedback.


199-199: LGTM: Nullish check correctly allows zero.

Using != null properly handles legitimate 0 values for min_engagement. Good fix.


227-248: LGTM: Optional brandId now properly handled.

The method validates the UUID when provided and conditionally constructs the query string, avoiding brand_id=undefined in the URL. This addresses the previous concern.

Backend/app/services/ai_router.py (3)

146-154: LGTM: Async handling and model configuration correct.

The Groq call now runs in a threadpool via asyncio.to_thread, the model is sourced from GROQ_MODEL env var with a valid fallback, and response_format enforces JSON output. All previous concerns addressed.


167-194: LGTM: Response enhancement is thorough.

The method properly validates route existence, injects brand_id when required, ensures type consistency, and adds useful metadata. Well-structured.


220-249: LGTM: Robust JSON parsing with multiple fallback strategies.

The four-tier parsing approach (direct, cleaned, regex-extracted, keyword-based fallback) is excellent defensive programming. This should handle most edge cases gracefully.

Backend/app/schemas/schema.py (5)

1-6: LGTM: Pydantic v2 config correctly implemented.

Using ORMBaseModel with model_config = ConfigDict(from_attributes=True) is the correct Pydantic v2 pattern. This cleanly addresses the previous v1 class Config issue and provides a DRY base for all ORM-backed schemas.

Based on learnings.


64-87: LGTM: BrandProfile schemas well-structured.

The Create/Update/Response schema pattern is consistent, and BrandProfileResponse correctly extends ORMBaseModel. Field types are appropriate.


91-138: LGTM: Metrics, Contract, and Match schemas consistent.

All response schemas (CampaignMetricsResponse, ContractResponse, CreatorMatchResponse) correctly extend ORMBaseModel. The pattern is consistent across the file.


142-165: LGTM: Analytics schemas correctly use BaseModel.

These analytics response schemas appropriately extend BaseModel directly rather than ORMBaseModel, since they represent computed/aggregated data rather than direct ORM entities.


173-227: LGTM: Application and Payment schemas properly structured.

SponsorshipApplicationResponse and PaymentResponse correctly extend ORMBaseModel for ORM mapping, while request/update schemas use BaseModel. The organization is clean and consistent.
