Feature/int testjan5#1513
Walkthrough
Refactors cache keys from organization-scoped to tenant-scoped across many services and helpers, and replaces raw SQL with Sequelize joins.
Sequence Diagram(s)
sequenceDiagram
participant Client
participant API as Mentoring API
participant Cache as Redis
participant DB as Postgres
participant UserSvc as User Service
rect rgb(232,245,233)
Client->>API: HTTP request (e.g., GET profile/session)
note right of API: Prefer tenant-scoped cache keys
API->>Cache: GET tenant:{tenantCode}:user:{userId}
alt cache hit
Cache-->>API: cached profile
API->>Client: 200 + cached response
else cache miss
API->>DB: Sequelize query / view (sessions / extensions)
DB-->>API: rows
API->>UserSvc: getUserDetailedListUsingCache(userIds, tenantCode)
UserSvc->>Cache: MGET tenant:{tenantCode}:user:{ids...}
Cache-->>UserSvc: hits / misses
alt misses exist
UserSvc->>DB: fetch missing users
DB-->>UserSvc: user rows
UserSvc->>Cache: SET tenant:{tenantCode}:user:{id} (sanitized)
end
UserSvc-->>API: full user details
API->>Client: 200 + merged response
end
end
rect rgb(255,243,224)
note over API,Cache: On updates/invalidation
API->>Cache: DEL tenant:{tenantCode}:user:{id}
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks: ❌ failed (1 warning, 1 inconclusive) · ✅ passed (1)
Actionable comments posted: 29
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Fix all issues with AI Agents 🤖
In @.circleci/config.yml:
- Around line 142-154: The seeder wait loop using the MENTORING container and
looking for "SEEDER_SUCCESS" currently times out silently; modify the loop (the
run step that sets MENTORING and greps for SEEDER_SUCCESS) to detect when the
loop exits without finding the success string and then fail the job (exit
non‑zero) with a clear error message. Concretely, after the for loop that checks
docker logs "$MENTORING" | grep -q "SEEDER_SUCCESS", test whether the success
was seen (e.g., via a flag or checking the last grep exit/status) and if not,
echo an explanatory error and run exit 1 so the CI build fails.
In @src/database/queries/sessions.js:
- Around line 166-193: findAllMigrations builds SQL by interpolating filter keys
directly into the query (via `"${key}"`) which risks SQL injection and also
ignores tenant isolation; fix by validating each key against a whitelist of
allowed column names before using it, only add clauses for validated keys (e.g.,
iterate filter keys in findAllMigrations and check against an approvedColumns
set), reject or ignore unknown keys, continue to use parameterized values via
replacements and Sequelize.query with QueryTypes.SELECT, and enforce tenant
isolation by requiring/adding a tenant_code clause into whereClauses and
replacements (or throw if tenant_code missing) so tenant filtering is always
applied.
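The whitelist-plus-replacements approach described above can be sketched as follows. This is a hypothetical illustration: the column names in `APPROVED_COLUMNS` and the returned shape are assumptions, not the project's actual `findAllMigrations` code.

```javascript
// Only column names in this set may appear in the WHERE clause; anything else
// from the caller-supplied filter is ignored rather than interpolated into SQL.
const APPROVED_COLUMNS = new Set(['status', 'mentor_id', 'type', 'created_by'])

function buildWhereClause(filter, tenantCode) {
	// Tenant isolation is mandatory: fail fast when tenant_code is missing.
	if (!tenantCode) throw new Error('tenant_code is required')
	const whereClauses = ['tenant_code = :tenant_code']
	const replacements = { tenant_code: tenantCode }
	for (const [key, value] of Object.entries(filter)) {
		if (!APPROVED_COLUMNS.has(key)) continue // reject unknown keys
		whereClauses.push(`"${key}" = :${key}`) // key is whitelisted, value is parameterized
		replacements[key] = value
	}
	return { where: whereClauses.join(' AND '), replacements }
}
```

The resulting `where` string and `replacements` object would then be passed to `Sequelize.query(...)` with `QueryTypes.SELECT`, keeping all values parameterized.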
In @src/helpers/getOrganizationList.js:
- Around line 52-58: The function getOrganizationList is missing a return and
has invalid error syntax: replace the erroneous "throw error(error)" with "throw
error" (or rethrow as needed) inside the catch block, and ensure you return the
accumulated orgData (e.g., add "return orgData") after pushing
organizationDetails so callers receive the fetched list; locate the orgData and
organizationDetails usage in getOrganizationList to apply these changes.
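The corrected shape of the helper might look like the sketch below. `fetchOrganizationDetails` is a stand-in parameter for however the real helper fetches each organization, not the actual implementation.

```javascript
// Sketch of getOrganizationList with the two fixes applied: a `return orgData`
// so callers receive the list, and a plain `throw error` instead of the
// invalid `throw error(error)`.
async function getOrganizationList(organizationIds, fetchOrganizationDetails) {
	const orgData = []
	try {
		for (const id of organizationIds) {
			const organizationDetails = await fetchOrganizationDetails(id)
			orgData.push(organizationDetails)
		}
		return orgData // previously missing, so callers received undefined
	} catch (error) {
		throw error // previously `throw error(error)`, which is invalid syntax at runtime
	}
}
```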
In @src/integration-tests-new/admin/admin.spec.js:
- Line 16: The test file admin.spec.js isn't loading the jest-json-schema
matcher because integrationJest.config.js uses testMatch
'<rootDir>/integration-tests-new/**/*.specs.js' (plural), so
src/setupFileAfterEnv.js (which registers expect.extend(matchers)) isn't run and
toMatchSchema is undefined; fix by either renaming admin.spec.js to
admin.specs.js or update integrationJest.config.js's testMatch to include both
patterns (e.g., '<rootDir>/integration-tests-new/**/*.spec*.js') so
setupFileAfterEnv.js runs and toMatchSchema is available.
- Line 11: Replace the hardcoded 'string' auth token with the TOKEN constant:
wherever the tests call req.set('x-auth-token', 'string') (in the five test
cases using req.set), change the value to TOKEN so the requests use the defined
TOKEN constant from line 3; update all occurrences (lines around the five test
cases) to ensure consistent authentication.
In @src/integration-tests-new/cloud-services/cloud-services.spec.js:
- Line 3: The TOKEN constant is declared but not used; replace the hardcoded
header values where tests set 'x-auth-token' (the two occurrences currently
using the literal 'string') to use the TOKEN variable instead so requests use
the actual test bearer token; update the header assignments that reference
'x-auth-token' in the cloud-services.spec.js tests to use TOKEN.
In @src/integration-tests-new/commonTests.js:
- Around line 62-64: The catch blocks are silently swallowing exceptions
(logging with console.error(error) then returning undefined), causing tests to
continue with undefined results; update each relevant catch (the top-level
helper at lines ~62-64 and the functions mentorLogIn and adminLogin) to re-throw
the caught error after logging (or return a rejected Promise) so the test runner
receives the failure—i.e., keep the console.error(error) call but add throw
error (or return Promise.reject(error)) at the end of each catch block.
- Around line 207-212: The module fails to export waitForProfileExtension even
though it’s used by logIn, mentorLogIn, and adminLogin; update the exported
symbols in module.exports to include waitForProfileExtension so external tests
can call it directly. Locate the existing module.exports block (exporting logIn,
logError, mentorLogIn, adminLogin) and add waitForProfileExtension to that list,
preserving existing order and formatting.
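The log-and-rethrow fix requested for these helpers can be sketched as below; the real `logIn`/`mentorLogIn`/`adminLogin` bodies will differ, so `doRequest` is a hypothetical stand-in for the actual login request.

```javascript
// Keep the console.error for diagnostics, but rethrow so Jest sees the
// failure instead of the helper silently resolving to undefined.
async function logIn(doRequest) {
	try {
		return await doRequest()
	} catch (error) {
		console.error(error)
		throw error // previously the catch block swallowed the error
	}
}
```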
In @src/integration-tests-new/config/config.spec.js:
- Line 16: Add the missing Jest setup so the custom matcher toMatchSchema is
registered: update jest.config.js to include setupFilesAfterEnv with the path to
the existing setup file (src/setupFileAfterEnv.js) so tests invoking
expect(...).toMatchSchema(...) succeed; ensure the entry is an array with
'<rootDir>/src/setupFileAfterEnv.js' so the jest-json-schema matcher is loaded
before tests run.
- Around line 9-17: The test declares a TOKEN but never uses it, so either
include the Authorization header on the request or remove the unused TOKEN; if
auth is required, modify the request built in the test (the call that starts
with request(BASE).get(url) in the test named 'should return 200') to set the
Authorization header with the TOKEN (e.g., set 'Authorization' to 'Bearer
<TOKEN>') before awaiting the response, otherwise delete the TOKEN declaration
to avoid dead code and keep the schema assertion using
schemas['GET_mentoring_v1_config_get'] unchanged.
In @src/integration-tests-new/connections/connections.specs.js:
- Line 16: Update the three Jest test titles that currently read "should return
200" to reflect the asserted status code 201; specifically change the test(...)
string values for the tests that call expect(res.status).toBe(201) so their
descriptions read "should return 201" to match the expected behavior in those
test cases.
In @src/integration-tests-new/default-rule/default-rule.specs.js:
- Around line 53-55: Remove the duplicated status assertion in the test: keep
the first expect(res.status).toBe(201) and the createdRuleId =
res.body.result.id assignment, and delete the repeated
expect(res.status).toBe(201) that appears after createdRuleId; this ensures the
test only asserts the 201 status once while preserving the saved ID.
In @src/integration-tests-new/default-rule/schemas/default-rule.schemas.json:
- Around line 1-355: The GET response schema GET_mentoring_v1_default-rule_read
has two wrong field names; update the object inside result.data items to use
operator (replacing matching_operator) and requester_roles_config (replacing
role_config), and ensure requester_roles_config is defined the same as in
POST_mentoring_v1_default-rule_create /
PATCH_mentoring_v1_default-rule_update_id (an object with a property "exclude"
of type boolean) so the GET schema matches the actual API/db model fields.
In @src/integration-tests-new/entity-type/schemas/entity-type.schemas.json:
- Line 204: The POST_mentoring_v1_entity-type_update_id schema currently defines
"parent_id" as only "null", which is inconsistent with the
GET_mentoring_v1_entity-type_read schema that allows ["null","number"]; update
the POST_mentoring_v1_entity-type_update_id definition for the "parent_id"
property to allow both null and number (i.e., the array ["null","number"]) so
both schemas align and validation won't fail when a numeric parent_id is
returned.
- Around line 52-54: Remove the unintended placeholder property "field_0" from
the JSON schema (entity-type.schemas.json) if it's not part of the real API
contract; either delete the "field_0" object entirely or replace it with the
correct field name/type if a real property was intended, then update any
fixtures/tests that reference "field_0" so schema validation and tests reflect
the true API shape.
In @src/integration-tests-new/entity/entity.specs.js:
- Around line 191-194: The DELETE request building the `url` and `req` never sets the `org-id` header like the other tests; update the request setup (the `req = request(BASE).delete(url)` block) to include the org header (e.g., call `req.set('org-id', adminDetails.orgId)`) alongside the existing `req.set('x-auth-token', adminDetails.token)` before awaiting `req`.
In @src/integration-tests-new/feedback/feedback.spec.js:
- Line 3: The tests declare TOKEN but never use it; replace the hardcoded header
value 'string' with the TOKEN variable in the requests that set the x-auth-token
header (update both occurrences where x-auth-token is currently 'string', e.g.,
in the tests around the request builders in feedback.spec.js) so the tests use
process.env.TEST_BEARER_TOKEN || 'test-token' instead of the literal string.
In @src/integration-tests-new/feedback/schemas/feedback.schemas.json:
- Line 89: The OpenAPI test schema for the endpoint
POST_/mentoring/v1/feedback/submit/{SessionId} is empty so responses aren’t
validated; add a proper JSON schema under that key describing the expected
response body (e.g., success flag, message, and any returned feedbackId or error
structure) and include expected HTTP status mappings (200/201 and error
responses) so the integration tests can validate responses for
POST_/mentoring/v1/feedback/submit/{SessionId}.
In @src/integration-tests-new/form/schemas/form.schemas.json:
- Around line 170-172: The update schema in form.schemas.json uses the
snake_case property "meeting_platform" which is inconsistent with the
create/read schemas that use camelCase "meetingPlatform"; change the property
name in the update endpoint schema from "meeting_platform" to "meetingPlatform"
and ensure any references/consumers and tests that validate the update payload
are updated to use the camelCase property so all create/read/update schemas
match.
In @src/integration-tests-new/issues/issues.spec.js:
- Line 12: The test is using a hardcoded header value; replace the literal
'string' with the TOKEN variable (used to authenticate) when setting the header
on the request (where req = req.set('x-auth-token', ...)); ensure TOKEN is
imported/defined at the top of the spec and used here so the test uses the
proper auth token rather than a hardcoded value.
- Line 15: The test fixture/object uses a misspelled key "descriptaion" which
will trigger API validation errors; locate the test data in
src/integration-tests-new/issues/issues.spec.js where the object contains
descriptaion: 'string' and rename the key to description: 'string' (update any
other occurrences of "descriptaion" in that test file to "description" so the
payload matches the API schema).
In @src/integration-tests-new/mentees/mentees.specs.js:
- Line 102: Replace incorrect property access userDetails.id with
userDetails.userId wherever mentorId is constructed (e.g., the URL string in the
mentees list calls inside mentees.specs.js). Update both occurrences (the
template literal that sets mentorId at the URL around the lines flagged) so
mentorId=${userDetails.userId} to pass the correct ID returned by logIn().
In @src/integration-tests-new/modules/modules.spec.js:
- Line 37: The test is using literal path placeholders like the const url =
`/mentoring/v1/modules/update/{id}` which never get substituted; change the
tests to use a real ID variable (e.g., build the URL with a template literal
using an id variable) by creating or retrieving a module ID in a setup step
(beforeAll/beforeEach) or using a known seeded test ID, then use that id in the
URL (replace `{id}` with `${id}` or string concatenation) for the requests in
the tests referencing the update/delete endpoints (the constants/variables
around the `url` declarations in modules.spec.js).
- Line 12: Replace the hard-coded header value 'string' used in
req.set('x-auth-token', 'string') with a real token: either reference the shared
TOKEN variable (e.g., use TOKEN instead of the literal) or call the
authentication helper/login routine in your test setup (e.g., run the
login/getAuthToken function in beforeAll and store the result) and then use that
token in every req.set('x-auth-token', ...) occurrence (this applies to the
instances at lines 12, 28, 39, 55, 66, and 79).
- Line 19: Change the failing test expectation in the create test inside
modules.spec.js from expecting status 200 to 201 (i.e., update the
expect(res.status).toBe(200) to expect 201) so it matches the API's POST create
response; optionally, adjust the update endpoint implementation in
src/services/modules.js (the update handler that currently returns 201) to
return 200 for updates to follow REST conventions.
- Line 3: The TOKEN constant declared in modules.spec.js is unused; either
remove the declaration or wire it into the tests' auth headers. To fix, either
delete the TOKEN line entirely, or replace any hard-coded auth values in the
test requests with the TOKEN constant (e.g., set the Authorization/Bearer header
from TOKEN) and apply the same change to the other test cases noted (the
occurrences referenced in the review at lines 28, 39, 55, 66, 79) so all tests
consistently use the variable.
In @src/integration-tests-new/modules/schemas/modules.schemas.json:
- Around line 15-17: The create response uses an uppercase property "Id" causing
mismatch with the database and update response; in the create method in
src/services/modules.js change the returned result object key from "Id" to "id"
(so result: { id: modules.id, code: modules.code, status: modules.status }), and
update the modules.schemas.json property name from "Id" to "id" so the JSON
schema and the service create/update responses consistently use lowercase id.
In @src/integration-tests-new/org-admin/org-admin.spec.js:
- Around line 106-118: The test in the "POST
/mentoring/v1/org-admin/uploadSampleCSV" block is sending a JSON string with
Content-Type application/json; change it to perform a multipart file upload:
build the request to use multipart/form-data (remove the explicit JSON
Content-Type), use supertest's file upload method (e.g., call attach on the req
object) to attach a real CSV payload or Buffer under the expected form field
name (instead of req.send('string')), keep the x-auth-token header, await the
response and then validate res.status and res.body against the existing schema;
update the test code that initializes req (the request variable) and the line
that currently does req.send('string').set('Content-Type', 'application/json')
to use attach(...) for the CSV file.
- Line 12: Replace the hardcoded header value 'string' with the TOKEN constant
wherever tests call req.set('x-auth-token', 'string') in org-admin.spec.js;
update each occurrence to use req.set('x-auth-token', TOKEN) so the tests use
the shared TOKEN variable for authentication.
- Around line 1-3: The TOKEN constant is declared but never used; update the
tests in this file to use the TOKEN value for authentication instead of
hardcoded literals like 'string' or 'bearer {{token}}'. Locate occurrences where
requests set Authorization headers or bodies with fixed tokens (search for
'string', 'bearer {{token}}', or direct header assignments) and replace them to
inject TOKEN (e.g., `Authorization: 'Bearer ' + TOKEN`) or use a shared helper
that builds the auth header from TOKEN so tests respect the TEST_BEARER_TOKEN
env var.
- Around line 66-74: The test currently sends response-shaped fields in the
.send() body; replace that payload with the actual request parameters expected
by the endpoint: include delta_organization_ids (array of strings or ids),
organization_id (string), action (string), and tenant_code (string) in the
.send() call used in org-admin.spec.js so the test posts the correct request
shape to the API.
- Line 124: The header set call uses inconsistent casing and a placeholder
value; update the req.set('X-auth-token', 'bearer {{token}}') invocation to use
the same lowercase header key ('x-auth-token') and pass the real token variable
with correct Bearer scheme (e.g., convert the placeholder to "Bearer
<tokenVariable>") so authentication matches the other tests and uses the actual
token.
- Line 145: Replace the hardcoded placeholder 'bearer {{token}}' in the request
setup with the actual TOKEN constant; update the call that sets the header (the
req.set invocation) to use the TOKEN variable (e.g., "Bearer " + TOKEN or
template `Bearer ${TOKEN}`) so the request uses the real auth token.
In @src/integration-tests-new/org-admin/schemas/org-admin.schemas.json:
- Around line 390-395: The two schemas POST_mentoring_v1_org-admin_updateTheme
and GET_mentoring_v1_org-admin_themeDetails are placeholders that only declare
"type": "string" and thus won't validate the actual response objects; replace
each placeholder with a full JSON Schema object that matches the real API
response shape (include "type": "object", required properties, property types,
nested objects/arrays, and any enums/formats), e.g., for updateTheme validate
the request/response fields returned after updating (id, name, colors,
updatedAt, etc.), and for themeDetails validate the detailed theme object
returned (id, name, primary/secondary colors, metadata, createdAt/updatedAt);
ensure you reference and use the exact property names used by the API, add
schemas for any nested structures, and include examples or nullable flags where
applicable so the tests can properly validate returned payloads.
- Around line 328-389: The POST_mentoring_v1_org-admin_uploadSampleCSV entry
currently contains an inline sample response instead of a JSON Schema; replace
that sample block with a proper JSON Schema for the response object: make the
top-level schema for "POST_mentoring_v1_org-admin_uploadSampleCSV" be type
"object" with properties "responseCode" (string, e.g., enum ["OK"]), "message"
(string), "result" (array), and "meta" (object) where "meta.formsVersion" is an
array of objects with properties "id" (number), "type" (string), and "version"
(number), plus "correlation" (string) and "meetingPlatform" (string); remove the
invalid root-level "type": "string" and ensure all sample values are converted
to appropriate types and optional enums where applicable so the schema validates
instead of embedding example data.
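A schema following the shape described above might look like this JSON Schema fragment; the property names are taken from the comment itself, and anything beyond them (e.g., the exact enum values) is an assumption to be checked against the real API response.

```json
{
	"POST_mentoring_v1_org-admin_uploadSampleCSV": {
		"type": "object",
		"properties": {
			"responseCode": { "type": "string", "enum": ["OK"] },
			"message": { "type": "string" },
			"result": { "type": "array" },
			"meta": {
				"type": "object",
				"properties": {
					"formsVersion": {
						"type": "array",
						"items": {
							"type": "object",
							"properties": {
								"id": { "type": "number" },
								"type": { "type": "string" },
								"version": { "type": "number" }
							}
						}
					},
					"correlation": { "type": "string" },
					"meetingPlatform": { "type": "string" }
				}
			}
		}
	}
}
```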
In @src/integration-tests-new/permissions/permissions.spec.js:
- Line 41: The tests use a literal URL placeholder
`/mentoring/v1/permissions/update/{id}` (and
`/mentoring/v1/permissions/delete/{id}`) which causes 404s; fix by creating a
permission in a beforeAll hook (call the POST `/mentoring/v1/permissions/create`
and capture createRes.body.result.id into a permissionId variable), use template
interpolation to build the update and delete URLs
(`/mentoring/v1/permissions/update/${permissionId}` etc.) inside the tests, and
clean up in afterAll by calling DELETE
`/mentoring/v1/permissions/delete/${permissionId}`; update the test suites that
reference the literal `{id}` (the blocks around the tests at lines ~41 and ~63)
to use this pattern.
- Line 12: Replace the hardcoded auth token string used in tests with the shared
TOKEN constant: find the req.set('x-auth-token', 'string') calls (e.g., in
permissions.spec.js where TOKEN is defined at the top) and change them to use
TOKEN instead; update every occurrence mentioned in the review (lines with
req.set in the tests around 12, 32, 43, 65, 78) so all requests use the TOKEN
variable for authentication.
- Around line 8-37: The tests in the "POST /mentoring/v1/permissions/create"
suite use static placeholder values causing conflicts and DB pollution; modify
the test body to generate unique data (e.g., append Date.now() or a UUID to code
and api_path and use realistic fields like module, request_type array and
status), capture the created resource id(s) from the create response, push them
into a shared createdPermissionIds array, and add an afterAll (or afterEach)
teardown that iterates createdPermissionIds and issues authenticated DELETE
requests to `/mentoring/v1/permissions/delete/{id}` to remove test data; update
both the successful 201 test and the invalid-body test to use the new unique
test payloads and ensure TOKEN/x-auth-token usage is consistent with other tests
while leaving the schema assertion against
schemas['POST_mentoring_v1_permissions_create'] intact.
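The unique-payload idea above can be sketched as a small factory. The field names mirror those mentioned in the comment (`code`, `module`, `request_type`, `api_path`, `status`) but their exact values are illustrative assumptions, not the endpoint's real contract.

```javascript
// Generate a collision-free permission payload per test run so repeated CI
// runs don't conflict on unique columns or pollute the database.
function buildPermissionPayload() {
	const suffix = Date.now() + '-' + Math.random().toString(36).slice(2, 8)
	return {
		code: `test_permission_${suffix}`,
		module: 'sessions',
		request_type: ['POST'],
		api_path: `/mentoring/v1/test/${suffix}`,
		status: 'ACTIVE',
	}
}
```

Each created id returned by the API would then be pushed into a shared `createdPermissionIds` array and deleted in `afterAll`.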
In @src/integration-tests-new/permissions/schemas/permissions.schemas.json:
- Line 15: The create and update permission schemas use "Id" (capital I) while
the list schema uses "id" (lowercase), causing validation mismatches; update all
schema definitions and any references to use the consistent lowercase "id"
(replace "Id" → "id" in the create and update schemas and the other occurrences
noted), and ensure any tests/fixtures or schema references that expect "Id" are
updated to the lowercase field name as well.
In @src/integration-tests-new/profile/schemas/profile.schemas.json:
- Around line 146-162: The meta object in the POST_mentoring_v1_profile_create
schema uses snake_case "meeting_platform" which is inconsistent with the API's
camelCase "meetingPlatform"; update the property name inside the "meta" object
from "meeting_platform" to "meetingPlatform" (keeping it alongside
"formsVersion" and "correlation") and ensure any other occurrences in the same
schema or related profile schemas are renamed to "meetingPlatform" so the JSON
schema matches the actual API response format.
In @src/integration-tests-new/question-set/question-set.spec.js:
- Line 3: TOKEN is defined but unused; replace the hardcoded auth string with
the TOKEN variable wherever the Authorization header is set in this spec
(replace occurrences of 'string' used in header setup with TOKEN), e.g., update
the .set('Authorization', `Bearer string`) calls in the test requests to use
TOKEN instead so they read .set('Authorization', `Bearer ${TOKEN}`) (apply to
all three occurrences).
- Line 35: Tests use hardcoded ID `1` in endpoints (e.g., the `url =
`/mentoring/v1/question-set/update/1`` line), which makes them brittle; instead
create a question-set in a shared setup (e.g., a beforeAll hook that calls the
create endpoint), capture the returned ID, and replace hardcoded `.../update/1`,
`.../read/1`, etc. with template URLs using that created ID; alternatively use a
known seeded fixture ID or add explicit 404-tolerant assertions if the
environment is dynamic—apply this change to all tests referencing the hardcoded
IDs (update, read, delete).
In @src/integration-tests-new/question-set/schemas/question-set.schemas.json:
- Around line 35-40: The schema incorrectly restricts "meta" and "deleted_at" to
only null; update their definitions so they accept null OR a value (e.g., change
the "type" to a union like ["null","object"] for "meta" and ["null","string"] or
appropriate type for "deleted_at") or remove them from "required" so they are
optional; apply the same change to the other occurrences noted (around the
blocks referenced at lines 95-100 and 149-160) to match the intended
"optional/nullable" semantics.
- Around line 55-56: Rename the `meeting_platform` properties in the create and
update JSON schemas to `meetingPlatform` so they match the read schema and
actual API responses; update both occurrences (the create schema at the first
occurrence and the update schema at the second occurrence) to use camelCase
`meetingPlatform` to ensure validation succeeds against responses produced by
src/helpers/responses.js.
In @src/integration-tests-new/questions/questions.spec.js:
- Around line 2-3: The tests define TOKEN but still use the hardcoded header
value 'string'; update the request headers in the tests that set "x-auth-token"
(the three places using the literal 'string' in the spec) to use the TOKEN
constant instead so the environment-provided TEST_BEARER_TOKEN (or fallback) is
used; locate the request calls in the test file where "x-auth-token" is set to
'string' and replace that literal with TOKEN.
In @src/integration-tests-new/questions/schemas/questions.schemas.json:
- Around line 26-28: The schema currently restricts fields like "options",
"category", "meta", and "deleted_at" to only "null", causing valid non-null API
responses to be rejected; update each field's JSON Schema type to be a union
allowing null and the expected type (e.g., make "options" accept
["null","array"], "category" accept ["null","object"], "meta" accept
["null","object" or appropriate type], and "deleted_at" accept ["null","string"]
or timestamp format) so both null and real values validate correctly; apply the
same union-type change for the other occurrences noted (lines around the second
and third occurrences).
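The union-type change described above would look like this fragment (types per field are as suggested in the comment; verify them against actual responses before committing):

```json
{
	"options": { "type": ["null", "array"] },
	"category": { "type": ["null", "object"] },
	"meta": { "type": ["null", "object"] },
	"deleted_at": { "type": ["null", "string"] }
}
```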
In @src/integration-tests-new/report-mapping/report-mapping.spec.js:
- Line 12: The tests are setting the auth header to the literal string 'bearer
{{token}}' instead of using the TOKEN constant; update every
req.set('x-auth-token', ...) call in this spec (the occurrences currently at the
lines noted in the comment) to build the header using the TOKEN variable defined
at the top of the file so the real token is sent (i.e., replace the '{{token}}'
placeholder with the TOKEN constant) — ensure all five occurrences are changed.
- Line 38: Tests in report-mapping.spec.js are using hardcoded query params
(`code=total_number_of_sessions_attended`) and hardcoded IDs (`id=16`) which
couples tests to pre-seeded DB; change the tests to create the required
report-mapping resource in a setup hook (beforeAll or beforeEach) via the same
API (POST to /mentoring/v1/report-mapping), capture the returned resource
identifier and code from the create response, use those dynamic values in the
subsequent GET (/read?code=...), PUT (update by id), and DELETE requests, and
remove the resource in an afterAll/afterEach teardown to avoid ordering
dependencies and ensure tests run in fresh environments.
In @src/integration-tests-new/report-mapping/schemas/report-mapping.schemas.json:
- Line 30: The schema's deleted_at property is currently typed as "null" which
disallows timestamp strings; update each deleted_at property in the JSON schema
to accept both null and string by replacing "type": "null" with "type": ["null",
"string"] for all occurrences (e.g., the deleted_at entries referenced in the
report-mapping schemas).
- Around line 2-64: The schema for POST_mentoring_v1_report-mapping_create lacks
"required" arrays so validation can pass with missing fields; add a top-level
"required" array containing at least ["responseCode","message","result","meta"]
for the POST_mentoring_v1_report-mapping_create definition and add nested
"required" arrays inside "result" (e.g.,
["id","report_code","role_title","created_at","updated_at"]) and inside
meta.formsVersion items (["id","type","version"]) to ensure critical fields are
enforced; apply the same pattern to the other endpoint schemas referenced (lines
65-127, 128-190, 191-236) using their respective top-level names and key nested
objects.
In @src/integration-tests-new/report-queries/report-queries.spec.js:
- Line 12: Replace the hardcoded placeholder 'bearer {{token}}' used in the
x-auth-token header with the actual TOKEN variable defined in the test file:
locate each call to req.set('x-auth-token', 'bearer {{token}}') (appearing in
the report-queries.spec.js tests) and change it to use the TOKEN variable so the
header becomes the real bearer token value (use the language's string
interpolation or concatenation method to prepend "bearer " to TOKEN); update all
five occurrences where req.set is called with that placeholder.
- Around line 44-49: In src/services/report-queries.js change the HTTP status
returned by updateQuery, getQuery, and deleteQuery from httpStatusCode.created
to httpStatusCode.ok so updates/reads/deletes return 200 instead of 201; leave
createQuery as httpStatusCode.created. Also fix the typo constant/message
REPORT_QUERY_UPATED_SUCCESSFULLY to REPORT_QUERY_UPDATED_SUCCESSFULLY and update
any usages in the same file to reference the corrected identifier. Ensure the
three methods (updateQuery, getQuery, deleteQuery) return the ok status and the
corrected success message where appropriate.
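An illustrative sketch of the status-code and typo fix follows. The `httpStatusCode` object and response shape here are stand-ins for the service's real helpers, not its actual code.

```javascript
// Stand-in for the project's status-code constants.
const httpStatusCode = { ok: 200, created: 201 }

function updateQuery() {
	return {
		statusCode: httpStatusCode.ok, // was httpStatusCode.created (201)
		message: 'REPORT_QUERY_UPDATED_SUCCESSFULLY', // typo "UPATED" corrected
	}
}

function createQuery() {
	// Only create keeps 201 Created.
	return { statusCode: httpStatusCode.created, message: 'REPORT_QUERY_CREATED_SUCCESSFULLY' }
}
```

`getQuery` and `deleteQuery` would follow the same pattern as `updateQuery`, returning `httpStatusCode.ok`.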
In @src/integration-tests-new/report-type/report-type.spec.js:
- Around line 8-23: In the POST /mentoring/v1/report-type/create test, replace
the hardcoded template string 'bearer {{token}}' used in the x-auth-token header
with the shared TOKEN constant and the same header format/casing used in the
other tests (e.g., "Bearer <TOKEN>"); update the line that sets x-auth-token so
it uses TOKEN (matching the usage at the other test locations) to ensure
requests are properly authenticated.
- Around line 66-77: The test uses a literal template string 'bearer {{token}}'
for the x-auth-token header which will fail authentication; replace that header
value with the actual TOKEN constant used elsewhere (use the same casing/format
as other tests, e.g. "Bearer <TOKEN>") in the DELETE
/mentoring/v1/report-type/delete test (look for the TOKEN constant and the
request variable used in this test) so the request sets the real token before
awaiting the response.
In @src/integration-tests-new/reports/reports.spec.js:
- Line 12: Replace the hardcoded header string with the TOKEN constant: wherever
you call req.set('x-auth-token', 'bearer {{token}}') (occurrences near lines 12,
55, 66, 79, 102, 115, 138) use the TOKEN variable instead (e.g., pass the
interpolated bearer token value using TOKEN) so the request uses the real
environment token; update each req.set call to build the header from TOKEN
rather than the literal placeholder.
- Around line 86-87: The tests in reports.spec.js use inconsistent types for
organization_id between the create test (uses 'string') and the update test
(uses 1); confirm the API schema's expected type and make both tests use the
same type. Locate the payloads in the create and update test cases (look for the
objects containing config and organization_id) and change the incorrect value so
both tests send either a string or a number that matches the API schema,
updating any assertion expectations if necessary.
- Around line 63-68: The test in reports.spec.js is asserting the wrong status
code for a GET read endpoint: in the test block named "should return 201" change
the expectation to match the service (http 200) by updating the test title and
the assertion in the test function that currently calls
expect(res.status).toBe(201) to expect(res.status).toBe(200); keep the same
request setup (URL `/mentoring/v1/reports/read?id=1` and header) but ensure the
assertion and description reflect HTTP 200 OK.
In @src/integration-tests-new/reports/schemas/reports.schemas.json:
- Around line 70-84: The schemas for the field "defaultValues" are inconsistent
between the create endpoint (currently an array of objects with "label" and
"value") and the reportData endpoint (currently an array of strings); update
reports.schemas.json so both endpoints use the same schema that matches the real
API response — either change the reportData "defaultValues" to an array of
objects with
"items":{"type":"object","properties":{"label":{"type":"string"},"value":{"type":"string"}},"required":["label","value"]}
or change the create endpoint to an array of strings, and then adjust any
related examples/tests to reflect the chosen shape.
- Line 244: The update endpoint schema currently declares the "config" field as
"type": "string"; change it to "type": "object" and match the same object schema
used by the create endpoint (and read/reportData schemas) so it accepts JSON
objects consistent with the DB JSONB storage—i.e., replace the string type with
the identical object structure used for "config" in the create schema.
- Around line 167-170: The read endpoint schema currently defines "config" as an
object with empty "properties", which provides no validation; update the
"config" definition in reports.schemas.json (the read endpoint schema) to either
mirror the detailed config structure used in POST_mentoring_v1_reports_create
(copy the relevant property definitions and types) or make it permissive by
adding "additionalProperties": true (or using a schema like "type": "object",
"additionalProperties": true) so the config is meaningfully validated or
explicitly allowed to vary.
In @src/integration-tests-new/requestSessions/requestSessions.specs.js:
- Line 1: The inline comment for jest.setTimeout is inconsistent with the value;
update the comment or the timeout to match: either change the comment to "Set
default timeout to 60 seconds" to reflect jest.setTimeout(60000) or change the
call to jest.setTimeout(30000) if the intended timeout is 30 seconds; locate the
call to jest.setTimeout in the requestSessions.specs.js file and make the chosen
correction so the comment and value agree.
In
@src/integration-tests-new/requestSessions/schemas/requestSessions.schemas.json:
- Around line 20-22: The schemas for the `id` field are inconsistent between
endpoints: `POST_mentoring_v1_requestSessions_create` defines `"id": { "type":
"number" }` while `GET_mentoring_v1_requestSessions_list` defines it as `"type":
"string"`; determine the actual runtime type returned by the API (numeric or
string) and update both schema entries to the same type (change the `id`
definition in `POST_mentoring_v1_requestSessions_create` and/or
`GET_mentoring_v1_requestSessions_list` to match the API), ensuring any related
test expectations or serializers/deserializers that reference `id` are adjusted
accordingly.
In @src/integration-tests-new/role-extension/role-extension.spec.js:
- Around line 50-51: The test is setting the request header to the literal
placeholder 'bearer {{token}}' which causes auth to fail; update the header
assignment on the req object to use the real test token (e.g., the TOKEN
constant used in other tests or the string 'test-token') so the x-auth-token
header is 'bearer ' + TOKEN (or 'bearer test-token') before awaiting the
response.
- Around line 81-82: The tests set the header using the placeholder string
'bearer {{token}}' which doesn't send a real token; replace the placeholder in
the two occurrences of req.set('x-auth-token', 'bearer {{token}}') with the
actual token variable used in the test (e.g., token or authToken) and format it
properly as "Bearer <token>" (capital B and a space) so the request header is
req.set('x-auth-token', `Bearer ${tokenVariable}`); ensure the token variable is
populated earlier in the spec before these requests run.
- Around line 63-64: The test uses a placeholder string in the header assignment
(req = req.set('x-auth-token', 'bearer {{token}}')) instead of the actual token;
replace that call to set the header with the real token variable used in the
test (e.g., token or authToken) and format it as "Bearer " + token (proper
capitalization and no braces), and remove the extraneous no-op line (req = req)
so the request uses the real auth header when sent.
- Around line 9-105: Replace all ad-hoc token uses with the shared TOKEN
constant and remove duplicate header sets: in the "should return 201" and
"should return 400/422 for invalid body" tests under POST
/mentoring/v1/role-extension/create, the GET /mentoring/v1/role-extension/read
test, the POST /mentoring/v1/role-extension/update tests, and the DELETE
/mentoring/v1/role-extension/delete test, set the 'x-auth-token' header using
the TOKEN variable (instead of 'bearer {{token}}', 'test-token', or inline
literals) and delete any duplicated req.set('x-auth-token', ...) calls so every
request uses req.set('x-auth-token', TOKEN) consistently.
- Around line 12-13: Remove the duplicate header assignment on the request:
delete the line that calls req.set('x-auth-token', 'bearer {{token}}') so only
the intended req.set('x-auth-token', 'test-token') remains; this ensures the
request's x-auth-token header is not overwritten and avoids confusion when
constructing the req object.
- Around line 38-39: Remove the duplicate 'x-auth-token' header assignment on
the request: there's two consecutive lines setting the header on the req object
(req = req.set('x-auth-token', 'bearer {{token}}') and req =
req.set('x-auth-token', 'test-token')); keep the intended single assignment
(choose the correct token value for the test, e.g., 'test-token') and delete the
other so the header isn't overwritten.
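The token-header pattern these fixes keep asking for can be sketched as a tiny helper; `TOKEN`, `withAuth`, and the `TEST_BEARER_TOKEN` env var are illustrative names, not the project's actual constants, and the request object below is a stand-in for supertest's chainable API.

```javascript
// A minimal sketch: one shared TOKEN constant and a single
// x-auth-token assignment per request, never a literal placeholder
// like 'bearer {{token}}' and never duplicated req.set(...) calls.
const TOKEN = `bearer ${process.env.TEST_BEARER_TOKEN || 'test-token'}`

function withAuth(req) {
	// One header set per request.
	return req.set('x-auth-token', TOKEN)
}

// Stand-in for supertest's chainable request object, used only to
// show the call shape.
const fakeReq = {
	headers: {},
	set(name, value) {
		this.headers[name] = value
		return this
	},
}
```

In a real spec, `withAuth(request(baseURL).post(url))` would replace the repeated inline `req.set('x-auth-token', ...)` lines.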
In
@src/integration-tests-new/role-extension/schemas/role-extension.schemas.json:
- Around line 2-73: Add required field constraints to the
"POST_mentoring_v1_role-extension_create" schema by adding a top-level
"required" array that includes "responseCode", "message", "result", and "meta";
also add "required" arrays inside the "result" object to enforce required keys
like "id", "title", "label", "status", "organization_id", "created_at",
"updated_at" (and "deleted_at" if expected), and inside the "meta" object
require "formsVersion" (and optionally "correlation" and "meetingPlatform"),
plus ensure each item in "meta.properties.formsVersion.items" already has
"required": ["id","type","version"] as present; update the schema symbol
POST_mentoring_v1_role-extension_create accordingly so missing critical fields
fail validation.
- Around line 218-263: The schema object
DELETE_mentoring_v1_role-extension_delete is missing "required" arrays; add a
top-level "required": ["responseCode","message","result","meta"] to the
DELETE_mentoring_v1_role-extension_delete definition, and add a "required":
["formsVersion","correlation","meetingPlatform"] to the "meta" object so those
keys must exist; keep the existing required on each formsVersion item
(["id","type","version"]) as-is.
- Around line 74-145: The GET_mentoring_v1_role-extension_read response schema
lacks "required" arrays to enforce validation; add "required" arrays at the
top-level (including "responseCode", "message", "result", "meta"), inside
"result" (e.g.,
"id","title","label","status","scope","organization_id","deleted_at","created_at","updated_at"),
inside "meta" (e.g., "formsVersion","correlation","meetingPlatform"), and inside
the "formsVersion" item object (already suggested: "id","type","version") so
that each object explicitly lists its mandatory properties; update the schema
object named GET_mentoring_v1_role-extension_read and the nested objects
"result", "meta", and the "formsVersion" item to include these "required"
arrays.
In
@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js:
- Around line 26-28: The request URL uses a literal placeholder and the auth
header uses a hardcoded value: replace the `{role_id}` placeholder in the url
string with a real test value (e.g., a numeric or fixture ID) when building the
URL variable used by request() and change the header setup from
req.set('x-auth-token', 'string') to use the shared TOKEN constant
(req.set('x-auth-token', TOKEN)) so the test uses a valid role id and the
correct token; update the variable referenced (url, req) in
rolePermissionMapping.spec.js accordingly.
- Line 37: The URL string assigned to the variable `url` still contains the
literal placeholder `{role_id}`; replace that placeholder by interpolating the
actual role ID used in the test (e.g., the variable that holds the created
role's id such as `roleId` or `createdRole.id`) so the request targets a real
resource, for example by using a template literal or string concatenation to
produce `/mentoring/v1/rolePermissionMapping/delete/${roleId}`.
- Line 53: The URL string assigned to the variable url contains an unreplaced
placeholder `{role_id}`; replace it with a real role ID (e.g., use a test
variable like roleId) and build the path using string interpolation or
concatenation (for example:
`/mentoring/v1/rolePermissionMapping/delete/${roleId}` or
'/mentoring/v1/rolePermissionMapping/delete/' + roleId) where roleId is set from
the test setup (e.g., a created role's id) so the request actually targets a
concrete resource.
- Around line 10-12: The test builds the request URL with a literal placeholder
and uses a hardcoded token; replace the `{role_id}` placeholder in the url
variable (e.g., set const url = `/mentoring/v1/rolePermissionMapping/create/5`)
and change the auth header call on req (req.set('x-auth-token', ...)) to use the
existing TOKEN constant instead of the string 'string' so the request matches
the other tests.
In
@src/integration-tests-new/rolePermissionMapping/schemas/rolePermissionMapping.schemas.json:
- Around line 45-71: The JSON schema
"POST_mentoring_v1_rolePermissionMapping_delete_role_id" is too permissive: add
a "required" array on the root object listing "responseCode", "message",
"result", and "meta", add a "required" array on "meta" listing "correlation" and
"meetingPlatform", and set "additionalProperties": false on the root object, the
"result" object, and the "meta" object to forbid extra fields; keep the existing
types for properties and ensure empty-object "result" remains constrained by
additionalProperties:false so unexpected keys fail validation.
- Around line 2-43: The schema
"POST_mentoring_v1_rolePermissionMapping_create_role_id" is too permissive: add
a top-level "required" array listing "responseCode", "message", "result" and set
"additionalProperties": false at the top-level to forbid unexpected fields;
inside "result" add its own "required" array (e.g., "roleId", "permissionId",
"module", "request_type") and set "additionalProperties": false for the "result"
object as well so nested unexpected fields are rejected.
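The `required` plus `additionalProperties: false` tightening described above can be sketched as a plain schema object; the property lists here are illustrative, not the service's exact response shape.

```javascript
// Sketch of a tightened response schema: top-level `required` plus
// additionalProperties:false at every object level so unexpected
// fields fail validation instead of passing silently.
const deleteResponseSchema = {
	type: 'object',
	additionalProperties: false,
	required: ['responseCode', 'message', 'result', 'meta'],
	properties: {
		responseCode: { type: 'string' },
		message: { type: 'string' },
		// Empty result stays constrained because extra keys are forbidden.
		result: { type: 'object', additionalProperties: false, properties: {} },
		meta: {
			type: 'object',
			additionalProperties: false,
			required: ['correlation', 'meetingPlatform'],
			properties: {
				correlation: { type: 'string' },
				meetingPlatform: { type: 'string' },
			},
		},
	},
}
```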
In @src/integration-tests-new/sessions/schemas/sessions.schemas.json:
- Around line 260-262: Several endpoint schema entries are empty objects (e.g.,
POST_/mentoring/v1/sessions/enroll/{sessionId},
POST_/mentoring/v1/sessions/unenroll/{sessionId},
GET_/mentoring/v1/sessions/start/{sessionId},
GET_/mentoring/v1/sessions/getRecording/{sessionId},
PATCH_/mentoring/v1/sessions/completed/{sessionId},
PATCH_/mentoring/v1/sessions/updateRecordingUrl/{internalSessionId}); replace
each {} with a proper JSON schema describing request parameters/body and
response shapes, or explicitly remove/annotate the entry as TODO/UNIMPLEMENTED
so it doesn’t bypass validation—update the corresponding keys in
sessions.schemas.json to include either a complete schema (paths, parameters,
requestBody, responses) or a clear TODO marker/comment consistent with the
project’s schema convention.
- Around line 52-57: Remove the erroneous dot-notation properties
"meeting_info.value" and "meeting_info.platform" from the schema and rely on the
existing nested meeting_info object (meeting_info with value and platform
fields) instead; locate the entries for "meeting_info.value" and
"meeting_info.platform" in the JSON schema and delete them so there is a single
canonical meeting_info structure, and run schema validation/tests to confirm no
duplicate/conflicting properties remain.
In @src/integration-tests-new/testSequencer.js:
- Line 43: Update the hardcoded path segment from 'integration-tests' to
'integration-tests-new' where relative paths are computed; specifically, change
the path.join call used inside the remainingTests.map expression (and the other
similar occurrence noted) so it uses path.join(process.cwd(),
'integration-tests-new') to match the actual directory structure and ensure
logs/warnings show correct paths.
- Around line 7-18: The executionOrder array in testSequencer.js is missing two
test files introduced in this PR; add the exact filenames
'report-queries/report-queries.spec.js' and 'issues/issues.spec.js' to the
executionOrder array (using the same .spec.js spelling) in the appropriate
position relative to related tests so they run in the intended order and do not
fall back to the end with a warning.
In @src/integration-tests-new/users/users.spec.js:
- Line 12: The test currently sets the header with a hard-coded string in
req.set('x-auth-token', 'string'), which will fail real auth; replace the
literal 'string' with the valid test token variable (e.g., TOKEN) or a helper
that fetches/creates a valid token, and do the same change for the other
occurrence mentioned (line 25); ensure the req variable uses
req.set('x-auth-token', TOKEN) or equivalent token retrieval before sending the
request.
- Line 3: The TOKEN constant is declared but never used; either remove the
unused TOKEN variable declaration or wire it into the test requests for
authorization. To fix, delete the line defining TOKEN if no auth is required, or
update the test HTTP requests (where fetch/axios/supertest calls are made) to
include an Authorization header using `Bearer ${TOKEN}` and ensure
TEST_BEARER_TOKEN is set in CI/local env; reference the TOKEN identifier to
locate the declaration and the test request sites (fetch/agent/supertest calls)
to add the header.
In @src/integrationJest.config.js:
- Around line 18-23: testMatch currently only matches '*.specs.js' and misses
files named '*.spec.js'; update the testMatch entry (the testMatch array in the
config) to include both patterns (e.g., add
'<rootDir>/integration-tests-new/**/*.spec.js' alongside the existing
'*.specs.js') or replace it with a single glob that covers both (e.g., use the
brace expansion pattern for testMatch) so Jest discovers both .spec.js and
.specs.js files.
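A single glob covering both suffixes might look like the sketch below; the paths assume the config sits in `src/` alongside `setupFileAfterEnv.js`, as described elsewhere in this review.

```javascript
// Sketch of an integration Jest config whose testMatch covers both
// *.spec.js and *.specs.js files, so the setup file (and its
// expect.extend matchers) loads for every integration test.
module.exports = {
	testMatch: ['<rootDir>/integration-tests-new/**/*.spec*.js'],
	setupFilesAfterEnv: ['<rootDir>/setupFileAfterEnv.js'],
}
```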
In @src/services/issues.js:
- Line 69: The issue is that orgCode is being set as an array which breaks cache
keys and DB queries; change the assignment in issues.js so orgCode is a string
by replacing the array form (const orgCode = [decodedToken.organization_code])
with a plain string assignment from decodedToken.organization_code so that the
value passed to notificationTemplates.get() is a string.
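The fix in miniature: the code must flow through as a string, not a one-element array, or downstream cache keys and queries receive `['org1']` where `'org1'` is expected. The token payload below is illustrative.

```javascript
// Illustrative decoded-token payload.
const decodedToken = { organization_code: 'org1' }

const wrongOrgCode = [decodedToken.organization_code] // array: breaks consumers
const orgCode = decodedToken.organization_code // string: what the template lookup expects
```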
In @src/services/mentees.js:
- Around line 953-957: The code accesses mentor.organization.name without
guarding against mentor.organization being null; update the session assignment
in the block that finds mentor from mentorDetails.result (where mentor is
derived) to safely handle missing organization by checking mentor.organization
(or using optional chaining) before reading .name and assign a sensible fallback
(e.g., null or empty string) to session.organization; ensure this change is
applied in the same scope where session.mentor_name and session.organization are
set so session.organization never throws if mentor.organization is null.
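A null-safe version of that assignment can be sketched as follows; the `session` and `mentor` shapes are assumed from the surrounding comment, and `applyMentorDetails` is an illustrative name.

```javascript
// Sketch: optional chaining keeps the assignment from throwing when
// mentor.organization is null, falling back to null instead.
function applyMentorDetails(session, mentor) {
	session.mentor_name = mentor?.name ?? null
	session.organization = mentor?.organization?.name ?? null
	return session
}
```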
In @src/setupFileAfterEnv.js:
- Line 9: The DEV_DATABASE_URL in docker-compose-dev.yml must be changed to
match the test connection string; update the value currently set to
"mentoring-local" to use "elevate-mentoring" so it matches the connectionString
used in src/setupFileAfterEnv.js (and docker-compose-mentoring.yml). Locate
DEV_DATABASE_URL in docker-compose-dev.yml (around the existing DB config block)
and replace the database name portion accordingly.
In @src/start.sh:
- Around line 4-5: Add a pre-copy validation in start.sh to check that
/var/src/configs/kafka.ci.js exists and is readable before running the cp
command; if the file is missing or not readable, print a clear error message
(including the path) to stderr and exit with a non-zero status so CI shows a
meaningful failure rather than a generic set -e error from the cp; then proceed
to run the existing cp /var/src/configs/kafka.ci.js /var/src/configs/kafka.js
only after the check passes.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (14)
src/database/queries/userExtension.js (2)
348-351: Potential SQL injection via array-joined email values. When `searchText` is an array, email values are directly interpolated into the SQL string without proper escaping:

```js
additionalFilter = `AND email IN ('${searchText.join("','")}')`
```

This is vulnerable if `searchText` contains malicious input. Consider using parameterized replacements. As per coding guidelines, database queries should be reviewed for security.

🔎 Proposed fix using parameterized query

```diff
-		if (Array.isArray(searchText)) {
-			additionalFilter = `AND email IN ('${searchText.join("','")}')`
-		}
+		if (Array.isArray(searchText)) {
+			additionalFilter = `AND email IN (:emailList)`
+			// Add emailList to replacements object later
+		}
```

Then in the replacements object:

```js
replacements.emailList = searchText
```
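Assuming the query runs through Sequelize's `replacements` mechanism, the parameterized shape can be sketched as a small pure helper (`buildEmailFilter` is illustrative, not the file's actual function):

```javascript
// Sketch: build a parameterized IN filter instead of string-interpolating
// email values, so the driver escapes the list for us.
function buildEmailFilter(searchText) {
	if (Array.isArray(searchText) && searchText.length > 0) {
		return {
			clause: 'AND email IN (:emailList)',
			replacements: { emailList: searchText },
		}
	}
	return { clause: '', replacements: {} }
}

// Assumed Sequelize usage:
// sequelize.query(`SELECT * FROM users WHERE deleted_at IS NULL ${clause}`, {
//   replacements, type: QueryTypes.SELECT,
// })
```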
333-377: Add missing `tenantCode` parameter to `getUsersByUserIdsFromView`. The method references `tenantCode` at line 375 to construct the view name (`utils.getTenantViewName(tenantCode, MenteeExtension.tableName)`), but the parameter is missing from the function signature. This causes a ReferenceError at runtime. The pattern is consistent across similar methods in this class: `getAllUsers` and `findOneFromView` both accept `tenantCode` as a parameter.

Diff

```diff
	static async getUsersByUserIdsFromView(
		ids,
		page,
		limit,
		filter,
		saasFilter = '',
		additionalProjectionclause = '',
		returnOnlyUserId,
-		searchText = ''
+		searchText = '',
+		tenantCode
	) {
```

src/helpers/defaultRules.js (2)
110-121: Remove unused `organisationCodes` parameter. The `organisationCodes` parameter is no longer used within the function body after the cache key refactoring. This creates confusion as callers may still pass it (see line 235).

🔎 Proposed fix

```diff
-async function getUserDetailsFromCache(userId, isAMentor, tenantCode, organisationCodes) {
+async function getUserDetailsFromCache(userId, isAMentor, tenantCode) {
	try {
		if (isAMentor) {
			return await cacheHelper.mentor.get(tenantCode, userId)
		} else {
			return await cacheHelper.mentee.get(tenantCode, userId)
		}
	} catch (error) {
		console.log(error)
		throw new Error(`Failed to get user details: ${error.message}`)
	}
}
```

235-235: Remove unnecessary 4th argument from the `getUserDetailsFromCache` call. After removing the unused `organisationCodes` parameter from the function signature (line 110), this call should not pass `requesterOrganizationCode` as the 4th argument. This is inconsistent with the call at line 132.

🔎 Proposed fix

```diff
-	getUserDetailsFromCache(requesterId, isAMentor(roles), tenant_code, requesterOrganizationCode),
+	getUserDetailsFromCache(requesterId, isAMentor(roles), tenant_code),
```

src/services/admin.js (2)
- getUserDetailsFromCache(requesterId, isAMentor(roles), tenant_code, requesterOrganizationCode), + getUserDetailsFromCache(requesterId, isAMentor(roles), tenant_code),src/services/admin.js (2)
646-654: Fix inconsistent cache deletion signature. The cache deletion at line 649 uses 3 parameters including `organization_code`, which is inconsistent with the updated cache helper API shown at lines 390 and 1258 that only uses 2 parameters (tenantCode and userId). This will cause a runtime error.

🔎 Proposed fix

```diff
-	await cacheHelper.mentee.delete(tenantCode, menteeData.organization_code, menteeData.mentee_id)
+	await cacheHelper.mentee.delete(tenantCode, menteeData.mentee_id)
```

1082-1091: Fix parameter name mismatch in notification call. The call to `NotificationHelper.sendGenericNotification` at line 1089 passes `tenantCodes` (plural), but the method signature at line 38 now expects `tenantCode` (singular). This parameter mismatch will cause incorrect behavior.

🔎 Proposed fix

```diff
	static async notifyMentorsAboutMenteeDeletion(mentors, menteeName, orgCodes, tenantCodes) {
		return await NotificationHelper.sendGenericNotification({
			recipients: mentors,
			templateCode: process.env.MENTEE_DELETION_NOTIFICATION_EMAIL_TEMPLATE,
			orgCode: orgCodes,
			templateData: { menteeName },
			subjectData: { menteeName },
-			tenantCodes,
+			tenantCode: tenantCodes[0] || tenantCodes,
		})
	}
```

src/database/queries/mentorExtension.js (1)
264-264: Reassigning a `const` variable will throw TypeError. `filterClause` is declared with `const` on line 264, but line 281 attempts to reassign it. This will cause a runtime error: `TypeError: Assignment to constant variable`.

🔎 Proposed fix

```diff
-	const filterClause = filter?.query.length > 0 ? `${filter.query}` : ''
+	let filterClause = filter?.query.length > 0 ? `${filter.query}` : ''
```

Also applies to: 280-282
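The failure mode is easy to reproduce in isolation; here is a sketch with an illustrative `buildFilterClause` helper showing why `let` is required when the clause is appended to later.

```javascript
// A clause that is built up conditionally must be declared with `let`;
// with `const`, the later `+=` throws "Assignment to constant variable".
function buildFilterClause(filter, extra) {
	let filterClause = filter?.query?.length > 0 ? `${filter.query}` : ''
	if (extra) {
		filterClause += ` AND ${extra}` // legal only because of `let`
	}
	return filterClause
}
```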
src/helpers/getOrgIdAndEntityTypewithEntitiesBasedOnPolicy.js (1)
194-206: Remove duplicate push of organization codes. Lines 198 and 205 both push `organizationCodesFromOrgExtension` to the `organizationCodes` array, causing duplicate entries in the result.

🔎 Proposed fix

```diff
	if (organizationExtension) {
		const organizationCodesFromOrgExtension = organizationExtension.map(
			(orgExt) => orgExt.organization_code
		)
-		organizationCodes.push(...organizationCodesFromOrgExtension)
-
		const tenantCodesFromOrgExtension = organizationExtension.map(
			(orgExt) => orgExt.tenant_code
		)
		organizationInfo.push(...organizationExtension)
		organizationCodes.push(...organizationCodesFromOrgExtension)
		tenantCodes.push(...tenantCodesFromOrgExtension)
	}
```

src/database/queries/sessions.js (1)
1134-1157: Duplicate function definition overrides tenant-scoped version. There are two definitions of `getSessionsAssignedToMentor`: one at line 1106 that accepts `tenantCode` for tenant isolation, and this one at line 1134 that lacks `tenantCode`. Since this is the later definition, it completely overrides the tenant-aware version, breaking tenant isolation.

🔎 Proposed fix: Remove the duplicate or merge functionality

```diff
-exports.getSessionsAssignedToMentor = async (mentorUserId) => {
-	try {
-		const query = `
-			SELECT s.*, sa.mentee_id
-			FROM ${Session.tableName} s
-			LEFT JOIN session_attendees sa ON s.id = sa.session_id
-			WHERE s.mentor_id = :mentorUserId
-			AND s.start_date > :currentTime
-			AND s.deleted_at IS NULL
-		`
-
-		const sessionsToDelete = await Sequelize.query(query, {
-			type: QueryTypes.SELECT,
-			replacements: {
-				mentorUserId,
-				currentTime: Math.floor(Date.now() / 1000),
-			},
-		})
-
-		return sessionsToDelete
-	} catch (error) {
-		throw error
-	}
-}
```

src/helpers/userCacheHelper.js (1)
54-62: Dead code: `userResponse` is constructed but never used. The `userResponse` object is built by merging `user.result` with `userExtension` at lines 57-60, but it's never returned or used. The function immediately falls through to return `userExtension` at line 62, ignoring the merged profile data. This appears to be incomplete logic: either the merge should be returned, or the block should be removed.

🔎 Proposed fix (return the merged response)

```diff
		// Cache miss - get full profile and cache it
		const user = await userRequests.getProfileDetails({ tenantCode, userId })
		if (user.statusCode === httpStatusCode.ok && user.result) {
			const userResponse = {
				...user.result,
				...userExtension,
			}
-		}
-		// Fallback to just extension data
-		return userExtension
+			return userResponse
+		}
+		// Fallback to just extension data
+		return userExtension
	} catch (error) {
		return null
	}
```
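The intended merge-and-return can be isolated as a pure function for clarity; `mergeProfile` is an illustrative name, and the spread order mirrors the proposed fix (extension fields overlay the fetched profile).

```javascript
// Sketch of the cache-miss merge: later spreads win, so extension
// fields override profile fields; extension-only data is the fallback.
function mergeProfile(userResult, userExtension) {
	if (userResult) {
		return { ...userResult, ...userExtension }
	}
	// Fallback to just extension data
	return userExtension
}
```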
374-386: Potential issue: `userExtensionData.tenant_code` may be undefined. At lines 379 and 385, `userExtensionData.tenant_code` is passed to `updateMenteeExtension` and `updateMentorExtension`. However, looking at `#getExtensionData` (lines 216-250), `tenant_code` is not included in the constructed data object. This could cause the update methods to receive `undefined` for `tenantCode`, potentially breaking tenant-scoped queries.

🔎 Proposed fix - use the tenant_code parameter passed to #updateUser

```diff
	const user = isAMentee
		? await menteesService.updateMenteeExtension(
				userExtensionData,
				userExtensionData.id,
				userExtensionData.organization.code,
-				userExtensionData.tenant_code
+				tenant_code
		  )
		: await mentorsService.updateMentorExtension(
				userExtensionData,
				userExtensionData.id,
				userExtensionData.organization.code,
-				userExtensionData.tenant_code
+				tenant_code
		  )
```

src/services/sessions.js (3)
146-157: Critical: Mentor profile cache used for permission validation. The mentor profile retrieved via `cacheHelper.mentor.get(tenantCode, mentorIdToCheck)` is used for critical permission validation at lines 156-162. With the removal of `orgCode` from the cache key, ensure that:

- The cached mentor profile belongs to the correct organization context
- Cache collisions cannot result in cross-organization permission bypasses
- The `checkIfMentorIsAccessible` validation at lines 156-162 properly validates the mentor's organization

This is a security-critical code path where incorrect cached data could lead to unauthorized session creation.
1840-1912: Critical: Session accessibility check relies on correct organization_id in cached profiles. The session accessibility logic (lines 1867-1907) uses `organization_id` from cached user profiles to enforce session visibility policies (CURRENT, ASSOCIATED, ALL). With tenant-scoped cache keys:

- Line 1878: `session.mentor_organization_id === organization_id` assumes the cached profile's `organization_id` is correct
- Lines 1889-1891: Associated organization checks rely on an accurate `organization_id`
- Line 1891: `session.mentor_organization_id === organization_id` repeats the organization matching

Security Risk: If user IDs are not unique within a tenant and a cache collision occurs, the wrong user's `organization_id` could be used, potentially:

- Granting unauthorized access to sessions (if a higher-privileged org's user data is cached)
- Denying legitimate access to sessions (if a lower-privileged org's user data is cached)

This is a security-critical code path that requires verification of user_id uniqueness within tenant scope.
2515-2546: Session start permission check depends on correct cached mentor profile. The session start flow validates that the logged-in user is a mentor (lines 2520-2526) and owns the session (lines 2540-2546) using the cached mentor profile. This is a security-critical check:

- Line 2519: The cached mentor profile must belong to the correct user
- Line 2540: `session.mentor_id !== mentor.user_id` ownership verification relies on a correct `user_id` in the cache
- If a cache collision occurs, a different mentor's profile could be used, potentially allowing:
  - Starting another mentor's session
  - Bypassing mentor ownership checks

This reinforces the need to verify user_id uniqueness within tenant scope.
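The tenant-scoped key format from the walkthrough (`tenant:{tenantCode}:user:{userId}`) can be sketched as a small helper; `userCacheKey` is an illustrative name, not the project's actual cache helper.

```javascript
// Sketch of the tenant-scoped cache key scheme. Keys are only
// collision-free if userId is unique within a tenant - exactly the
// invariant the comments above ask to verify.
function userCacheKey(tenantCode, userId) {
	if (!tenantCode || userId === undefined || userId === null) {
		throw new Error('tenantCode and userId are required for a cache key')
	}
	return `tenant:${tenantCode}:user:${userId}`
}
```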
```js
			if (organizationDetails?.length > 0) {
				orgData.push(...organizationDetails)
			}
		} catch (error) {
			throw error(error)
		}
	}
```
Critical: Missing return statement and incorrect error syntax.
Two bugs in this function:
- Missing return: After fetching organizations from the database and pushing to `orgData`, there is no `return orgData` statement. The function returns `undefined` when organizations need to be fetched from the DB.
- Incorrect error syntax: `throw error(error)` is invalid; `error` is the caught exception object, not a function.
🔎 Proposed fix
```diff
		// Cache the fetched organizations for future use
		if (organizationDetails?.length > 0) {
			orgData.push(...organizationDetails)
		}
+		return orgData
	} catch (error) {
-		throw error(error)
+		throw error
	}
}
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
		if (organizationDetails?.length > 0) {
			orgData.push(...organizationDetails)
		}
+		return orgData
	} catch (error) {
-		throw error(error)
+		throw error
	}
}
```
🤖 Prompt for AI Agents
In @src/helpers/getOrganizationList.js around lines 52-58, The function
getOrganizationList is missing a return and has invalid error syntax: replace
the erroneous "throw error(error)" with "throw error" (or rethrow as needed)
inside the catch block, and ensure you return the accumulated orgData (e.g., add
"return orgData") after pushing organizationDetails so callers receive the
fetched list; locate the orgData and organizationDetails usage in
getOrganizationList to apply these changes.
```js
			expect(res.status).toBe(200)
			// validate response schema
			const schema = schemas['DELETE_mentoring_v1_admin_userDelete']
			expect(res.body).toMatchSchema(schema)
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Search for toMatchSchema matcher definition
rg -nP 'toMatchSchema' --type=js -g '!node_modules' -A 3
# Search for Jest setup and configuration files
fd -e js 'jest.setup|setupFilesAfterEnv|jest.config' -x cat {}
# Also check for jest.config.js in root and common locations
if [ -f jest.config.js ]; then echo "=== jest.config.js (root) ==="; cat jest.config.js; fi
# Check for setup files
if [ -f setupTests.js ]; then echo "=== setupTests.js ==="; cat setupTests.js; fi
if [ -f jest.setup.js ]; then echo "=== jest.setup.js ==="; cat jest.setup.js; fi
# Search for expect.extend patterns
rg -n 'expect\.extend' --type=js -g '!node_modules' -B 2 -A 5
```

Repository: ELEVATE-Project/mentoring
Length of output: 50381
🏁 Script executed:
```shell
#!/bin/bash
# Search for expect.extend definitions
echo "=== Searching for expect.extend patterns ==="
rg 'expect\.extend' --type=js -B 2 -A 10 -g '!node_modules'
# Look for Jest config files
echo -e "\n=== Jest configuration files ==="
find . -maxdepth 3 -name 'jest.config.*' -o -name 'jest.setup.js' -o -name 'setupTests.js' 2>/dev/null
# Check package.json for jest config
echo -e "\n=== Jest config in package.json ==="
cat package.json 2>/dev/null | grep -A 20 '"jest"' || echo "No jest config in package.json root"
# Check for setup files in src/
echo -e "\n=== Files in src/ that might be setup ==="
find src -maxdepth 2 -type f -name '*setup*' -o -name '*config*' | grep -v node_modules
```

Repository: ELEVATE-Project/mentoring
Length of output: 1180
🏁 Script executed:
```shell
#!/bin/bash
# Read the setupFileAfterEnv.js file
echo "=== Content of src/setupFileAfterEnv.js ==="
cat -n src/setupFileAfterEnv.js
# Read jest config
echo -e "\n=== Content of src/jest.config.js ==="
cat -n src/jest.config.js
# Read integrationJest config as well
echo -e "\n=== Content of src/integrationJest.config.js ==="
cat -n src/integrationJest.config.js
```

Repository: ELEVATE-Project/mentoring
Length of output: 3441
🏁 Script executed:
```shell
#!/bin/bash
# Check package.json test scripts
echo "=== Test scripts in package.json ==="
cat package.json | grep -A 30 '"scripts"'
# Check if there are other jest configs
echo -e "\n=== All jest config files ==="
fd 'jest.*\.js$' --type f
# Check the admin.spec.js file directly
echo -e "\n=== First 30 lines of admin.spec.js ==="
head -30 src/integration-tests-new/admin/admin.spec.js
```

Repository: ELEVATE-Project/mentoring
Length of output: 1478
🏁 Script executed:
#!/bin/bash
# Find package.json
echo "=== Locating package.json ==="
find . -name "package.json" -type f | head -5
# Check if package.json exists in src/
if [ -f src/package.json ]; then
echo -e "\n=== Test scripts in src/package.json ==="
cat src/package.json | grep -A 30 '"scripts"' || echo "No scripts found"
fi
# Check root package.json
if [ -f package.json ]; then
echo -e "\n=== Test scripts in root package.json ==="
cat package.json | grep -A 30 '"scripts"' || echo "No scripts found"
fi
# List all files matching the testMatch pattern
echo -e "\n=== Files matching *.specs.js pattern ==="
fd '\.specs\.js$' src/integration-tests-new/ --type f | head -10
# List all .spec.js files (without 's')
echo -e "\n=== Files matching *.spec.js pattern ==="
fd '\.spec\.js$' src/integration-tests-new/ --type f | head -10
Repository: ELEVATE-Project/mentoring
Length of output: 2684
The toMatchSchema() matcher is not applied to this test file due to a Jest configuration mismatch.
The custom matcher from jest-json-schema is properly registered in src/setupFileAfterEnv.js via expect.extend(matchers), but src/integrationJest.config.js has a testMatch pattern of '<rootDir>/integration-tests-new/**/*.specs.js' (note the plural .specs.js). This test file is named admin.spec.js (singular), so it won't match the pattern and setupFileAfterEnv.js won't be loaded, causing toMatchSchema() to be undefined at runtime.
Either rename the file to admin.specs.js or update the testMatch pattern in integrationJest.config.js to include both .spec.js and .specs.js files (e.g., '<rootDir>/integration-tests-new/**/*.spec*.js').
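A minimal sketch of the widened pattern; only the `testMatch` value comes from the review above, the other keys are assumptions about the config's shape:

```javascript
// Hypothetical shape of src/integrationJest.config.js after the fix.
const config = {
	setupFilesAfterEnv: ['<rootDir>/setupFileAfterEnv.js'],
	// '*.spec*.js' matches both admin.spec.js and admin.specs.js,
	// so the custom matchers get registered for every suite.
	testMatch: ['<rootDir>/integration-tests-new/**/*.spec*.js'],
}

module.exports = config
```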
🤖 Prompt for AI Agents
In @src/integration-tests-new/admin/admin.spec.js around line 16, The test file
admin.spec.js isn't loading the jest-json-schema matcher because
integrationJest.config.js uses testMatch
'<rootDir>/integration-tests-new/**/*.specs.js' (plural), so
src/setupFileAfterEnv.js (which registers expect.extend(matchers)) isn't run and
toMatchSchema is undefined; fix by either renaming admin.spec.js to
admin.specs.js or update integrationJest.config.js's testMatch to include both
patterns (e.g., '<rootDir>/integration-tests-new/**/*.spec*.js') so
setupFileAfterEnv.js runs and toMatchSchema is available.
```json
{
  "POST_mentoring_v1_default-rule_create": {
    "type": "object",
    "properties": {
      "responseCode": { "type": "string" },
      "message": { "type": "string" },
      "result": {
        "type": "object",
        "properties": {
          "created_at": { "type": "string" },
          "updated_at": { "type": "string" },
          "id": { "type": "number" },
          "type": { "type": "string" },
          "target_field": { "type": "string" },
          "is_target_from_sessions_mentor": { "type": "boolean" },
          "requester_field": { "type": "string" },
          "operator": { "type": "string" },
          "requester_roles": { "type": "array", "items": { "type": "string" } },
          "requester_roles_config": {
            "type": "object",
            "properties": { "exclude": { "type": "boolean" } }
          },
          "created_by": { "type": "string" },
          "updated_by": { "type": "string" },
          "organization_id": { "type": "string" },
          "field_configs": { "type": "null" },
          "deleted_at": { "type": "null" }
        }
      },
      "meta": {
        "type": "object",
        "properties": {
          "formsVersion": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": { "type": "number" },
                "type": { "type": "string" },
                "version": { "type": "number" }
              },
              "required": ["id", "type", "version"]
            }
          },
          "correlation": { "type": "string" },
          "meetingPlatform": { "type": "string" }
        }
      }
    }
  },
  "GET_mentoring_v1_default-rule_read": {
    "type": "object",
    "properties": {
      "responseCode": { "type": "string" },
      "message": { "type": "string" },
      "result": {
        "type": "object",
        "properties": {
          "data": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": { "type": "number" },
                "type": { "type": "string" },
                "target_field": { "type": "string" },
                "is_target_from_sessions_mentor": { "type": "boolean" },
                "requester_field": { "type": "string" },
                "field_configs": { "type": "null" },
                "matching_operator": { "type": "string" },
                "requester_roles": { "type": "array", "items": { "type": "string" } },
                "role_config": {
                  "type": "object",
                  "properties": { "exclude": { "type": "boolean" } }
                },
                "organization_id": { "type": "string" },
                "created_by": { "type": "string" },
                "updated_by": { "type": "string" },
                "created_at": { "type": "string" },
                "updated_at": { "type": "string" },
                "deleted_at": { "type": "null" }
              }
            }
          },
          "count": { "type": "number" }
        }
      },
      "meta": {
        "type": "object",
        "properties": {
          "formsVersion": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": { "type": "number" },
                "type": { "type": "string" },
                "version": { "type": "number" }
              },
              "required": ["id", "type", "version"]
            }
          },
          "correlation": { "type": "string" },
          "meetingPlatform": { "type": "string" }
        }
      }
    }
  },
  "PATCH_mentoring_v1_default-rule_update_id": {
    "type": "object",
    "properties": {
      "responseCode": { "type": "string" },
      "message": { "type": "string" },
      "result": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "id": { "type": "number" },
            "type": { "type": "string" },
            "target_field": { "type": "string" },
            "is_target_from_sessions_mentor": { "type": "boolean" },
            "requester_field": { "type": "string" },
            "field_configs": { "type": "null" },
            "operator": { "type": "string" },
            "requester_roles": { "type": "array", "items": { "type": "string" } },
            "requester_roles_config": {
              "type": "object",
              "properties": { "exclude": { "type": "boolean" } }
            },
            "organization_id": { "type": "string" },
            "created_by": { "type": "string" },
            "updated_by": { "type": "string" },
            "created_at": { "type": "string" },
            "updated_at": { "type": "string" },
            "deleted_at": { "type": "null" }
          }
        }
      },
      "meta": {
        "type": "object",
        "properties": {
          "formsVersion": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": { "type": "number" },
                "type": { "type": "string" },
                "version": { "type": "number" }
              },
              "required": ["id", "type", "version"]
            }
          },
          "correlation": { "type": "string" },
          "meetingPlatform": { "type": "string" }
        }
      }
    }
  },
  "DELETE_mentoring_v1_default-rule_delete_id": {
    "type": "object",
    "properties": {
      "responseCode": { "type": "string" },
      "message": { "type": "string" },
      "result": {
        "type": "array",
        "items": { "type": "string" }
      },
      "meta": {
        "type": "object",
        "properties": {
          "formsVersion": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": { "type": "number" },
                "type": { "type": "string" },
                "version": { "type": "number" }
              }
            }
          },
          "correlation": { "type": "string" },
          "meetingPlatform": { "type": "string" }
        }
      }
    }
  }
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for default-rule controller/service implementations
echo "=== Searching for matching_operator and operator usage ==="
rg -n -C3 --type=js "matching_operator|operator" --glob="!**/integration-tests-new/**" --glob="!**/schemas/**" | head -100
echo -e "\n=== Searching for role_config and requester_roles_config usage ==="
rg -n -C3 --type=js "role_config|requester_roles_config" --glob="!**/integration-tests-new/**" --glob="!**/schemas/**" | head -100
Repository: ELEVATE-Project/mentoring
Length of output: 11281
🏁 Script executed:
#!/bin/bash
# Search for the GET default-rule controller/service to see actual return values
echo "=== Searching for GET default-rule endpoint implementation ==="
rg -n "GET.*default-rule|read.*default" --type=js src/controllers/ src/services/ | head -30
echo -e "\n=== Looking for response transformation or serialization ==="
rg -n "function.*read|exports\.read" --type=js -A 20 src/services/default-rule.js | head -80
Repository: ELEVATE-Project/mentoring
Length of output: 193
🏁 Script executed:
#!/bin/bash
# Find and read the default-rule service file
echo "=== Checking default-rule service file ==="
cat -n src/services/default-rule.js | head -150
echo -e "\n=== Searching for all exported functions ==="
rg -n "exports\." src/services/default-rule.js
Repository: ELEVATE-Project/mentoring
Length of output: 7004
🏁 Script executed:
#!/bin/bash
# Find controller files for default-rule
echo "=== Finding default-rule controller files ==="
find src/controllers -name "*default*rule*" -type f
echo -e "\n=== Searching for read/list controller functions ==="
rg -n "read|list" --type=js src/controllers/ -l | grep -i default
echo -e "\n=== Check all controller files for defaultRule ==="
rg -n "defaultRule|default-rule" --type=js src/controllers/ -l
Repository: ELEVATE-Project/mentoring
Length of output: 325
🏁 Script executed:
#!/bin/bash
# Read the default-rule controller file
cat -n src/controllers/v1/default-rule.js | head -300
Repository: ELEVATE-Project/mentoring
Length of output: 4207
🏁 Script executed:
#!/bin/bash
# Search for readOne and readAll methods in the service
echo "=== Searching for readOne and readAll methods ==="
rg -n "readOne|readAll" --type=js src/services/default-rule.js -A 30 | head -150
Repository: ELEVATE-Project/mentoring
Length of output: 2708
Correct the GET schema field names to match actual API responses.
The schemas contain incorrect field names in the GET read operation:
- Line 135 uses `matching_operator` but should be `operator` (same as POST/PATCH)
- Line 144 uses `role_config` but should be `requester_roles_config` (same as POST/PATCH)
The API returns the database model fields directly without transformation, so all operations return the same field names: operator and requester_roles_config. Update the GET schema to match the actual response structure.
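As a sketch, the corrected item properties in the GET schema would carry the same two field names as the POST/PATCH schemas (only the renamed fields are shown here):

```javascript
// Corrected fragment (sketch) of the GET_mentoring_v1_default-rule_read item schema.
const correctedItemProps = {
	operator: { type: 'string' }, // was "matching_operator"
	requester_roles_config: {
		// was "role_config"; same shape as in the POST/PATCH schemas
		type: 'object',
		properties: { exclude: { type: 'boolean' } },
	},
}
```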
🤖 Prompt for AI Agents
In @src/integration-tests-new/default-rule/schemas/default-rule.schemas.json
around lines 1-355, The GET response schema GET_mentoring_v1_default-rule_read
has two wrong field names; update the object inside result.data items to use
operator (replacing matching_operator) and requester_roles_config (replacing
role_config), and ensure requester_roles_config is defined the same as in
POST_mentoring_v1_default-rule_create /
PATCH_mentoring_v1_default-rule_update_id (an object with a property "exclude"
of type boolean) so the GET schema matches the actual API/db model fields.
```js
test('should return 200', async () => {
	const url = `/mentoring/v1/issues/create`
	let req = request(BASE).post(url)
	req = req.set('x-auth-token', 'string')
```
Hardcoded authentication value.
The test uses a hardcoded 'string' value for the x-auth-token header instead of using the TOKEN variable defined on line 3. This makes the test less flexible and will likely cause authentication failures.
🔎 Proposed fix
- req = req.set('x-auth-token', 'string')
+ req = req.set('x-auth-token', TOKEN)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
req = req.set('x-auth-token', TOKEN)
```
🤖 Prompt for AI Agents
In @src/integration-tests-new/issues/issues.spec.js around line 12, The test is
using a hardcoded header value; replace the literal 'string' with the TOKEN
variable (used to authenticate) when setting the header on the request (where
req = req.set('x-auth-token', ...)); ensure TOKEN is imported/defined at the top
of the spec and used here so the test uses the proper auth token rather than a
hardcoded value.
```js
req = req.set('x-auth-token', 'string')
req = req
	.send({
		descriptaion: 'string',
```
Typo in field name will cause API validation errors.
The field name is misspelled as "descriptaion" instead of "description". This will likely cause the API to reject the request with a validation error.
🔎 Proposed fix
req = req
.send({
- descriptaion: 'string',
+ description: 'string',
meta_data: {
request_type: 'string',
browserName: 'string',
browserVersion: 'string',
},
    })
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
req = req
	.send({
		description: 'string',
		meta_data: {
			request_type: 'string',
			browserName: 'string',
			browserVersion: 'string',
		},
	})
```
🤖 Prompt for AI Agents
In @src/integration-tests-new/issues/issues.spec.js around line 15, The test
fixture/object uses a misspelled key "descriptaion" which will trigger API
validation errors; locate the test data in
src/integration-tests-new/issues/issues.spec.js where the object contains
descriptaion: 'string' and rename the key to description: 'string' (update any
other occurrences of "descriptaion" in that test file to "description" so the
payload matches the API schema).
```js
})

test('should return 400/422 for invalid body', async () => {
	const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
```
Critical: URL placeholder not replaced.
The URL placeholder {role_id} must be replaced with an actual value.
🔎 Proposed fix
- const url = `/mentoring/v1/rolePermissionMapping/delete/{role_id}`
+ const url = `/mentoring/v1/rolePermissionMapping/delete/5`
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const url = `/mentoring/v1/rolePermissionMapping/delete/5`
```
🤖 Prompt for AI Agents
In
@src/integration-tests-new/rolePermissionMapping/rolePermissionMapping.spec.js
around line 53, The URL string assigned to the variable url contains an
unreplaced placeholder `{role_id}`; replace it with a real role ID (e.g., use a
test variable like roleId) and build the path using string interpolation or
concatenation (for example:
`/mentoring/v1/rolePermissionMapping/delete/${roleId}` or
'/mentoring/v1/rolePermissionMapping/delete/' + roleId) where roleId is set from
the test setup (e.g., a created role's id) so the request actually targets a
concrete resource.
```js
if (remainingTests.length) {
	console.warn(
		'⚠️ These tests were not in the execution order:',
		remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests'), t.path))
```
Path references inconsistent with directory structure.
Lines 43 and 50 use 'integration-tests' in the path, but should use 'integration-tests-new' to match the actual directory structure (line 23) and the rest of the codebase. This will cause incorrect paths to be displayed in console warnings and logs.
🔎 Proposed fix
if (remainingTests.length) {
console.warn(
'⚠️ These tests were not in the execution order:',
- remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests'), t.path))
+ remainingTests.map((t) => path.relative(path.join(process.cwd(), 'integration-tests-new'), t.path))
)
orderedTests.push(...remainingTests)
}
console.log('✅ Final execution order:')
orderedTests.forEach((test, index) => {
- console.log(`${index + 1}. ${path.relative(path.join(process.cwd(), 'integration-tests'), test.path)}`)
+ console.log(`${index + 1}. ${path.relative(path.join(process.cwd(), 'integration-tests-new'), test.path)}`)
})
Also applies to: 50-50
🤖 Prompt for AI Agents
In @src/integration-tests-new/testSequencer.js around line 43, Update the
hardcoded path segment from 'integration-tests' to 'integration-tests-new' where
relative paths are computed; specifically, change the path.join call used inside
the remainingTests.map expression (and the other similar occurrence noted) so it
uses path.join(process.cwd(), 'integration-tests-new') to match the actual
directory structure and ensure logs/warnings show correct paths.
```js
test('should return 200', async () => {
	const url = `/mentoring/v1/users/pendingFeedbacks`
	let req = request(BASE).get(url)
	req = req.set('x-auth-token', 'string')
```
Hard-coded authentication token will fail in real tests.
The x-auth-token is set to the literal string 'string', which will cause authentication failures. Use the TOKEN variable or obtain a valid token.
-req = req.set('x-auth-token', 'string')
+req = req.set('x-auth-token', TOKEN)
Also applies to: 25-25
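One hedged way to fix both occurrences is to resolve the token once per suite and fail fast when it is missing; the environment variable name below is an assumption, not something the repo defines:

```javascript
// Sketch: fail fast when no real token is available instead of sending 'string'.
function resolveToken(env = process.env) {
	const token = env.INTEGRATION_AUTH_TOKEN // assumed variable name
	if (!token) throw new Error('INTEGRATION_AUTH_TOKEN is not set')
	return token
}
```

A spec would then call `req.set('x-auth-token', resolveToken())` instead of the literal, and a misconfigured CI run fails with a clear message rather than a 401.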
🤖 Prompt for AI Agents
In @src/integration-tests-new/users/users.spec.js around line 12, The test
currently sets the header with a hard-coded string in req.set('x-auth-token',
'string'), which will fail real auth; replace the literal 'string' with the
valid test token variable (e.g., TOKEN) or a helper that fetches/creates a valid
token, and do the same change for the other occurrence mentioned (line 25);
ensure the req variable uses req.set('x-auth-token', TOKEN) or equivalent token
retrieval before sending the request.
```js
const tenantCodes = [tenantCode, defaults.tenantCode]
const orgCodes = [decodedToken.organization_code, defaults.orgCode]
const orgCode = [decodedToken.organization_code]
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the context around line 69 in src/services/issues.js
sed -n '50,90p' src/services/issues.js
Repository: ELEVATE-Project/mentoring
Length of output: 1393
🏁 Script executed:
# Search for notificationTemplates.get usage patterns
rg -nP -C5 'notificationTemplates\.get\(' --type=js
Repository: ELEVATE-Project/mentoring
Length of output: 9817
🏁 Script executed:
# Search for email template implementations and orgCode handling
rg -nP -C3 'orgCode' src/services/ --type=js
Repository: ELEVATE-Project/mentoring
Length of output: 50382
🏁 Script executed:
# Search for how orgCode is used with notificationTemplates
rg -nP -C5 'notificationTemplates' src/services/issues.js --type=js
Repository: ELEVATE-Project/mentoring
Length of output: 448
🏁 Script executed:
# Find the cacheHelper and notificationTemplates.get() implementation
fd -t f 'cacheHelper\.js|notification.*\.js' src/ | head -20
Repository: ELEVATE-Project/mentoring
Length of output: 1329
🏁 Script executed:
# Search for notificationTemplates.get method definition
rg -nP 'notificationTemplates.*get\s*[:=]|get\s*\(' src/ --type=js -A5 -B2 | grep -A10 'notificationTemplates'
Repository: ELEVATE-Project/mentoring
Length of output: 9949
🏁 Script executed:
# Look for cache helper file structure
find src -name "*cache*" -type f
Repository: ELEVATE-Project/mentoring
Length of output: 121
🏁 Script executed:
# Check the complete notificationTemplates.get method signature and implementation
sed -n '1502,1600p' src/generics/cacheHelper.js
Repository: ELEVATE-Project/mentoring
Length of output: 3117
🏁 Script executed:
# Check for any type validation or usage of orgCode parameter
rg -nP -A30 'async get\(tenantCode, orgCode, templateCode\)' src/generics/cacheHelper.js
Repository: ELEVATE-Project/mentoring
Length of output: 1275
Remove the array brackets around decodedToken.organization_code.
The notificationTemplates.get() method expects orgCode as a string, not an array. The array syntax breaks cache key generation and database queries, causing template lookups to always fail and fall back to defaults. All other service files correctly pass orgCode as a string value.
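A minimal illustration of the fix (the token payload below is hypothetical):

```javascript
const decodedToken = { organization_code: 'ORG_001' } // hypothetical JWT payload

// Before: const orgCode = [decodedToken.organization_code] -- an array, which
// breaks cache-key generation inside notificationTemplates.get().
const orgCode = decodedToken.organization_code // after: a plain string
```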
🤖 Prompt for AI Agents
In @src/services/issues.js around line 69, The issue is that orgCode is being
set as an array which breaks cache keys and DB queries; change the assignment in
issues.js so orgCode is a string by replacing the array form (const orgCode =
[decodedToken.organization_code]) with a plain string assignment from
decodedToken.organization_code so that the value passed to
notificationTemplates.get() is a string.
```diff
 //PostgreSQL connection string
-const connectionString = 'postgres://postgres:postgres@localhost:5432/mentoring-local'
+const connectionString = 'postgres://postgres:postgres@localhost:5432/elevate-mentoring'
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for old and new database name references across the codebase
echo "=== Searching for old database name 'mentoring-local' ==="
rg -i 'mentoring-local' --type-list | head -5 || true
rg -n 'mentoring-local' -t js -t ts -t json -t yaml -t yml
echo -e "\n=== Searching for new database name 'elevate-mentoring' ==="
rg -n 'elevate-mentoring' -t js -t ts -t json -t yaml -t yml
echo -e "\n=== Checking for docker-compose files ==="
fd -e yml -e yaml docker-compose
echo -e "\n=== Checking current setupFileAfterEnv.js content ==="
cat -n src/setupFileAfterEnv.js
Repository: ELEVATE-Project/mentoring
Length of output: 2257
🏁 Script executed:
#!/bin/bash
# Additional search for any hardcoded database references or connection strings
echo "=== Searching for postgres connection strings ==="
rg -n 'postgres://.*@localhost.*/' --type-add 'config:*.{js,ts,json,yml,yaml}' -t config
echo -e "\n=== Searching for database name patterns ==="
rg -n "database['\"]?\s*[:=]\s*['\"].*-.*['\"]" -t js -t ts
Repository: ELEVATE-Project/mentoring
Length of output: 280
🏁 Script executed:
#!/bin/bash
# Check docker-compose files for database configuration
echo "=== Checking docker-compose.yml ==="
cat -n docker-compose.yml | grep -A5 -B5 -i 'POSTGRES_DB\|database' || echo "No matches found"
echo -e "\n=== Checking docker-compose-mentoring.yml ==="
cat -n docker-compose-mentoring.yml | grep -A5 -B5 -i 'POSTGRES_DB\|database' || echo "No matches found"
echo -e "\n=== Checking docker-compose-dev.yml ==="
cat -n docker-compose-dev.yml | grep -A5 -B5 -i 'POSTGRES_DB\|database' || echo "No matches found"
echo -e "\n=== Full docker-compose.yml for context ==="
cat docker-compose.yml
Repository: ELEVATE-Project/mentoring
Length of output: 10005
Update docker-compose-dev.yml to use the correct database name.
The test setup in src/setupFileAfterEnv.js now connects to database elevate-mentoring, but docker-compose-dev.yml line 58 still configures mentoring-local. This mismatch will cause tests to fail with a database connection error. Update the DEV_DATABASE_URL in docker-compose-dev.yml to use elevate-mentoring to match the other configuration files (docker-compose-mentoring.yml already has the correct value).
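A quick way to catch this class of drift is to compare the database name across connection strings; a sketch using Node's WHATWG `URL` parser (the two URLs below mirror the values discussed above):

```javascript
// Extract the database name from a postgres:// connection string.
const dbName = (connectionString) => new URL(connectionString).pathname.slice(1)

const testDb = dbName('postgres://postgres:postgres@localhost:5432/elevate-mentoring')
const devDb = dbName('postgres://postgres:postgres@localhost:5432/mentoring-local')
```

An assertion like `testDb === devDb` in a preflight script would flag the mismatch before any test runs, instead of surfacing as an opaque connection error.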
🤖 Prompt for AI Agents
In @src/setupFileAfterEnv.js around line 9, The DEV_DATABASE_URL in
docker-compose-dev.yml must be changed to match the test connection string;
update the value currently set to "mentoring-local" to use "elevate-mentoring"
so it matches the connectionString used in src/setupFileAfterEnv.js (and
docker-compose-mentoring.yml). Locate DEV_DATABASE_URL in docker-compose-dev.yml
(around the existing DB config block) and replace the database name portion
accordingly.
Actionable comments posted: 4
Fix all issues with AI Agents 🤖
In @.circleci/config.yml:
- Around line 60-64: The CI step named "Copy integration config" fails because
it runs the command `cp dev-ops/sample_config.json src/config.json` but the
`dev-ops/sample_config.json` path doesn't exist; either add the missing file and
directory with the expected config contents (create
`dev-ops/sample_config.json`) or update the command in the
`.circleci/config.yml` run step to copy from the actual existing sample config
path (replace `dev-ops/sample_config.json` with the correct location). Ensure
the chosen fix provides the required config content for `src/config.json`.
- Around line 68-75: Remove the failing cd and point the docker-compose commands
at the repo root and correct services: update the step that runs Start
infrastructure containers in .circleci/config.yml to run docker-compose
pull/build/up from the repository root (remove or replace the `cd dev-ops`),
ensure docker-compose is invoked against the correct compose file (default in
repo root) and start only services that exist in that compose (replace `citus
kafka redis` with the actual services defined, e.g., `kafka redis`), or if
`citus` is required add it to the compose file; verify the compose filename and
service names match the docker-compose.yml in the repo root before committing.
- Around line 37-42: The CircleCI job step named "Clone User service" currently
clones the user repo using the hardcoded branch
mentoring_integration_test_setup; add fallback handling or documentation: update
the Clone User service step to first attempt that branch and if it fails fall
back to a stable branch (e.g., main/master) or a specific tag, or alternatively
add a comment/README entry explaining why mentoring_integration_test_setup is
required and who maintains it; reference the Clone User service step and the
branch name mentoring_integration_test_setup when making the change so the logic
or docs are applied to the correct pipeline step.
- Around line 94-99: The CI migration step fails because .sequelizerc references
./configs/postgres.js which doesn't exist; either create src/configs/postgres.js
exporting a Sequelize config object that includes an "integration" environment
(host, port, username, password, database, dialect: "postgres", and any pool/ssl
settings) or update .sequelizerc to point to your actual config file location;
ensure the exported shape matches what sequelize-cli expects (environments like
integration, development, test) so npx sequelize-cli db:migrate --env
integration can load the correct settings.
🧹 Nitpick comments (3)
.circleci/config.yml (3)
79-90: Add explicit failure handling for database readiness timeout. The current wait loop will silently complete after 30 attempts without signaling failure. If the database never becomes ready, subsequent migration steps will fail with unclear errors.
🔎 Proposed fix to add timeout handling
```diff
       - run:
           name: Wait for Postgres to accept connections
           command: |
+            DB_READY=false
             for i in {1..30}; do
               if docker exec citus pg_isready -U postgres; then
                 echo "Database is ready"
+                DB_READY=true
                 break
               fi
               echo "Waiting for database..."
               sleep 2
             done
+            if [ "$DB_READY" = false ]; then
+              echo "ERROR: Database failed to become ready after 60 seconds"
+              exit 1
+            fi
```
121-131: Decide whether to implement or remove the commented health check. The commented health check suggests uncertainty about whether application services need a readiness check before tests run. Without this check, integration tests may execute before the mentoring service is fully initialized, potentially causing flaky test failures.
Consider either:
- Implementing the health check if the mentoring service requires startup time
- Removing the commented code if testing shows services are consistently ready after `docker-compose up -d`
To test service readiness, you could temporarily enable the health check in a test run and verify whether it prevents any race conditions.
150-156: Consider the implications of the branch filter for early issue detection. The workflow only runs on the `dev`, `develop`, and `main` branches. This means integration tests won't run on feature branches or pull requests, which could delay the discovery of integration issues until after merge.
If early feedback is desired, consider one of these approaches:
- Add PR triggers to run integration tests on pull requests
- Create a separate, lighter workflow for PRs with smoke tests
- Document that developers should manually test integration scenarios before merging
The current approach conserves CI resources but trades off early issue detection.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.circleci/config.yml
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1426
File: src/database/migrations/20251020081719-add-orgEntity-type.js:10-12
Timestamp: 2025-11-06T06:25:57.830Z
Learning: In the ELEVATE-Project/mentoring repository, entity type migrations (like 20251020081719-add-orgEntity-type.js) run before tenant-specific migrations. Therefore, down migrations for these entity types do not need to filter by organization_code and tenant_code, as multi-tenant data does not exist at the time these migrations execute.
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1319
File: src/database/models/reportQueries.js:19-28
Timestamp: 2025-08-17T08:39:06.865Z
Learning: In the ELEVATE-Project/mentoring codebase, when transitioning models to use composite primary keys with tenant_code and organization_code, existing unique indexes (such as those using organization_id) are intentionally kept unchanged and do not need to be updated to include the new tenant/organization fields, as this aligns with their multi-tenant architecture approach.
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1319
File: src/database/queries/reportTypes.js:13-19
Timestamp: 2025-08-17T08:50:22.819Z
Learning: In multi-tenant database query methods, when accepting an options parameter that gets spread into Sequelize queries, ensure that critical fields like tenant_code are applied after spreading options to prevent bypassing tenant isolation. Use pattern: `const { where: optionsWhere = {}, ...rest } = options || {}; const where = { ...optionsWhere, tenant_code: tenantCode }` to safely merge caller options while enforcing tenant scoping.
🔇 Additional comments (2)
.circleci/config.yml (2)
20-33: LGTM: Dependency caching is properly configured. The dependency installation and caching for the mentoring service are correctly structured with cache keys based on `package.json` checksums, ensuring efficient CI builds.
144-146: No action needed — the test runner is already properly configured to output results to the expected path. The jest-junit reporter in `src/integrationJest.config.js` explicitly configures the output directory as `../dev-ops/report` (which resolves to the repository root's `dev-ops/report` when running from the `src/` directory). The jest-junit reporter automatically creates this directory during test execution, so the CircleCI `store_test_results` step will succeed.
Likely an incorrect or invalid review comment.
```diff
 - run:
     name: Executing unit test cases
     command: cd src/ && npm run test -- --coverage --collectCoverageFrom="./services/**"
 - store_artifacts:
     path: src/coverage/
     destination: /coverage/
 - sonarcloud/scan
 - run:
     name: Checking prerequisites
     command: |-
       docker-compose --version
 - run:
-    name: Cloning user service
-    command: cd ../ && git clone https://github.com/ELEVATE-Project/user.git --branch dev --single-branch
+    name: Clone User service
+    command: |
+      cd ..
+      git clone https://github.com/ELEVATE-Project/user.git --branch mentoring_integration_test_setup --single-branch
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the User service branch exists
git ls-remote --heads https://github.com/ELEVATE-Project/user.git mentoring_integration_test_setup 2>&1
Repository: ELEVATE-Project/mentoring
Length of output: 153
🏁 Script executed:
# Also check if there are other references to this branch in the codebase
rg "mentoring_integration_test_setup" --type yaml --type yml -i
Repository: ELEVATE-Project/mentoring
Length of output: 96
🏁 Script executed:
# Get the full CircleCI config context to understand the pipeline better
cat -n .circleci/config.yml | head -60
Repository: ELEVATE-Project/mentoring
Length of output: 1958
🏁 Script executed:
# Search for all references to the branch name in all files
rg "mentoring_integration_test_setup" -i
Repository: ELEVATE-Project/mentoring
Length of output: 51
🏁 Script executed:
# Check git history to see when this branch was created and how often it's referenced
git log --all --oneline -i --grep="mentoring_integration_test_setup" | head -20
Repository: ELEVATE-Project/mentoring
Length of output: 51
🏁 Script executed:
# Look at the repository structure and recent commits to understand the integration test setup context
git log --oneline -20 .circleci/config.yml
Repository: ELEVATE-Project/mentoring
Length of output: 85
Consider adding fallback handling or documentation for the User service branch dependency.
The User service is cloned from the specific branch `mentoring_integration_test_setup`. While the branch currently exists, there is no fallback mechanism if the branch is renamed, deleted, or becomes stale. Consider either:
- Documenting why this specific branch is required and maintaining it long-term
- Implementing a fallback to a stable branch (e.g., main/master) if the integration branch is unavailable
- Using version tags instead of branch names for more stable deployments
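One way to express the suggested fallback in the pipeline step (branch names are the ones discussed above; treat them as assumptions and adjust to whatever is actually maintained):

```shell
#!/bin/bash
# Sketch of a fallback clone: try each branch in order and stop at the first
# one that clones successfully.
clone_user_service() {
	local repo="https://github.com/ELEVATE-Project/user.git"
	local branch
	for branch in "$@"; do
		if git clone "$repo" --branch "$branch" --single-branch user 2>/dev/null; then
			echo "cloned branch: $branch"
			return 0
		fi
	done
	echo "no requested branch could be cloned" >&2
	return 1
}

# In the CircleCI step:
#   cd .. && clone_user_service mentoring_integration_test_setup dev
```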
🤖 Prompt for AI Agents
In @.circleci/config.yml around lines 37-42, The CircleCI job step named "Clone
User service" currently clones the user repo using the hardcoded branch
mentoring_integration_test_setup; add fallback handling or documentation: update
the Clone User service step to first attempt that branch and if it fails fall
back to a stable branch (e.g., main/master) or a specific tag, or alternatively
add a comment/README entry explaining why mentoring_integration_test_setup is
required and who maintains it; reference the Clone User service step and the
branch name mentoring_integration_test_setup when making the change so the logic
or docs are applied to the correct pipeline step.
```yaml
- run:
    name: Copy integration config
    command: |
      cp dev-ops/sample_config.json src/config.json
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for sample config file
if [ -f "dev-ops/sample_config.json" ]; then
echo "✓ Sample config found at dev-ops/sample_config.json"
echo "File details:"
ls -lh dev-ops/sample_config.json
echo ""
echo "File contents:"
cat dev-ops/sample_config.json
else
echo "✗ Sample config NOT found at dev-ops/sample_config.json"
echo ""
echo "Listing dev-ops directory:"
ls -la dev-ops/ 2>/dev/null || echo "dev-ops directory does not exist"
fi
Repository: ELEVATE-Project/mentoring
Length of output: 185
🏁 Script executed:
#!/bin/bash
# Check the CircleCI config around lines 60-64
echo "CircleCI config lines 55-70:"
sed -n '55,70p' .circleci/config.yml
Repository: ELEVATE-Project/mentoring
Length of output: 533
Fix missing dev-ops directory and sample config file.
The pipeline attempts to copy dev-ops/sample_config.json to src/config.json, but this file and the entire dev-ops directory do not exist in the repository. The CI build will fail at this step.
Either create the missing dev-ops/sample_config.json file with the required configuration, or update the copy command to reference the correct path where the sample config is actually located.
🤖 Prompt for AI Agents
In @.circleci/config.yml around lines 60-64, The CI step named "Copy integration
config" fails because it runs the command `cp dev-ops/sample_config.json
src/config.json` but the `dev-ops/sample_config.json` path doesn't exist; either
add the missing file and directory with the expected config contents (create
`dev-ops/sample_config.json`) or update the command in the
`.circleci/config.yml` run step to copy from the actual existing sample config
path (replace `dev-ops/sample_config.json` with the correct location). Ensure
the chosen fix provides the required config content for `src/config.json`.
```yaml
- run:
    name: Start infrastructure containers
    command: |
      cd dev-ops
      docker-compose pull
      docker-compose build
      docker-compose up -d citus kafka redis
```
🧩 Analysis chain
🏁 Script executed:
if [ -f "dev-ops/docker-compose.yml" ]; then
echo "=== Docker-compose file found ==="
echo ""
echo "=== Checking for required services ==="
grep -E "^\s+[a-z_-]+:\s*$" dev-ops/docker-compose.yml | head -20
else
echo "docker-compose.yml not found in dev-ops directory"
find . -name "docker-compose.yml" -o -name "docker-compose.yaml" 2>/dev/null | head -10
fi
Repository: ELEVATE-Project/mentoring
Length of output: 139
🏁 Script executed:
echo "=== Contents of dev-ops directory ==="
ls -la dev-ops/ 2>/dev/null || echo "dev-ops directory not found or empty"
echo ""
echo "=== Services defined in root docker-compose.yml ==="
grep -E "^\s+[a-z_-]+:\s*$" ./docker-compose.yml | sed 's/^\s*//;s/:$//'
Repository: ELEVATE-Project/mentoring
Length of output: 663
Fix critical path and service configuration issues in CircleCI pipeline.
The CircleCI config has multiple issues:
- The `cd dev-ops` command will fail because the dev-ops directory doesn't exist
- The `docker-compose.yml` file is located in the repository root, not in a dev-ops directory
- The `citus` service being started doesn't exist in the docker-compose configuration; only `kafka` and `redis` are defined
- The docker-compose commands will fail trying to locate the configuration file
Either remove the cd dev-ops prefix and correct the command path, or create the required dev-ops directory with a proper docker-compose.yml file. Additionally, verify whether the citus service is needed or should be removed from the startup sequence.
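If `citus` really is unnecessary and the compose file stays in the repository root, the step could be reduced to something like this sketch (service names taken from the findings above, not verified against the actual compose file):

```yaml
- run:
    name: Start infrastructure containers
    command: |
      docker-compose pull
      docker-compose build
      docker-compose up -d kafka redis
```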
🤖 Prompt for AI Agents
In @.circleci/config.yml around lines 68-75, Remove the failing cd and point the
docker-compose commands at the repo root and correct services: update the step
that runs Start infrastructure containers in .circleci/config.yml to run
docker-compose pull/build/up from the repository root (remove or replace the `cd
dev-ops`), ensure docker-compose is invoked against the correct compose file
(default in repo root) and start only services that exist in that compose
(replace `citus kafka redis` with the actual services defined, e.g., `kafka
redis`), or if `citus` is required add it to the compose file; verify the
compose filename and service names match the docker-compose.yml in the repo root
before committing.
```yaml
- run:
    name: Run DB migrations
    command: |
      cd src
      npx sequelize-cli db:migrate --env integration
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Explore repository structure
echo "=== Repository structure ==="
find . -type f -name ".sequelizerc" -o -name "config.json" -o -name "config.js" | head -20
echo -e "\n=== Looking for Sequelize config in src directory ==="
ls -la src/ 2>/dev/null | grep -E "config|sequelize"
echo -e "\n=== Checking src/config directory ==="
if [ -d "src/config" ]; then
ls -la src/config/
fi
echo -e "\n=== Checking for .sequelizerc in repo root ==="
if [ -f ".sequelizerc" ]; then
cat .sequelizerc
fi
echo -e "\n=== Checking for .sequelizerc in src ==="
if [ -f "src/.sequelizerc" ]; then
cat src/.sequelizerc
fi
Repository: ELEVATE-Project/mentoring
Length of output: 1052
🏁 Script executed:
#!/bin/bash
# Check Sequelize configuration files
echo "=== Checking config/config.js ==="
if [ -f "src/config/config.js" ]; then
head -50 src/config/config.js
fi
echo -e "\n=== Checking config/config.json ==="
if [ -f "src/config/config.json" ]; then
cat src/config/config.json
fi
echo -e "\n=== Checking for migrations and seeders directories ==="
if [ -d "src/migrations" ]; then
echo "migrations directory exists"
ls src/migrations | head -5
fi
if [ -d "src/seeders" ]; then
echo "seeders directory exists"
ls src/seeders | head -5
fi
Repository: ELEVATE-Project/mentoring
Length of output: 389
🏁 Script executed:
#!/bin/bash
# Check the postgres.js config file
echo "=== Checking src/configs/postgres.js ==="
if [ -f "src/configs/postgres.js" ]; then
cat src/configs/postgres.js
else
echo "File not found at src/configs/postgres.js"
# Try to find it
find . -name "postgres.js" -type f
fi
echo -e "\n=== Checking for seeders directory ==="
if [ -d "src/database/seeders" ]; then
echo "seeders directory exists"
ls -la src/database/seeders | head -10
else
echo "seeders directory not found"
fi
echo -e "\n=== Checking database directory structure ==="
if [ -d "src/database" ]; then
ls -la src/database/
fiRepository: ELEVATE-Project/mentoring
Length of output: 1094
🏁 Script executed:
#!/bin/bash
# Search for postgres.js
echo "=== Searching for postgres.js ==="
find . -name "postgres.js" -type f
echo -e "\n=== Checking if src/configs directory exists ==="
if [ -d "src/configs" ]; then
ls -la src/configs/
else
echo "src/configs directory does not exist"
fi
echo -e "\n=== Searching for all config files ==="
find src -name "*config*" -type f | grep -v node_modules
echo -e "\n=== Checking .sequelizerc content again and looking at actual path ==="
cat src/.sequelizerc
echo -e "\n=== What's actually in src/ directory ==="
ls -la src/
Repository: ELEVATE-Project/mentoring
Length of output: 3555
🏁 Script executed:
#!/bin/bash
# Check envVariables.js
echo "=== Checking envVariables.js ==="
head -100 src/envVariables.js
echo -e "\n=== Searching for Sequelize initialization ==="
grep -r "sequelize\|Sequelize" src --include="*.js" | grep -v node_modules | grep -v test | head -15
echo -e "\n=== Checking app.js for database config ==="
grep -A 5 -B 5 "sequelize\|database\|DB_" src/app.js | head -30
Repository: ELEVATE-Project/mentoring
Length of output: 3986
🏁 Script executed:
#!/bin/bash
# Check database models initialization
echo "=== Checking database/models directory ==="
ls -la src/database/models/ | head -15
echo -e "\n=== Checking for index.js in models ==="
if [ -f "src/database/models/index.js" ]; then
cat src/database/models/index.js
fi
echo -e "\n=== Searching for database connection ==="
grep -r "new Sequelize\|require.*sequelize" src --include="*.js" | grep -v node_modules | head -10
Repository: ELEVATE-Project/mentoring
Length of output: 2971
🏁 Script executed:
#!/bin/bash
# Search for any postgres-related config files
echo "=== Searching for postgres config files ==="
find . -name "*postgres*" -type f 2>/dev/null | grep -v node_modules
echo -e "\n=== Checking sample.config.json ==="
cat src/sample.config.json
echo -e "\n=== Checking if there's any config directory elsewhere ==="
find src -type d -name "configs" 2>/dev/null
echo -e "\n=== Checking how environment is set in CircleCI ==="
grep -A 20 "environment:" .circleci/config.yml | head -30
Repository: ELEVATE-Project/mentoring
Length of output: 1245
🏁 Script executed:
#!/bin/bash
# Check the actual CircleCI config for environment and setup
echo "=== Checking .circleci/config.yml for integration test section ==="
sed -n '75,125p' .circleci/config.yml
echo -e "\n=== Checking for any setup or initialization scripts ==="
cat src/setup.sh
echo -e "\n=== Checking .env files ==="
ls -la src/.env* 2>/dev/null
head -20 src/.env.sample 2>/dev/null
Repository: ELEVATE-Project/mentoring
Length of output: 3302
🏁 Script executed:
#!/bin/bash
# Check if config files are generated during CI/CD setup
echo "=== Checking entire CircleCI config for config file generation ==="
grep -n "postgres.js\|configs" .circleci/config.yml
echo -e "\n=== Checking if config is created from environment variables ==="
grep -n "DATABASE\|DB_" src/.env.sample | head -20
echo -e "\n=== Looking for any documentation or README ==="
find . -maxdepth 2 -name "README*" -o -name "SETUP*" | head -5
if [ -f "README.md" ]; then
grep -A 10 -B 2 "database\|sequelize\|postgres" README.md | head -30
fi
echo -e "\n=== Checking if there's any pre-migration setup in CircleCI ==="
grep -B 30 "Run DB migrations" .circleci/config.yml | grep -E "run:|command:|name:" | tail -20
Repository: ELEVATE-Project/mentoring
Length of output: 534
🏁 Script executed:
#!/bin/bash
# Get the full CircleCI config to understand the complete flow
echo "=== Checking full integration job setup ==="
grep -n "integration:" .circleci/config.yml -A 150 | head -200
echo -e "\n=== Checking if config files exist at root level ==="
ls -la | grep -E "config|postgres|\.env"
echo -e "\n=== Check if there's a separate config generation script ==="
find . -maxdepth 2 -type f -name "*.sh" | xargs grep -l "postgres.js\|configs" 2>/dev/null
Repository: ELEVATE-Project/mentoring
Length of output: 243
🏁 Script executed:
#!/bin/bash
# Check start.sh script
echo "=== Checking start.sh script ==="
cat src/start.sh
echo -e "\n=== Check the actual CircleCI integration job more carefully ==="
sed -n '/integration:/,/^ [a-z]/p' .circleci/config.yml | head -150
Repository: ELEVATE-Project/mentoring
Length of output: 482
🏁 Script executed:
#!/bin/bash
# Look for CI-specific config files
echo "=== Searching for CI-specific config files ==="
find . -name "*.ci.js" -o -name "*integration*config*" | grep -v node_modules | head -20
echo -e "\n=== Check docker-compose for mentoring service setup ==="
find . -name "docker-compose*.yml" -o -name "docker-compose*.yaml" | head -5
if [ -f "dev-ops/docker-compose.yml" ]; then
grep -A 30 "mentoring:" dev-ops/docker-compose.yml | head -40
fi
echo -e "\n=== Check for configs directory with CI setup in docker images ==="
find . -type d -name "configs" 2>/dev/null
echo -e "\n=== Check if configs is created as part of docker build ==="
find . -name "Dockerfile*" | xargs grep -l "configs" 2>/dev/null
Repository: ELEVATE-Project/mentoring
Length of output: 506
🏁 Script executed:
#!/bin/bash
# Check docker-compose files for mentoring service
echo "=== Checking docker-compose.yml for mentoring service ==="
grep -A 30 "mentoring:" docker-compose.yml 2>/dev/null | head -40
echo -e "\n=== Checking for environment variables or volume mounts in docker-compose ==="
grep -B 5 -A 20 "mentoring" docker-compose.yml 2>/dev/null | grep -E "environment:|volumes:|image:|working_dir:" | head -20
echo -e "\n=== Looking for any Dockerfile ==="
find . -name "Dockerfile" | head -10
echo -e "\n=== Check dev-ops directory structure ==="
if [ -d "dev-ops" ]; then
ls -la dev-ops/
fi
Repository: ELEVATE-Project/mentoring
Length of output: 1255
🏁 Script executed:
#!/bin/bash
# Check Dockerfile
echo "=== Checking Dockerfile ==="
cat Dockerfile
echo -e "\n=== Checking dev-ops directory ==="
ls -la dev-ops/ 2>/dev/null || echo "dev-ops not found"
echo -e "\n=== Search for where postgres.js config should be created ==="
grep -r "configs/postgres" src --include="*.js" | grep -v node_modules | head -10
echo -e "\n=== Check if there's a .dockerignore or any build scripts ==="
cat .dockerignore 2>/dev/null || echo "No .dockerignore"
echo -e "\n=== Look at the complete package.json for db scripts ==="
grep -A 5 "\"db:" src/package.json
Repository: ELEVATE-Project/mentoring
Length of output: 1427
🏁 Script executed:
#!/bin/bash
# Check the migration and seeder scripts
echo "=== Checking scripts/run-migrations.js ==="
if [ -f "src/scripts/run-migrations.js" ]; then
cat src/scripts/run-migrations.js
fi
echo -e "\n=== Checking scripts/run-seeders.js ==="
if [ -f "src/scripts/run-seeders.js" ]; then
cat src/scripts/run-seeders.js
fi
echo -e "\n=== List all scripts ==="
ls -la src/scripts/
echo -e "\n=== Check if postgres.js needs to be created from env vars ==="
grep -r "DATABASE_URL\|DB_" src/.env.sample | head -10
Repository: ELEVATE-Project/mentoring
Length of output: 2825
🏁 Script executed:
#!/bin/bash
# Check what env variables are defined and needed for postgres
echo "=== Full .env.sample to see database configuration ==="
cat src/.env.sample | grep -i "database\|postgres\|db_"
echo -e "\n=== Check CircleCI environment setup more carefully ==="
sed -n '1,100p' .circleci/config.yml | grep -E "environment:|DATABASE|POSTGRES|DB_" -A 2
echo -e "\n=== Check entire CircleCI integration job ==="
cat .circleci/config.yml | sed -n '/integration:/,/^[a-z]/p' | head -200
Repository: ELEVATE-Project/mentoring
Length of output: 227
🏁 Script executed:
#!/bin/bash
# Read the CircleCI config directly
echo "=== Reading .circleci/config.yml directly ==="
wc -l .circleci/config.yml
echo -e "\n=== Showing lines around the integration migrations (lines 80-120) ==="
sed -n '80,120p' .circleci/config.yml
echo -e "\n=== Looking for environment setup in CircleCI ==="
head -50 .circleci/config.yml
echo -e "\n=== Check if there's an env file being passed in the CI job ==="
grep -n "env_file\|environment:" .circleci/config.yml | head -20
Repository: ELEVATE-Project/mentoring
Length of output: 2731
The Sequelize config file referenced in .sequelizerc is missing and must be created.
The `.sequelizerc` file points to `./configs/postgres.js`, but this file does not exist. Both the Sequelize CLI commands and the application's database model initialization depend on this file to load the integration environment configuration. The migrations and seeders will fail without it.
Create src/configs/postgres.js with the required integration environment configuration for the PostgreSQL database, or update .sequelizerc to point to the actual location of the Sequelize configuration file.
🤖 Prompt for AI Agents
In @.circleci/config.yml around lines 94-99, The CI migration step fails because
.sequelizerc references ./configs/postgres.js which doesn't exist; either create
src/configs/postgres.js exporting a Sequelize config object that includes an
"integration" environment (host, port, username, password, database, dialect:
"postgres", and any pool/ssl settings) or update .sequelizerc to point to your
actual config file location; ensure the exported shape matches what
sequelize-cli expects (environments like integration, development, test) so npx
sequelize-cli db:migrate --env integration can load the correct settings.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/start.sh (1)
12-13: Remove debug comment artifacts. Lines 12-13 contain generic "running script" comments that appear to be debug leftovers. These should be removed to keep the script clean.
🔎 Proposed cleanup
```diff
 # echo "[MENTORING] Running seeds..."
 # npm run db:seed:all:integration
-# running script
-# running script 2
 echo "[MENTORING] Starting application..."
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/start.sh
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1426
File: src/database/migrations/20251020081719-add-orgEntity-type.js:10-12
Timestamp: 2025-11-06T06:25:57.830Z
Learning: In the ELEVATE-Project/mentoring repository, entity type migrations (like 20251020081719-add-orgEntity-type.js) run before tenant-specific migrations. Therefore, down migrations for these entity types do not need to filter by organization_code and tenant_code, as multi-tenant data does not exist at the time these migrations execute.
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1319
File: src/database/queries/reportTypes.js:13-19
Timestamp: 2025-08-17T08:50:22.819Z
Learning: In multi-tenant database query methods, when accepting an options parameter that gets spread into Sequelize queries, ensure that critical fields like tenant_code are applied after spreading options to prevent bypassing tenant isolation. Use pattern: `const { where: optionsWhere = {}, ...rest } = options || {}; const where = { ...optionsWhere, tenant_code: tenantCode }` to safely merge caller options while enforcing tenant scoping.
Learnt from: sumanvpacewisdom
Repo: ELEVATE-Project/mentoring PR: 1319
File: src/database/models/reportQueries.js:19-28
Timestamp: 2025-08-17T08:39:06.865Z
Learning: In the ELEVATE-Project/mentoring codebase, when transitioning models to use composite primary keys with tenant_code and organization_code, existing unique indexes (such as those using organization_id) are intentionally kept unchanged and do not need to be updated to include the new tenant/organization fields, as this aligns with their multi-tenant architecture approach.
Learnt from: rakeshSgr
Repo: ELEVATE-Project/mentoring PR: 1481
File: src/database/queries/sessionAttendees.js:304-330
Timestamp: 2025-12-03T08:35:14.163Z
Learning: In Sequelize queries with LEFT JOINs via include, tenant_code filtering on the main table's where clause does NOT automatically apply to the joined table. To enforce tenant isolation across joins, add a where clause with tenant_code to each include block, especially when joining tables that have tenant_code fields. The where clause on an include becomes part of the JOIN's ON condition, properly scoping the join itself.
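The join-scoping rule from the last learning can be sketched with a placeholder model (names invented for illustration):

```javascript
// Placeholder standing in for a Sequelize model class in this sketch.
const SessionAttendee = class {}

// A where clause on an include becomes part of the JOIN's ON condition, so
// putting tenant_code there scopes the LEFT-JOINed rows themselves; filtering
// only on the outer where would leave the joined table unscoped.
function tenantScopedSessionQuery(tenantCode, sessionId) {
	return {
		where: { id: sessionId, tenant_code: tenantCode },
		include: [
			{
				model: SessionAttendee,
				required: false, // keep the LEFT JOIN semantics
				where: { tenant_code: tenantCode }, // scopes the join itself
			},
		],
	}
}
```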
```shell
set -e

# Replace kafka.js with kafka.ci.js
cp /var/src/configs/kafka.ci.js /var/src/configs/kafka.js
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
echo "=== Checking for /var/src paths in shell scripts ==="
rg -n '/var/src' --type sh
echo -e "\n=== Checking for Docker/container files ==="
fd -E 'node_modules' \( -iname 'dockerfile*' -o -iname 'docker-compose*' -o -iname '.dockerignore' \)
echo -e "\n=== Checking for kubernetes configs ==="
fd -E 'node_modules' \( -iname '*.k8s.yml' -o -iname '*.k8s.yaml' -o -iname 'kustomization.yaml' \)
echo -e "\n=== Checking for CI configs (GitHub, GitLab, etc) ==="
fd -E 'node_modules' \( -path '*/.github/*' -o -path '*/.gitlab-ci.yml' -o -path '*/Jenkinsfile' \)
echo -e "\n=== Examining the start.sh file context ==="
wc -l src/start.sh
cat -n src/start.sh
Repository: ELEVATE-Project/mentoring
Length of output: 1507
🏁 Script executed:
#!/bin/bash
echo "=== Checking README files ==="
fd -i readme
echo -e "\n=== Checking for documentation about setup ==="
fd -i setup | head -20
echo -e "\n=== Checking for CONTRIBUTING or similar docs ==="
fd \( -iname 'CONTRIBUTING*' -o -iname 'DEVELOPMENT*' -o -iname 'SETUP*' \)
echo -e "\n=== Looking for any references to start.sh in docs/configs ==="
rg 'start\.sh' --type md --type txt 2>/dev/null || true
echo -e "\n=== Checking git status/info if available ==="
git log --oneline src/start.sh 2>/dev/null | head -5 || true
echo -e "\n=== Check if src/ has other shell scripts ==="
fd -e sh src/
Repository: ELEVATE-Project/mentoring
Length of output: 2134
🏁 Script executed:
#!/bin/bash
echo "=== Checking src/setup.sh ==="
cat -n src/setup.sh 2>/dev/null || echo "File not found"
echo -e "\n=== Checking git diff for start.sh ==="
git diff HEAD~1 src/start.sh 2>/dev/null || git show src/start.sh 2>/dev/null || echo "Git context not available"
echo -e "\n=== Checking package.json scripts ==="
jq '.scripts' package.json 2>/dev/null | head -30
echo -e "\n=== Checking for compose or Docker setup in root ==="
ls -la | grep -i docker || echo "No Docker files in root"
echo -e "\n=== Check for .env or config patterns ==="
fd -e env -e example | head -10
Repository: ELEVATE-Project/mentoring
Length of output: 2482
🏁 Script executed:
#!/bin/bash
echo "=== Checking Dockerfile ==="
cat Dockerfile
echo -e "\n=== Checking docker-compose.yml for start.sh or /var/src ==="
grep -n 'start\.sh\|/var/src\|command:' docker-compose.yml | head -20
echo -e "\n=== Checking docker-compose-dev.yml ==="
grep -n 'start\.sh\|/var/src\|command:' docker-compose-dev.yml | head -20
echo -e "\n=== Checking for kafka configs ==="
fd kafka
echo -e "\n=== Checking overall directory structure of src/ ==="
find src/ -type f -name '*.js' -o -name '*.sh' -o -name '*.json' | grep -E '(config|kafka|start|setup)' | head -20
Repository: ELEVATE-Project/mentoring
Length of output: 2215
🏁 Script executed:
#!/bin/bash
echo "=== Looking for kafka config files (any pattern) ==="
fd -i kafka
echo -e "\n=== Looking specifically for .js files with 'kafka' in configs/ ==="
find . -path ./node_modules -prune -o -type f -name '*kafka*' -print
echo -e "\n=== Checking if configs directory exists ==="
find src/ -type d -name 'configs'
echo -e "\n=== Search for kafka.ci or kafka.js ==="
rg 'kafka\.(js|ci\.js)' --type js
echo -e "\n=== Check entire src structure for missing directories ==="
ls -la src/ | head -30
Repository: ELEVATE-Project/mentoring
Length of output: 2404
🏁 Script executed:
#!/bin/bash
echo "=== Final verification: Any hidden config files? ==="
find src -name 'kafka*' -o -name 'configs' 2>/dev/null
echo -e "\n=== Checking git history for these files ==="
git log --all --full-history -- src/configs/kafka.ci.js 2>/dev/null | head -10 || echo "File never existed in repo"
git log --all --full-history -- src/configs/kafka.js 2>/dev/null | head -10 || echo "File never existed in repo"
echo -e "\n=== Checking .gitignore for configs ==="
grep -i config .gitignore 2>/dev/null || echo "No .gitignore matches"
Repository: ELEVATE-Project/mentoring
Length of output: 833
This script references non-existent files and will fail at runtime.
The script attempts to copy `/var/src/configs/kafka.ci.js` to `/var/src/configs/kafka.js`, but neither the `kafka.ci.js` file nor the `configs/` directory exists in the repository. These files must be created before this script can execute. The git history confirms these files have never existed in the codebase.
The hardcoded path /var/src/ is appropriate for Docker containerized execution (as evidenced by the Dockerfile WORKDIR and docker-compose volume mounts), so path configuration is not the concern—the missing dependencies are.
Additionally, clean up the commented-out debug lines (# running script, # running script 2 on lines 12-13).