Integrating with Splunk

This article outlines how to pull data from Hoxhunt’s External GraphQL API and push it into Splunk using a simple script and scheduled job. By following the steps below, you can ingest new Hoxhunt data into Splunk for further analysis and visualization.

 

NOTE: Hoxhunt does not currently offer a ready-made integration with Splunk. The purpose of this article is to outline the steps required for customers who use Splunk to create and maintain one.

 

Prerequisites

  • Hoxhunt GraphQL authentication token

    • Generate this under Account Settings > Access Tokens in Hoxhunt.

    • You’ll include this token in the request headers to the Hoxhunt GraphQL endpoint.

  • Splunk HTTP Event Collector (HEC) token

    • In Splunk Enterprise, you can create this under Settings > Data Inputs > HTTP Event Collector; see the smoke-test sketch after this list.

  • Basic scripting knowledge

    • You’ll need to create, maintain, and schedule a script (e.g., Node.js, Python, or similar) that fetches data from Hoxhunt’s GraphQL API, processes it, and posts it to Splunk.
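
To verify the HEC token works before building the full script, a quick smoke test could look like the following (a minimal sketch; the endpoint URL and token are placeholders, and Node 18+ is assumed for its built-in fetch):

// Minimal HEC smoke test. Replace the URL and token with values
// from your own Splunk instance.
const SPLUNK_HEC_ENDPOINT = 'https://splunk.example.com:8088/services/collector/event'
const SPLUNK_HEC_TOKEN = '<your-hec-token>'

fetch(SPLUNK_HEC_ENDPOINT, {
  method: 'POST',
  headers: { Authorization: `Splunk ${SPLUNK_HEC_TOKEN}` },
  body: JSON.stringify({ event: 'hoxhunt integration test' })
})
  .then((res) => res.json())
  .then(console.log)
  .catch(console.error)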

 

 

High-Level Data Flow

Hoxhunt GraphQL API --> Script + Cron Job --> Splunk

 

  1. Script calls Hoxhunt GraphQL API to retrieve data.

  2. Script transforms the returned data into the Splunk-compatible event format.

  3. Script sends these events to Splunk via the HEC endpoint.

  4. Script stores a “last successful timestamp” to filter only new data on subsequent runs.

  5. A scheduled job (e.g., CRON) runs the script at your desired intervals; see the example crontab entry below.
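
For example, a crontab entry along the following lines (the schedule and file paths are illustrative) would run the script every 15 minutes:

# Run the Hoxhunt-to-Splunk sync every 15 minutes
*/15 * * * * /usr/bin/node /opt/hoxhunt-splunk/sync.js >> /var/log/hoxhunt-splunk.log 2>&1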

 

Example pseudocode script:

// Retrieve the timestamp of the last successful run.
// This could be read from a local file, a database, etc.
// It is used to filter the GraphQL query so that only new data is fetched.
function getLastSuccessfulTimestamp() {
}

// Fetch data from the Hoxhunt GraphQL API and parse the JSON body.
function fetchData(lastTimestamp) {
  return fetch(HOXHUNT_GQL_API_ENDPOINT, {
    method: 'POST',
    headers: { Authorization: `Authtoken ${EXTERNAL_API_TOKEN}` },
    body: `Your GraphQL query with a ${lastTimestamp} filter`
  }).then((res) => res.json())
}

// Transform the GraphQL response for Splunk ingestion.
// This can mean splitting the response into individual Splunk events,
// picking relevant fields, renaming fields, etc.
function transformResponse(response) {
  return response
}

// Publish the events to Splunk via the HTTP Event Collector.
function postToSplunk(transformedData) {
  return fetch(SPLUNK_HEC_ENDPOINT, {
    method: 'POST',
    headers: { Authorization: `Splunk ${SPLUNK_HEC_TOKEN}` },
    body: transformedData
  })
}

// Save the new timestamp that will be used the next time the script runs.
function saveSuccessfulTimestamp() {}

getLastSuccessfulTimestamp()
  .then(fetchData)
  .then(transformResponse)
  .then(postToSplunk)
  .then(saveSuccessfulTimestamp)
  .then(() => console.log('done'))
  .catch(console.error)
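
As one possible way to implement the timestamp bookkeeping (a minimal sketch using Node's built-in fs module; the state-file path and epoch fallback are illustrative choices):

const fs = require('fs')

// Illustrative location for the state file.
const STATE_FILE = './last-run-timestamp.txt'

// Read the timestamp of the last successful run,
// falling back to the epoch on the very first run.
function getLastSuccessfulTimestamp() {
  try {
    return Promise.resolve(fs.readFileSync(STATE_FILE, 'utf8').trim())
  } catch (err) {
    return Promise.resolve('1970-01-01T00:00:00.000Z')
  }
}

// Persist the current time so the next run fetches only newer data.
function saveSuccessfulTimestamp() {
  fs.writeFileSync(STATE_FILE, new Date().toISOString())
}

Returning a Promise from getLastSuccessfulTimestamp keeps it drop-in compatible with the .then chain above.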

 

If you are looking for incidents escalated by a specific incident rule, the QUERY could, for example, be:

 

const QUERY = `
  query BecEscalatedIncidents {
    currentUser {
      organization {
        incidentRules(filter: { name_eq: "${INCIDENT_RULE_NAME}" }) {
          name
          incidentMatches(
            filter: { timestamp_gt: "${LAST_SUCCESSFUL_EXECUTION}" }
          ) {
            timestamp
            incident {
              humanReadableId
              createdAt
            }
          }
        }
      }
    }
  }
`
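
The response then has roughly the following shape (the values are illustrative):

{
  "data": {
    "currentUser": {
      "organization": {
        "incidentRules": [
          {
            "name": "BEC escalation rule",
            "incidentMatches": [
              {
                "timestamp": "2024-01-01T12:00:00.000Z",
                "incident": {
                  "humanReadableId": "INC-123",
                  "createdAt": "2024-01-01T11:58:00.000Z"
                }
              }
            ]
          }
        ]
      }
    }
  }
}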

 

The transformResponse function could then be, for example:

 

function transformResponse(response) {
  const incidentMatches =
    response.data.currentUser.organization.incidentRules[0].incidentMatches
  const splunkEvents = incidentMatches.map(
    (match) => `{"event": "${match.incident.humanReadableId} escalated"}`
  )
  return splunkEvents.join('')
}
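
With the illustrative response above, this produces the string {"event": "INC-123 escalated"}. Splunk HEC accepts multiple event objects concatenated in a single request body, so joining with an empty string is sufficient for batching.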


TIP: To minimize Splunk licensing costs and ensure better performance, focus your GraphQL queries on only the data you truly need (for example, just new escalated incidents). You can also compress or batch events if desired.
