@cdklabs/generative-ai-cdk-constructs / bedrock / WebCrawlerDataSourceProps

# Interface: WebCrawlerDataSourceProps
Interface to create a new standalone data source object.
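A hedged sketch of how these properties fit together. The types below are local stand-ins so the shape can be shown without the library installed; in a real CDK app they come from the `bedrock` module of `@cdklabs/generative-ai-cdk-constructs`, and `knowledgeBase` must be an actual `IKnowledgeBase` construct:

```typescript
// Local stand-in types: in a real CDK app these come from
// @cdklabs/generative-ai-cdk-constructs (bedrock module), and `knowledgeBase`
// would be an IKnowledgeBase construct, not a plain object.
interface KnowledgeBaseStandIn {
  name: string;
}

interface WebCrawlerDataSourcePropsSketch {
  knowledgeBase: KnowledgeBaseStandIn; // required
  sourceUrls: string[];                // required, max 100 URLs
  dataSourceName?: string;
  crawlingRate?: number;               // pages per minute per host, max 300
  maxPages?: number;                   // up to 25,000
}

const props: WebCrawlerDataSourcePropsSketch = {
  knowledgeBase: { name: 'my-knowledge-base' }, // placeholder value
  sourceUrls: ['https://www.sitename.com'],
  dataSourceName: 'docs-crawler',
  crawlingRate: 100,
  maxPages: 5000,
};
```

The optional properties shown are a selection; the full set is documented below.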
### chunkingStrategy?

`readonly` `optional` **chunkingStrategy**: `ChunkingStrategy`

The chunking strategy to use for splitting your documents or content. The chunks are then converted to embeddings and written to the vector index, allowing for similarity search and retrieval of the content.

**Default:** `ChunkingStrategy.DEFAULT`

**Inherited from:** `WebCrawlerDataSourceAssociationProps.chunkingStrategy`
### contextEnrichment?

`readonly` `optional` **contextEnrichment**: `ContextEnrichment`

The context enrichment configuration to use.

**Default:** No context enrichment is used.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.contextEnrichment`
### crawlingRate?

`readonly` `optional` **crawlingRate**: `number`

The maximum rate at which pages are crawled, up to 300 per minute per host. Higher values will decrease sync time but increase the load on the host.

**Default:** `300`

**Inherited from:** `WebCrawlerDataSourceAssociationProps.crawlingRate`
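The trade-off is roughly linear. A back-of-envelope sketch (an illustration only; actual sync time also depends on host responsiveness and service-side behavior):

```typescript
// Rough estimate of sync time for a single host. The service caps
// crawlingRate at 300 pages per minute per host, so higher requested
// values are clamped here to mirror that limit.
function estimateSyncMinutes(pages: number, crawlingRate: number): number {
  const effectiveRate = Math.min(crawlingRate, 300);
  return Math.ceil(pages / effectiveRate);
}
```

For example, crawling the 25,000-page maximum at the full rate of 300 pages/minute takes at least 84 minutes.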
### crawlingScope?

`readonly` `optional` **crawlingScope**: `CrawlingScope`

The scope of the crawling.

**Default:** `CrawlingScope.DEFAULT`

**Inherited from:** `WebCrawlerDataSourceAssociationProps.crawlingScope`
### customTransformation?

`readonly` `optional` **customTransformation**: `CustomTransformation`

The custom transformation strategy to use.

**Default:** No custom transformation is used.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.customTransformation`
### dataDeletionPolicy?

`readonly` `optional` **dataDeletionPolicy**: `DataDeletionPolicy`

The data deletion policy to apply to the data source.

**Default:** The default data deletion policy for the data source type.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.dataDeletionPolicy`
### dataSourceName?

`readonly` `optional` **dataSourceName**: `string`

The name of the data source.

**Default:** A new name will be generated.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.dataSourceName`
### description?

`readonly` `optional` **description**: `string`

A description of the data source.

**Default:** No description is provided.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.description`
### filters?

`readonly` `optional` **filters**: `CrawlingFilters`

The filters (regular expression patterns) for the crawling. If there is a conflict, the exclude pattern takes precedence.

**Default:** None

**Inherited from:** `WebCrawlerDataSourceAssociationProps.filters`
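The precedence rule can be illustrated with a small sketch. This helper is illustrative only: `CrawlingFilters` merely carries the patterns, and the actual matching happens on the service side.

```typescript
// Mirrors the documented rule: when both an include pattern and an exclude
// pattern match a URL, the exclude pattern wins. With no include patterns,
// everything not excluded is crawled.
function wouldCrawl(url: string, include: RegExp[], exclude: RegExp[]): boolean {
  if (exclude.some((p) => p.test(url))) {
    return false; // exclude takes precedence on conflict
  }
  return include.length === 0 || include.some((p) => p.test(url));
}
```

For example, with an include pattern of `/blog/` and an exclude pattern of `/blog/drafts/`, a URL under `/blog/drafts/` is skipped even though the include pattern also matches it.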
### kmsKey?

`readonly` `optional` **kmsKey**: `IKey`

The KMS key to use to encrypt the data source.

**Default:** Service-owned and managed key.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.kmsKey`
### knowledgeBase

`readonly` **knowledgeBase**: `IKnowledgeBase`

The knowledge base to associate with the data source.
### maxPages?

`readonly` `optional` **maxPages**: `number`

The maximum number of web pages crawled from your source URLs, up to 25,000 pages. If the web pages exceed this limit, the data source sync will fail and no web pages will be ingested.

**Default:** No limit

**Inherited from:** `WebCrawlerDataSourceAssociationProps.maxPages`
### parsingStrategy?

`readonly` `optional` **parsingStrategy**: `ParsingStategy`

The parsing strategy to use.

**Default:** No parsing strategy is used.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.parsingStrategy`
### sourceUrls

`readonly` **sourceUrls**: `string[]`

The source URLs, in the format `https://www.sitename.com`. Maximum of 100 URLs.

**Inherited from:** `WebCrawlerDataSourceAssociationProps.sourceUrls`
### userAgent?

`readonly` `optional` **userAgent**: `string`

The user agent string to use when crawling.

**Default:** Default user agent string

**Inherited from:** `WebCrawlerDataSourceAssociationProps.userAgent`
### userAgentHeader?

`readonly` `optional` **userAgentHeader**: `string`

The user agent header to use when crawling: a string that identifies the crawler or bot when it accesses a web server. The header value consists of bedrockbot, a UUID, and a user agent suffix for your crawler (if one is provided). By default, it is set to `bedrockbot_UUID`. You can optionally append a custom suffix to `bedrockbot_UUID` to allowlist a specific user agent permitted to access your source URLs.

**Default:** Default user agent header (`bedrockbot_UUID`)
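A sketch of the documented header shape. The UUID is generated by the service, and the exact way a custom suffix is joined to `bedrockbot_UUID` is an assumption here, so verify the final value against the service before allowlisting it:

```typescript
// Format sketch only: the service generates bedrockbot_UUID itself; this
// merely shows the shape of the value you would allowlist. The '-' used to
// join the optional suffix is an assumption, not a documented separator.
function userAgentHeader(uuid: string, suffix?: string): string {
  const base = `bedrockbot_${uuid}`;
  return suffix ? `${base}-${suffix}` : base;
}
```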