This Snap streams Tweets matching the search keyword for the authenticated user. Streamed Tweets are written to the output view and are available to downstream Snaps immediately. The output format is JSON.
|Support and limitations:|
This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Twitter Account for information on setting up this type of account. If an account has expired or become invalid, it is recommended that you edit the account and reauthorize it.
|Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.|
Search by keyword
The keyword to be searched on Twitter.
Timeout in seconds
Required. The timeout, in seconds, for streaming Tweets from Twitter.
This setting controls how long, in seconds, the Snap listens for Tweets pushed from Twitter. At the default of 60 seconds, the Snap waits 60 seconds before gracefully shutting down (which, depending on the configuration of the overall pipeline, may cause the entire pipeline to stop). Longer durations may be used, as may a value of 0 (zero). With 0, the Snap runs indefinitely, waiting for incoming Tweets, and does not terminate itself; to stop a pipeline that is listening indefinitely, use the "Stop Pipeline Execution" functionality in the Designer or Dashboard. On receiving a termination instruction, the Snap stops listening and closes its connection to Twitter before closing its output stream and stopping. Any downstream Snaps are also terminated and their connections closed.
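The timeout behavior described above can be sketched in Python. This is a minimal illustration, not the Snap's actual implementation: the function name, the iterator-based Tweet source, and the deadline logic are all assumptions used to show how a timeout of 0 differs from a positive value.

```python
import time

def stream_until_timeout(source, timeout_seconds):
    """Collect items from `source` (an iterator standing in for the
    Twitter stream) until `timeout_seconds` elapses.

    A timeout of 0 means listen indefinitely, mirroring the Snap's
    behavior; such a run must be stopped externally (the analogue of
    "Stop Pipeline Execution")."""
    # deadline is None when timeout is 0, so the loop never self-terminates
    deadline = None if timeout_seconds == 0 else time.monotonic() + timeout_seconds
    results = []
    for item in source:
        if deadline is not None and time.monotonic() >= deadline:
            break  # graceful shutdown once the listening window closes
        results.append(item)
    return results
```

With a positive timeout, the loop exits on its own once the window closes; with 0, it only ends when the incoming stream itself is closed from outside.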
The following use case illustrates the Twitter Streaming Search Snap.
Download the sample pipeline: Twitter Stream. In this pipeline, a Twitter stream is searched for a keyword (SnapLogic), and the results are sorted and written to a file. Because the pipeline is configured to run on a recurring basis, each subsequent run combines the latest search results with the data already in the file, sorts the combined data, and removes duplicate entries.
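The combine-sort-deduplicate step of the sample pipeline can be sketched as follows. This is an illustrative assumption of the logic, not the pipeline's Snaps themselves: the function name and the use of the Tweet `id` field as the deduplication key are hypothetical.

```python
def merge_stream_results(existing, latest, key="id"):
    """Combine the latest search results with previously saved data,
    remove duplicate Tweets (keyed on `key`), and return the merged
    set sorted by that key — the equivalent of the pipeline's
    sort-and-deduplicate stage."""
    seen = {}
    for tweet in existing + latest:
        seen[tweet[key]] = tweet  # later entries overwrite duplicates
    return sorted(seen.values(), key=lambda t: t[key])

# Data already in the file from a previous run, plus a new search result
# that partially overlaps it:
previous = [{"id": 2, "text": "b"}, {"id": 1, "text": "a"}]
latest = [{"id": 2, "text": "b"}, {"id": 3, "text": "c"}]
merged = merge_stream_results(previous, latest)
```

Each recurring run would feed the file's current contents in as `existing` and the new stream output as `latest`, then write `merged` back to the file.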